368450 | Efficient techniques for accurate modeling and simulation of substrate coupling in mixed-signal ICs
Abstract: Industry trends aimed at integrating higher levels of circuit functionality have triggered a proliferation of mixed analog-digital systems. Magnified noise coupling through the common chip substrate has made the design and verification of such systems an increasingly difficult task. In this paper we present a fast eigendecomposition technique that accelerates operator application in BEM methods and avoids the dense-matrix storage while taking all of the substrate boundary effects into account explicitly. This technique can be used for accurate and efficient modeling of substrate coupling effects in mixed-signal integrated circuits.
1 Introduction
Industry trends aimed at integrating higher levels of circuit functionality resulting from an
emphasis on compactness in consumer electronic products and a widespread growth and interest
in wireless communications, have triggered a proliferation of mixed analog-digital systems. Single
chip mixed-signal designs combining digital and analog blocks built over a common substrate,
provide reduced levels of power dissipation, smaller package count, as well as smaller package
interconnect parasitics. The design of such systems however, is becoming an increasingly difficult
task owing to the various coupling problems that result from the combined requirements for high-speed
digital and high-precision analog components. Noise coupling through the common chip
substrate, caused by the nonideal isolation the substrate provides, has been identified
as a significant contributor to the coupling problem in mixed-signal designs [3, 2, 11, 1]. Fast
switching logic components inject current into the substrate causing voltage fluctuation which
can affect the operation of sensitive analog circuitry through the body-effect, since the transistor
threshold is a strong function of substrate bias.
Perhaps the most common way to deal with these problems is to resort to costly trial and
error techniques. Clearly such a methodology, which requires the ability to fabricate multiple
versions of a design and relies heavily on the expertise and experience of the designer, is not
adequate in the face of rising fabrication costs and increasing demands for shorter design cycle
times [1]. Several approaches have been presented in the past to attempt to quantify the effects
of noise coupling through the substrate in order to account for and deal with it without the
need for expensive redesigns and multiple fabrication runs. Of these methods, those based on
an appropriate formulation of the substrate electromagnetic interactions which rely on detailed
numerical analysis are in general more reliable and accurate. Examples of 3D techniques include
Finite Element (FEM) and Finite Difference (FD) numerical methods for computing all the
currents and voltages in the substrate [2, 3, 4, 5, 6]. These techniques perform a full domain
discretization on the large but bounded substrate and can easily handle irregular substrates
(wells, doping profiles, etc). Unfortunately such methods are impractical for anything but simple
problems, since the number of unknowns resulting from the discretization is too large because
of volume-meshing of the entire substrate. Device simulators such as MEDICI and PISCES can
also be used for this task. However they are in general too slow, since they are meant to simulate
the drift-diffusion phenomenon in semiconductors, while we are interested only in the simple
resistive effects. Thus such methods are neither efficient nor versatile enough for implementation in
standard CAD systems.
Boundary-Element Methods (BEM) have been applied with some success to the problem of
modeling substrate coupling. BEM methods are very appealing for the solution of this type of
problem because, by requiring only the discretization of the relevant boundary features, they
dramatically reduce the size of the matrix to be solved. In [7] a Green's function for a two-layer
substrate without backplane is used. In effect this approach amounts to analyzing a chip with infinite
dimensions in the lateral directions. In [8] a distinct approach is taken in that point to point
impedances are precomputed which are then used to find the admittance matrix that describes
the contact configuration. An interpolation process is used to fit the preprocessed impedance
values computed at various contact distances to a power series of the distance between contacts.
In order to avoid edge effects the authors also assume infinite lateral chip dimensions. This technique
can be very efficient because numerical acceleration techniques can be applied to speed up
the computation of the impedance matrix coefficients. Accuracy can be compromised due to
the assumption of infinite lateral dimensions if some of the contacts are placed near the physical
walls of the substrate. In [10] a substrate Green's function is derived that takes into account the
actual properties of the domain and the problem. Repeated evaluation of this Green's function
is performed in order to obtain an accurate substrate model. However, direct computation of
the substrate Green's function is avoided and instead the computation is performed by means
of a 2D DCT (Discrete Cosine Transform), which in turn can be efficiently computed with an
FFT. Despite these improvements, the computational effort involved in computing a model using
this approach is still considerable and the method has difficulty handling problems with a large
number of contacts.
The matrices produced by BEM methods are dense which has limited the usage of such methods
to small to medium size problems. However, substrate coupling is in general a global problem
and its effect cannot be accurately predicted by analyzing small subsets of the layout. Therefore
speeding up the computations encountered in BEM formulations is of paramount importance
if accurate models for large substrate coupling problems are to be obtained. In this paper we
present a novel eigendecomposition method, used in a Krylov subspace solver, that eliminates
dense-matrix storage and speeds up operator-vector application significantly. This method is used
to speed up the computations necessary for computing substrate models in a BEM formulation
and allows for the extraction of substrate models in problems containing several hundred surface
unknowns.
In Section 2 we present some background into the problem of modeling substrate coupling and
specifically on BEM methods. Then, in section 3 we present our algorithm based on a functional
eigendecomposition of the substrate current to voltage operator and show how to use it to speed
up substrate extraction. Then we show that the resulting model is in a form that makes it easy
to incorporate into standard circuit simulators such as spice or spectre to perform coupled
circuit-substrate simulation. In section 4 we include examples that illustrate the efficiency and
accuracy of the techniques described. Finally, in section 5, we present some conclusions from our
work.
2 Background
2.1 Problem Formulation
For typical mixed-signal circuits operating at frequencies below a few gigahertz, the substrate
behaves resistively [11, 8]. Assuming this electrostatic approximation, the substrate is therefore
usually modeled as a stratified medium composed of several homogeneous layers characterized
by their conductivity, as shown in Figure 1-a).
On the top of this stack of layers a number of ports or contacts are defined, which correspond
to the areas where the designed circuit interacts with the substrate. Examples of these contacts
include possible noise sources or receptors, such as contacts between substrate or wells to supply
lines, drain/source/channel areas of transistors, etc. Figure 1-b) exemplifies the typical model
assumed for the substrate and examples of contact areas or terminals. The contacts on the
substrate top are usually assumed to be planar (bidimensional). The bottom of the substrate is
either attached through some large contact to some fixed voltage (usually ground) or left floating.
Substrates with backplane connectors are known to provide better isolation against coupling, but
are also more expensive [3].
In the electrostatic case the governing equations reduce to the well-known Laplace equation
$\nabla \cdot (\sigma \nabla \phi(\mathbf{r})) = 0$  (1)
inside the substrate volume, where $\phi$ is the electrostatic potential and $\sigma$ is the substrate conductivity,
which is assumed constant in each layer. Application of Green's theorem, assuming
a modified Green's function $G$ which accounts for the problem's boundary conditions, gives the
potential at some observation point $\mathbf{r}$ due to a unit current injected at some source point $\mathbf{r}'$:
$\phi(\mathbf{r}) = \int_V G(\mathbf{r},\mathbf{r}')\,\rho(\mathbf{r}')\,d\mathbf{r}'$  (2)
where $\rho$ is the source current density. For this case, all the source and observation points are
at the substrate contacts, which are on the top of the substrate and are assumed bidimensional.
The above volume integral thus reduces to a surface integral
$\phi(\mathbf{r}) = \int_S G(\mathbf{r},\mathbf{r}')\,\rho_s(\mathbf{r}')\,d\mathbf{r}'$  (3)
where $\rho_s$ is now the surface current density.
Figure 1: Cross-section of substrate showing: a) a 3D model as a homogeneous multilayered
system, b) the layered model, and some points of contact between the circuit and the substrate.
The usage of the medium's Green's function greatly simplifies the problem by implicitly taking
into account the boundary conditions, making it unnecessary to discretize the boundaries. The
substrate Green's function has been previously computed in analytical form and shown to be [11]
$G(x,y;x',y') = \sum_{m=0}^{\infty}\sum_{n=0}^{\infty} f_{mn}\,C_{mn}\cos\!\left(\frac{m\pi x}{a}\right)\cos\!\left(\frac{n\pi y}{b}\right)\cos\!\left(\frac{m\pi x'}{a}\right)\cos\!\left(\frac{n\pi y'}{b}\right)$  (4)
where a and b are the substrate lateral dimensions and the fmn can be computed with the help of
recursion formulas. In order to keep the description simple, we will refrain from reproducing the
exact formulas as they are somewhat cumbersome. For the exact expressions and their derivation
see [10] or [11].
2.2 Computing a Resistive Model
Once the Green's function is known, Eqn. (3) can be used to compute the potential at any
point with any current distribution on the substrate contacts. Given a set of S contacts, we
seek a model that relates the currents on those contacts, I c to their voltage distribution V c . In
practice, for reasons of accuracy it is necessary to discretize each of the contacts into a series of
panels. A set of equations relating the currents and potentials on all panels in the system is then
formulated as
$Z\,I_p = V_p$  (5)
Each entry in this impedance matrix is of the form
$Z_{ij} = \frac{1}{a_i a_j}\int_{S_i}\int_{S_j} G(\mathbf{r},\mathbf{r}')\,dS_j\,dS_i$  (6)
where $a_i$ and $a_j$ are the surface areas of panels $i$ and $j$, respectively.
Using (4) directly to compute the elements of Z, leads to a doubly infinite series which exhibits
slow convergence and is therefore computationally expensive. In [10] it was shown that this
computation can be performed in a very efficient way by truncating the series and noticing that
the resulting equation can be rewritten such that each of the matrix elements can be obtained
from careful combinations of appropriate terms of a two-dimensional DCT. Since a DCT can be
efficiently computed with an FFT, this technique leads to a significant speedup in computation
time.
The overall algorithm proceeds as follows: one of the contacts, j, is placed at one Volt, while
all the other contacts are set to zero Volts. This implies that all panels that are contained in
contact j will be set to a voltage of one Volt and all other panels to ground. Then Eqn. (5) can be
solved to obtain the corresponding current distribution. By appropriately adding the currents
corresponding to panels on the same contact one obtains the equivalent contact currents. This
procedure is equivalent to computing one column of the admittance matrix $Y_c$ defined as
$I_c = Y_c\,V_c$  (7)
By repeating this procedure for every contact j in the problem the substrate admittance model
$Y_c$ can be readily constructed. It should be noted that the model given by Eqn. (7) is a simple
resistive network where the contacts are the network nodes and entry $(i,j)$ of $Y_c$ represents
the conductance between nodes i and j. Thus, inclusion of such a model in a standard circuit
simulator such as spice or spectre is a trivial task.
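To make the extraction loop concrete, the following Python sketch builds $Y_c$ column by column from a dense panel impedance matrix. The function and variable names (extract_admittance, Z, panels_of_contact) are ours, and the dense direct solve is used only for clarity; the sections that follow are precisely about avoiding forming and factoring Z explicitly.

```python
import numpy as np

def extract_admittance(Z, panels_of_contact):
    """Illustrative sketch: build the contact admittance matrix Y_c from a
    dense panel impedance matrix Z, following the column-by-column scheme
    described above.  `panels_of_contact[j]` lists the panel indices that
    belong to contact j; all names here are assumptions of this sketch."""
    n = Z.shape[0]
    S = len(panels_of_contact)
    Yc = np.zeros((S, S))
    for j in range(S):
        # Set the panels of contact j to 1 V and all other panels to 0 V.
        Vp = np.zeros(n)
        Vp[panels_of_contact[j]] = 1.0
        # Solve Z I_p = V_p for the panel currents (Eqn. (5)).
        Ip = np.linalg.solve(Z, Vp)
        # Sum the currents of the panels belonging to each contact i to get
        # column j of Y_c (Eqn. (7)).
        for i in range(S):
            Yc[i, j] = Ip[panels_of_contact[i]].sum()
    return Yc
```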
The method outlined above can quickly become computationally very costly if the number of
panels is large. Also the cost of storing the dense matrices resulting from the discretization is
in itself a problem. For anything but very small problems one of the dominant factors in the
computational cost will be that of solving the system in Eqn (5) S times. If Gaussian elimination
(i.e. LU-factorization) is used then this cost will be $O(n^3)$, which is overwhelming for typical values
of n. Methods for speeding up the solution of this problem are however well known and have in
fact been applied to substrate extraction [8]. Iterative algorithms and namely Krylov-subspace
algorithms can be used to speed up the computation of (5). An example of such methods is the
Generalized Minimum Residual algorithm, GMRES [12]. GMRES solves the linear system by
minimizing the norm of the residual $r^k = V_p - Z I_p^k$ at each iteration $k$ of the iterative process.
The major cost of these algorithms is the computation of a matrix-vector product which is
required at each iteration. Thus, if the number of iterations does not grow too rapidly and is
kept small, the total cost of obtaining the substrate admittance model is $O(S\,K_G\,n^2)$, where $K_G$
is the average number of GMRES iterations per solution. In [8] this cost is further decreased
to $O(S\,K_P\,n)$ by acceleration of the matrix-vector product using the hierarchical multipole
algorithm [13]. However this is only possible due to the simplifications assumed in computing
Z, specifically the translation invariance of the "implied" Green's function in this case, and is
thus not a general procedure. Therefore general methods have to be devised to accelerate the
computation of Eqn. (5) in order to be able to handle problems involving several hundreds of
substrate contacts with high accuracy.
3 Sparsification via Eigendecomposition
The extraction algorithm that we propose in this paper for computing a substrate model is
based on the above method for computing an admittance representation of the substrate. The
iterative algorithm GMRES is used to solve Eqn. (5), but direct computation of the matrix-vector
product $Z I_p^k$ is avoided. This operation corresponds in essence to computing a set of
average panel potentials given a substrate injected current distribution. This can readily be done
by means of an eigenfunction decomposition of the linear operator that relates injected currents
to panel potentials. As we shall see, this computation can be performed very efficiently by means
of 2D DCT's.
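The overall structure of such a matrix-free solve can be sketched in Python with SciPy's GMRES as follows; apply_L stands for whatever routine returns the panel potentials produced by a vector of panel currents (in our method, the DCT-based application of Section 3.3), and all names here are illustrative assumptions rather than the actual implementation.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def solve_for_contact(apply_L, n_panels, panels_of_contact, j):
    """Sketch of one matrix-free solve of Eqn. (5): `apply_L(Ip)` must return
    the average panel potentials produced by the panel currents Ip (i.e. the
    product Z @ Ip) without ever forming Z."""
    op = LinearOperator((n_panels, n_panels), matvec=apply_L, dtype=float)
    Vp = np.zeros(n_panels)
    Vp[panels_of_contact[j]] = 1.0      # contact j at 1 V, all others at 0 V
    Ip, info = gmres(op, Vp)            # GMRES only needs operator application
    if info != 0:
        raise RuntimeError("GMRES did not converge")
    return Ip                           # panel currents for column j of Y_c
```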
3.1 Computation of the Operator Eigenpairs
The first step in our method is to discretize the substrate top surface in small rectangular
panels, where small means that the current that flows across them can be considered uniformly
distributed in each one. Considering the top of the substrate as a 2D surface, this discretization
leads to an $M \times N$ set of panels or cells (usually $M = N$ since the substrate is square, but for
the sake of generalization in our derivation we will allow different discretizations in each axis).
With this approximation the current distribution in the M by N panels can be represented by
the following equation (in the remainder we will use the notation q for current instead of the
more usual i due to the obvious parallel with the capacitive problem in the electrostatic case):
$q(x,y) = \sum_{m}\sum_{n} q_{mn}\,\Pi_{mn}(x,y)$  (8)
where $q_{mn}$ is the total current at panel $(m,n)$ and $\Pi_{mn}(x,y)$ is a square-bump function supported on panel $(m,n)$: it is 0
throughout the plane except on a rectangle of size $a/M \times b/N$ (with corners at $(\pm a/2M, \pm b/2N)$ relative to the panel center),
where it takes a constant amplitude of $MN/ab$ so that it has unit volume.
In essence it serves as an averaging function.
Let us now assume that the current distribution function $q(x,y)$ can be represented (decomposed)
by a sum of functions of the form
$q(x,y) = \sum_{i}\sum_{j} a_{ij}\,\varphi_{ij}(x,y)$  (9)
where the $\varphi_{ij}$ are the functions and $a_{ij}$ are the coefficients of the decomposition.
If the $\varphi_{ij}$ are the eigenfunctions of the linear operator $L$ which takes us from currents to
potentials, then by definition, the potential can be written as
$\phi(x,y) = \sum_{i}\sum_{j} a_{ij}\,\lambda_{ij}\,\varphi_{ij}(x,y)$  (10)
where the $\lambda_{ij}$ are the eigenvalues of $L$. Therefore, if the eigenpairs (eigenfunctions and eigenvalues)
of the operator L implied by Poisson's equation are known, and an eigendecomposition of the
injected substrate currents can be obtained such as (9), then the potentials are trivially obtained
from (10).
For the substrate problem, the eigenfunctions of the impedance operator L can be derived
from Poisson's equation and the knowledge of the boundary conditions. In fact, since the first
derivative of the potential across the lateral boundaries must be 0, cosines are good candidates
to represent the current distribution. The particular form of the functions would then be
$\varphi_{mn}(x,y) = \cos\!\left(\frac{m\pi x}{a}\right)\cos\!\left(\frac{n\pi y}{b}\right)$  (11)
To see if these functions are indeed eigenfunctions of L, we substitute the current distribution in
this equation to find if the potential can be written as in (10). Since all panels are at the top of
the substrate, the current distribution on the contacts is represented along the z axis as a delta
function at z = 0. Poisson's equation can then be written as
$\left(\frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2} + \frac{\partial^2}{\partial z^2}\right)\Phi(x,y,z) = -\frac{1}{\sigma}\sum_{m}\sum_{n} a_{mn}\,\varphi_{mn}(x,y)\,\delta(z)$  (12)
where the potential is a function of $x$, $y$ and $z$. The 3D potential is written as
$\Phi(x,y,z) = \sum_{m}\sum_{n} a_{mn}\,\lambda_{mn}(z)\,\varphi_{mn}(x,y)$  (13)
where the $\lambda_{mn}(z)$ evaluated at $z = 0$ are the eigenvalues of $L$. Substituting (13) in (12), computing the
derivatives, multiplying both sides by $\cos(m\pi x/a)\cos(n\pi y/b)$, and integrating in $x$ from 0 to $a$ and
in $y$ from 0 to $b$, we finally get a second order differential equation for the eigenvalues $\lambda_{mn}(z)$:
$\frac{d^2\lambda_{mn}}{dz^2} - \gamma_{mn}^2\,\lambda_{mn} = -\frac{1}{\sigma}\,\delta(z)$  (14)
where
$\gamma_{mn}^2 = \left(\frac{m\pi}{a}\right)^2 + \left(\frac{n\pi}{b}\right)^2.$  (15)
The solution of Eqn. (14) can be readily obtained in closed form; for $m,n \neq 0$ it is an expression in
$\sigma_L$, $\gamma_{mn}$, $\beta_L$, $\Gamma_L$ and hyperbolic functions of $\gamma_{mn} d$ (Eqn. (16)),
where $L$ is the number of layers in the substrate profile with resistivities $\sigma_1,\ldots,\sigma_L$ and $d$ is
its thickness. For $m = n = 0$ the expression reduces to one involving $\sigma_L$ and $\beta_L$ (Eqn. (17)).
The values of $\Gamma_L$ and $\beta_L$ can be computed in a recursive manner as in [10]. It should come as no
surprise that the derivations are fairly similar given the definition of Green's function. However,
in [10] they were used to evaluate the Green's function between two panels, while here we use
them directly to expand global functions defined over the entire substrate.
At this point we have shown that the candidate functions introduced in Eqn. (11) are indeed the
impedance operator's eigenfunctions for the substrate problem. Furthermore we have computed
the eigenvalues of the operator to be given by (16) and (17).
3.2 Eigendecomposition Representation for Panel Potentials
Given this information we can now go back to Eqn. (9) and obtain the eigenfunction decomposition
for the current distribution. To that end, we need to determine a ij , which we can easily
do by multiplying both sides of (9) by $\varphi_{kl}(x,y)$, followed by integration in $x$ between 0 and $a$ and
in $y$ between 0 and $b$. This yields
$a_{kl} = C_{kl}\int_0^a\!\!\int_0^b q(x,y)\,\cos\!\left(\frac{k\pi x}{a}\right)\cos\!\left(\frac{l\pi y}{b}\right)dx\,dy$  (18)
with $C_{kl}$ the appropriate normalization constant.
Since $q(x,y)$ is constant by rectangles, as given by (8), substituting and carrying out the integration we obtain, after some algebra,
$a_{ij} = W_{ij}\sum_{m=0}^{M-1}\sum_{n=0}^{N-1} q_{mn}\,\cos\!\left(\frac{i\pi(2m+1)}{2M}\right)\cos\!\left(\frac{j\pi(2n+1)}{2N}\right)$  (19)
where the scale factors $W_{ij}$ collect terms in $ab$, $\pi i$, $\pi j$, $\sin(i\pi/2M)$ and $\sin(j\pi/2N)$ that arise from integrating the cosines over each panel (20).
Now that the potential distribution on the top of the substrate, given by Eqn. (10) is known,
the average potential in each panel can easily be computed. In fact, the average potential in
any panel can be obtained by taking the inner product between Eqn.(10) and the square-bump
function supported over the given panel. The result of this operation is easily shown to be
$\Phi_{pq} = \sum_{i}\sum_{j} \lambda_{ij}\,a_{ij}\,\widetilde{W}_{ij}\,\cos\!\left(\frac{i\pi(2p+1)}{2M}\right)\cos\!\left(\frac{j\pi(2q+1)}{2N}\right)$  (21)
where the scale factors $\widetilde{W}_{ij}$ again collect terms in $\pi i$, $\pi j$, $\sin(i\pi/2M)$ and $\sin(j\pi/2N)$ arising from averaging the cosines over panel $(p,q)$ (22).
Numerical evaluation of the average panel potential amounts to truncating Eqn. (21) in order
to compute a finite number of terms. The size of the summation is controlled by
the number of coefficients a ij available, and therefore by the number of cosine modes used in the
eigendecomposition.
3.3 Efficient Computation of the Panel Potentials
Equation (21) allows us to compute the average panel potentials given any arbitrary current
distribution on the top of the substrate. In order to accomplish this task, one has to compute the
individual $a_{ij}$ coefficients using (19), multiply them appropriately by the operator's eigenvalues,
and compute the double summation indicated in (21). Inspection of Eqn. (19) reveals that, for
$0 \le i < M$ and $0 \le j < N$, the coefficients $a_{ij}$ are, up to a scaling factor, the result of a 2D type-2 DCT on the
set $q_{mn}$. Such an operation can be efficiently performed by means of an FFT. Furthermore, after
multiplication by the eigenvalues, computation of the average potentials from (21), assuming
truncation of the summation, again amounts, up to a scaling factor, to the computation of an
inverse 2D type-2 DCT on the set of scaled coefficients $\lambda_{ij}\,a_{ij}$.
If at some point it becomes necessary to increase the accuracy of the potential computation,
the standard way to accomplish this is to further refine the substrate discretization. However this
directly affects the number of panels in the system and would increase total computation time
and storage. In our method, higher accuracy is obtained, without refining the discretization, by
increasing the size of the eigendecomposition, i.e., by employing more cosine modes. Apparently
such an alternative would preclude the usage of the efficient FFT algorithm for Eqn. (19) since
there would now be more $a_{ij}$ coefficients than panels (i.e., $q_{mn}$ coefficients). However, by using
the symmetry properties of the DCT, it can be shown that all $a_{ij}$ can be related
to the coefficients of the first $M \times N$ cosine modes, i.e., those with $i < M$ and $j < N$. This process is termed
unfolding. Thus by simple computation of the DCT implied in (19), it is possible to obtain an
arbitrary number of cosine mode coefficients without incurring any substantial extra cost. By
a similar argument refolding of these coefficients can be performed to obtain the average panel
potentials from (21). Specifically,
the refolded panel potentials are obtained from the 2D DCT of $q_{mn}$, as seen in (19), multiplied
by weights built from the eigenvalues and from factors of the form $\sin^2(i\pi/2M)$ and $\sin^2(j\pi/2N)$
(together with the appropriate $ab$ and $\pi$ scalings), where an appropriate number of terms is
used.
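A minimal sketch of this DCT-based operator application, assuming a uniform M x N panel grid and a precomputed array lam of eigenvalues, might look as follows in Python; the unfolding/refolding and the exact scale factors of Eqns. (19)-(22) are deliberately glossed over, so this is illustrative only.

```python
import numpy as np
from scipy.fft import dctn, idctn

def make_apply_L(lam):
    """Return a function that maps panel currents to average panel potentials
    using the eigendecomposition: forward 2D type-2 DCT, scaling by the
    operator eigenvalues, inverse 2D type-2 DCT.  `lam` is an M x N array of
    precomputed eigenvalues lambda_ij (Eqns. (16)-(17)); names are ours."""
    M, N = lam.shape

    def apply_L(q_flat):
        q = q_flat.reshape(M, N)                      # panel currents q_mn
        a = dctn(q, type=2, norm='ortho')             # coefficients a_ij (up to scaling)
        phi = idctn(lam * a, type=2, norm='ortho')    # average panel potentials
        return phi.ravel()

    return apply_L
```

This is exactly the matvec that the GMRES sketch of Section 3 can use in place of an explicit multiplication by the dense matrix Z.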
3.4 Memory and Cost Comparison
The eigendecomposition algorithm just described is extremely efficient both in terms of memory
usage and in terms of computational cost when compared to the Green's function based algorithm
for similar accuracies. Without loss of generality, and in order to simplify the description, we will
now assume that the discretization of the substrate is such that the number of panels in each
direction is the same, that is, $M = N$.
The memory requirements for the eigendecomposition method are $O(k\,M^2)$ space to store the
eigenvalues of the system, the DCT coefficients and the vectors necessary for the computation
of the GMRES algorithm, and $O(S^2)$ for the final admittance model (assuming S contacts are
being used). These requirements should be compared to the Green's function based methods.
The storage requirements for those methods are $O(P^2 + n^2 + S^2)$, where $P$ is the size
of the 2D grid used in the DCT and resulting from the panel discretization, $n$ is the number of
panels and $S$ the number of contacts (usually $n \gg P$ and $n \gg S$). In [10] it is indicated that
Z does not need to be explicitly computed, thus reducing the memory requirements. However
this is done at the expense of increasing the computation cost, because the elements of $Z$ have
to be repeatedly assembled from the result of the 2D DCT sequence. The interesting case to
consider is when the density of contacts is large, as is common in typical designs where area is a
major concern. In that case $n$ can be a large percentage of $P^2$, which is the maximum number of
contacts defined on a $P \times P$ grid. Typically $M$ and $P$ are of comparable magnitude for reasons
of accuracy, even though, as we saw, in the eigendecomposition method one can increase accuracy
by increasing $M$. But even assuming that $M$ and $P$ are comparable, if $n \approx 20\%\,P^2$, which
is in fact a small density of contacts, the storage requirements of the Green's function methods
are then $O(P^2 + n^2 + S^2)$ with $n^2 \approx 0.04\,P^4$. For large enough $P$ this term will dominate
the storage cost, and the Green's function method will require substantially more space than the
eigendecomposition method, whose cost is always $O(P^2)$, albeit with a large constant.
The computational cost of both methods can also be compared. For the Green's function
method this cost is $O(2P^2\log(P) + S\,K_G\,n^2)$,
where the first term corresponds to the 2D type-1 DCT and the second term corresponds to
stenciling the $Z$ matrix and to the multiple system solutions if GMRES is used. It is easily seen
that the second term always dominates the total cost. In the eigendecomposition method, the
total cost is $O(2M^2\log(M)\,S\,K_E)$. Experience shows that $K_E$ and $K_G$, the average number of
GMRES iterations per contact in each method, are comparable. Thus the difference between
the cost of the two methods relies on the comparison of $M^2\log(M)$ to $n^2$. As we saw
previously, even for designs with a sparse set of contacts ($\alpha \approx 1\%$), $n^2 \gg M^2$, which implies
that the eigendecomposition method is almost always much more efficient.
4 Experimental Results
In this section we present examples that show the accuracy and efficiency of the substrate
coupling extraction algorithm presented in this paper.
We will use as an example a layout from a simple mixed-signal circuit. Figure 2-a) shows the
layout for the example problem where the substrate contacts are marked and numbered. As can
be seen from the figure this example has 52 contacts, so it would fit in the category of small to
medium size problem.
Two experiments, using different substrate profiles were conducted on this example layout in
order to test the versatility, accuracy and efficiency of the extraction algorithm. The profiles
used were taken from [10] and are described in Figure 2-b). The high-resistivity substrate is used
in various BiCMOS processes, while the low-resistivity substrate is used in CMOS due to its
latch-up suppressing properties. For each of the substrate profiles, extraction was performed and
a resistive model was obtained in the form of a matrix relating the resistance from every contact
to every other contact.
Figure 2: Example problem showing a significant number of substrate contacts in a mixed-signal
design: a) symbolic layout (contacts marked and numbered); b) substrate cross-section profiles used for the low and high resistivity
substrates (p-type layers of 0.1 Ohm.cm, 15 Ohm.cm and 1 Ohm.cm, each profile terminated by a backplane).
Table 1 shows a selected set of relevant resistances computed using both the method based on
direct application of the substrate Green's function and our eigendecomposition based method
for the case of the low-resistivity profile. As seen from the table the accuracy of both methods is
comparable. Similar accuracies were noted when the high-resistivity profile was employed. We
would like to point out that in fact it is possible to obtain for both methods the exact same
values, up to machine precision, by using the same uniform discretization.
As can be seen from Figure 2-a) the contacts for this problem are of varying dimensions, which
is typical of mixed-signal designs. Accuracy constraints will limit the discretization employed for
producing the set of panels that describe the problem. In this example, for both substrate profiles,
usage of a uniform discretization for the Green's function method would produce a problem with
too many panels and the computation time and memory requirements would be overwhelming.
Therefore an efficient non-uniform discretization algorithm was devised and employed in this
problem. The results in Table 2 show that the discretization using this algorithm produces a
relatively small number of panels.
Name | Contact 1 | Contact 2 | Green's func (non-unif. disc.) | Eigendecomp. (unif. disc.)
R400 | 9 | 21 | 5.55422e+06 | 7.46023e+06
Table 1: Selected set of extracted resistances for the example layout using the low-resistivity
substrate. The node numbers refer to the respective contact numbers; one of the nodes
refers, in this example, to the grounded backplane contact.
This is a very important observation because it implies that an efficient discretization algorithm
has to be developed if the Green's function method is to be applied for extraction. On the other
hand, the eigendecomposition method, as described in section 3, requires a uniform discretization
which is easier to implement and, as we shall see does not imply any loss of efficiency. Although
perhaps not the most advantageous from our point of view, this comparison seems fair and closer
to reality.
Table 2 summarizes the relevant parameters obtained for the extraction applied to both the
low and high-resistivity profile examples, such as the number of panels obtained, the memory
used, the CPU time necessary, etc. For the examples shown, in order to compute the solution
using the Green's function based method, GMRES was used to solve Eqn. (5) since usage of LU
factorization would take too long. Also, in order to maintain similar accuracy between methods
the minimal discretization used for each method was different. In particular a larger DCT was
necessary for the Green's function based method. However, the cost of computing the DCT is
not very relevant relative to the total cost, as seen in Table 2.
As can be seen from Table 2 the memory requirements (shown for the high-resistivity case) for
the eigendecomposition algorithm are much smaller. A factor of almost 6 was obtained in terms
of memory savings. For the low-resistivity profile similar results are to be expected.
In terms of computational cost a factor of over 6 speedup was obtained for the low-resistivity
substrate profile, and a speedup of almost 15 was recorded for the high-resistivity substrate, which
leads to a harder numerical problem.

Value | Green's func (non-unif. disc., low-res.) | Eigendecomp. (uniform disc., low-res.) | Green's func (non-unif. disc., high-res.) | Eigendecomp. (uniform disc., high-res.)
Number of contacts | 52 | 52 | 52 | 52
Number of panels | 2547 | 17764 | 2547 | 17764
Average # panels/contact | 50 | 341 | 50 | 341
Size of DCT | 512 x 512 | 256 x 256 | 512 x 512 | 256 x 256
Memory usage | - | - | 144.6 MB | 25.5 MB
Number of GMRES iterations | 1238 | 1868 | 8030 | 2930
Average per contact | 23 | 35 | 154 | 56
Computation times (seconds on an Ultra Sparc 1):
discretization | 0.06 | 0.54 | 0.06 | 0.54
Green's function DCT | 12.90 | N/A | 10.33 | N/A
Total setup time | 14241.5 | 12.5 | 14241.1 | 9.92
Solve cost (GMRES) | 16656.8 | 4111.4 | 107278 | 6450.05
Total extr. time | 30965.5 | 4994.9 | 123630 | 8405.64
Table 2: Summary of the relevant parameters obtained for the extraction of the example problem
for both substrate profiles.

We point out that for the Green's function method, the Z
matrix relating the direct couplings between panels was explicitly computed and stored. While
this is inefficient in terms of memory, the savings in computational time more than compensate
for it, so a compromise was taken here. As can be seen in the tables, the cost of computing
this matrix, which dominates the "Total setup cost" values shown, is non-trivial, especially in the
low-resistivity substrate. In either case a significant speedup was obtained using the eigendecomposition
method, which coupled with its efficient memory usage, makes it a very appealing
method for medium to large high-density substrate extraction problems.
5 Conclusions
In this paper we reviewed some of the commonly used techniques for extracting and generating
accurate models for substrate coupling, that are amenable to efficient circuit level simulation.
A new eigendecomposition-based technique to perform this extraction was presented. Examples
that show the relevance, accuracy and efficiency of this substrate coupling extraction algorithm
were presented. A speedup of over 15 was obtained when comparing the new extraction method
with direct usage of the problem's Green's function for some substrate profiles. This result, coupled
with significant reductions in memory usage, makes the method presented here very interesting
for this problem.
References
How to deal with substrate noise in analog cmos circuits.
Chip substrate resistance modeling technique for integrated circuit design.
Experimental results and modeling techniques for substrate noise in mixed-signal integrated circuits
A methodology for rapid estimation of substrate-coupled switching noise
Addressing substrate coupling in mixed-mode ic's: Simulation and power distribution systems
Extraction of circuit models for substrate cross-talk
Verification techniques for substrate coupling and their application to mixed-signal ic design
Modeling and analysis of substrate coupling in integrated circuits.
Modeling and analysis of substrate coupling in integrated circuits.
Modeling and Analysis of Substrate Coupling in Integrated Circuits.
GMRES: A generalized minimal residual algorithm for solving non-symmetric linear systems
Fast capacitance extraction of general three-dimensional structures
Keywords: Discrete Cosine Transform; mixed-signal; eigenfunction; substrate coupling; Fast Fourier Transform; eigenvalue; eigenpair
369138 | Slicing Software for Model Construction
Abstract: Applying finite-state verification techniques (e.g., model checking) to software requires that program source code be translated to a finite-state transition system that safely models program behavior. Automatically checking such a transition system for a correctness property is typically very costly, thus it is necessary to reduce the size of the transition system as much as possible. In fact, it is often the case that much of a program's source code is irrelevant for verifying a given correctness property. In this paper, we apply program slicing techniques to remove automatically such irrelevant code and thus reduce the size of the corresponding transition system models. We give a simple extension of the classical slicing definition, and prove its safety with respect to model checking of linear temporal logic (LTL) formulae. We discuss how this slicing strategy fits into a general methodology for deriving effective software models using abstraction-based program specialization.
1 Introduction
Modern software systems are highly complex, yet they must
be extremely reliable and correct. In recent years, finite-state
verification techniques, including model checking tech-
niques, have received much attention as a software validation
method. These techniques have been effective in validating
crucial properties of concurrent software systems in a variety
of domains including: network protocols [23], railway
interlocking systems [5], and industrial control systems [3].
Despite this success, the high cost of automatically checking
a given correctness property against a software system
(which typically has an enormous state space) casts doubt
on whether broad application of finite-state verification to
software systems will be cost-effective.
Most researchers agree that the best way to attack the
state-explosion problem is to construct a finite-state transition
system that safely abstracts the software semantics
[7, 10, 26]. The transition system should be small enough
to make automatic checking tractable, yet it should be large
enough to capture all information relevant to the property
being checked. One of the primary difficulties is determining
which parts of the program are relevant to the property
being checked. In this paper, we show how slicing can automatically
throw away irrelevant portions of the software
code, and hence safely reduce the size of the transition systems
that approximate the software's behavior.
* Supported in part by NSF and DARPA under grants CCR-9633388, CCR-9703094, CCR-9708184, and NASA under award NAG 21209.
† Supported in part by NSF under grant CCR-9701418, and NASA under award NAG 21209.
We envision slicing as one of a collection of tools for
translating program source code to models that are suitable
for verification. We previously illustrated how techniques
from abstract interpretation and partial evaluation
can be integrated and applied to help automate construction
of abstract transition systems [11, 20, 21]. Applying
these techniques on several realistic software systems [12, 13]
has revealed an interesting interaction between slicing and
abstraction building: people currently perform slicing-like
operations manually to determine the portions of code that
are relevant for verifying a given property. Thus, preprocessing
software using slicing before applying partial-evaluation-
based abstraction techniques can: (i) provide a safe approximation
of the relevant portions of code, (ii) enable scaling of
current manual techniques to significantly larger and more
complex systems, (iii) reduce the number of components for
which abstractions must be selected and help guide that se-
lection, and (iv) reduce the size of the program to be treated
by abstraction-based partial evaluation tools.
This work is part of a larger project on engineering high-assurance
software systems. We are building a set of tools
that implements the transition system construction methodology
above for Ada and Java. In this paper, we use a simple
flowchart language in order to formally investigate fundamental
issues. We have implemented a prototype for the
slicing system in the paper, and based on this we are scaling
up the techniques. We refer the reader to the project
web-site http://www.cis.ksu.edu/santos/bandera for the
extended version of this paper (which contains more exam-
ples, technical extensions, and proofs), for the prototype,
and for applications of the abstraction techniques to concurrent
Ada systems.
In the next section, we describe the flowchart language
that we use throughout the paper. We then present, in
Section 3, the definition of slicing for this language. We discuss
a specific finite-state verification technique, LTL model
checking, and our approach to constructing safely abstracted
transition systems from source code in Section 4. Section 5
describes how slicing can be applied as a pre-phase to transition
system construction. Section 6 sketches several methods
for deriving slicing criteria from temporal logic specifications
based on the shape of commonly-used formula pat-
terns. Section 7 discusses related work on slicing, and Section
8 summarizes and concludes with a description of future
work.
2 The Flowchart Language FCL
2.1 Syntax
We take as our source language the simple flowchart language
FCL of Gomard and Jones [18, 25, 19]. Figure 1
presents an FCL program that computes the power func-
tion. The input parameters of the program are m and n. These
variables can be referenced and assigned to throughout the
program. Other variables such as result can be introduced
at any time. The initial value of a variable is 0. The output
of program execution is the state of memory when the
return construct is executed.
Figure 2 presents the syntax of FCL. FCL programs are
essentially lists of basic blocks. The initial basic block to be
executed is specified immediately after the parameter list. In
the power program, the initial block is specified by the line
(init). Each basic block consists of a label followed by a (possibly
empty) list of assignments (we write $\Delta$ for the empty
list, and this is elided when the list is non-empty). Each
block concludes with a jump that transfers control from that
block to another one. Instead of including boolean values,
any non-zero value represents true and zero represents false
in the test of conditionals.
In the presentation of slicing, we need to reason about
nodes in a statement-level control-flow graph (CFG) (i.e., a
graph where there is a separate node for each assignment
and jump) for a given program p. We will assume that each
statement has a unique index i within each block. Then,
nodes can be uniquely identified by a pair [l:i] where l is
block label and i is an index value. In Figure 1, statement
indices are given as annotations in brackets [\Delta]. For exam-
ple, the second assignment in the loop block has the unique
identifier (or node number) [loop:2].
The following definition introduces notions related to
statement-level control-flow graphs.
• A flow graph $G = (N, E, s, e)$ consists of a set N of
statement nodes, a set E of directed control-flow edges,
a unique start node s, and a unique end node e.
• The inverse $G^{-1}$ of a flowgraph $(N, E, s, e)$ is the flowgraph
$(N, E^{-1}, e, s)$ (i.e., all edges are reversed and
the start/end nodes are swapped).
• Node n dominates node m in G (written dom(n, m)) if
every path from the start node s to m passes through n
(note that this makes the dominates relation reflexive).
• Node m post-dominates node n in G
(written post-dom(m, n)) if every path from node n to
the end node e passes through m (equivalently, dom(m, n)
in $G^{-1}$).
• Node n is control-dependent on m (some intuition follows
this definition) if
1. there exists a non-trivial path p from m to n such
that every node $m' \in p \setminus \{m, n\}$ is post-dominated
by n, and
2. m is not post-dominated by n [33].
We write cd(n) for the set of nodes on which n is
control-dependent.
Control dependence plays an important role in the rest of
the paper. Note that for a node n to be control-dependent
on m, m must have at least two exit edges, and there must
be two paths that connect m with e such that one contains
n and the other does not. For example, in the power
program of Figure 1, [loop:1], [loop:2], and [loop:3] are
control-dependent on [test:1], but [end:1] is not since it
post-dominates [test:1] (i.e., all paths from [test:1] to halt
go through it).
Extracting the CFG from an FCL program p is straight-
forward. The only possible hitch is that some programs
do not satisfy the "unique end node" property required by
the definition (for example, the program may have multiple
return's). To work around this problem, we assume that
when we extract the CFG from a program p, we insert an
additional node labeled halt that has no successors and its
predecessors are all the return nodes from p.
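As an illustration (not the tool's implementation), the control-dependence relation cd(n) can be computed from post-dominator sets using the standard characterization, which is equivalent to the path-based definition above; the following Python sketch assumes every node except the halt node has at least one successor, and all names in it are ours.

```python
def post_dominators(nodes, succ, end):
    """Iterative post-dominator computation on a statement-level CFG.
    `succ[n]` lists the successors of node n; `end` is the unique halt node."""
    pdom = {n: set(nodes) for n in nodes}
    pdom[end] = {end}
    changed = True
    while changed:
        changed = False
        for n in nodes:
            if n == end:
                continue
            new = {n} | set.intersection(*(pdom[s] for s in succ[n]))
            if new != pdom[n]:
                pdom[n], changed = new, True
    return pdom   # pdom[n] = nodes that post-dominate n (including n itself)

def control_dependence(nodes, succ, end):
    """cd(n): the nodes m on which n is control-dependent, via post-dominators."""
    pdom = post_dominators(nodes, succ, end)
    cd = {n: set() for n in nodes}
    for m in nodes:
        if len(succ[m]) < 2:          # only branching nodes can control others
            continue
        for n in nodes:
            if n == m or n in pdom[m]:
                continue              # n must not post-dominate m
            if any(n in pdom[s] for s in succ[m]):
                cd[n].add(m)          # n post-dominates one of m's branches
    return cd
```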
2.2 Semantics
The semantics of an FCL program p is expressed as transitions
on program states $([l{:}n], \sigma)$ where l is the label of a
block in p, n is the index of the statement in block l, and $\sigma$
is a store mapping variables to values. A series of transitions
gives an execution trace through a program's statement-level
control flow graph. For example, Figure 3 gives a trace of the
power program computing $5^2$. Formally, a trace is a finite¹ non-empty
sequence of states written $\pi = s_0 s_1 s_2 \ldots$. We write $\pi_i$
for the suffix starting at $s_i$, i.e., $\pi_i = s_i s_{i+1} \ldots$.
We omit a formal definition of the transition relation for
FCL programs since it is intuitively clear (a formalization
can be found in [19, 20]).
3 Slicing
3.1 Program slices
A program slice consists of the parts of a program p that
(potentially) affect the variable values that flow into some
program point of interest [31]. A slicing criterion
$C = (n, V)$ specifies the program point n (a node in p's CFG)
and a set of variables V of interest.
For example, slicing the power program with respect to
the slicing criterion $([loop{:}2], \{n\})$ yields the program in
Figure 4. Note that the assignments to variables m and
result and the declaration of m as an input parameter have
been sliced away since they do not affect the value of n at
line [loop:2]. In addition, block init is now trivial and can
be removed, e.g., in a post-processing phase.
Slicing a program p yields a program ps such that the
traces of ps are projections of corresponding traces of p. For
example, the following trace of ps is a projection of the trace
¹ Here, we consider only finite traces (corresponding to terminating
executions). The extended version of the paper treats infinite
executions, which are best expressed using co-inductive reasoning.
(m n)
(init)
init: result := 1;            [1]
      goto test;              [2]
test: if <(n 1)               [1]
      then end
      else loop;
loop: result := *(result m);  [1]
      n := -(n 1);            [2]
      goto test;              [3]
end:  return;                 [1]
Figure 1: An FCL program to compute m^n
Domains:
  l  ∈ Block-Labels[FCL]
  a  ∈ Assignments[FCL]
  al ∈ Assignment-Lists[FCL]
Grammar (fragment):
  al ::= a al | $\Delta$
  a  ::= x := e;
Figure 2: Syntax of the Flowchart Language FCL
Figure 3: Trace of the power program with m = 5 and n = 2.
(n)
(init)
init: goto test;              [2]
test: if <(n 1)               [1]
      then end
      else loop;
loop: n := -(n 1);            [2]
      goto test;              [3]
end:  return;                 [1]
Figure 4: Slice of power with respect to criterion ([loop:2], {n})
in Figure 3.
Intuitively, a trace $\pi_2$ is a projection of a trace $\pi_1$ if the
sequence of program states in $\pi_2$ can be embedded into the
sequence of states in $\pi_1$. To formalize this, let $\sigma|_V$ denote
the restriction of the domain of $\sigma$ to the variables in $V$.
Then, the definition of projection is as follows.
Definition 2 (projection) Let p be a program. A projection
function $\downarrow[M, \nu]$ for p-traces is determined by
• a set of nodes M from p's CFG, and
• a function $\nu$ that maps each node in M to a set of
variables V,
and is defined by induction on the length of traces: the head state $([l{:}i], \sigma)$ of the trace is kept, with its store restricted to $\nu([l{:}i])$, if $[l{:}i] \in M$ and is dropped otherwise, and the projection is then applied to the rest of the trace.
In the classical definition [31, 32] of slicing criterion, one
specifies exactly one point node of interest in the CFG along
with a set of variables of interest at that node. This was the
case with the example slice of the power program above.
For our applications, we may be interested in multiple
program points, and so we generalize the notion of slicing
criterion as follows.
Definition 3 (slicing criterion) A slicing criterion C for
a program p is a non-empty set of pairs $\{(n_1, V_1), \ldots, (n_k, V_k)\}$,
where each $n_i$ is a node in p's statement flow-graph and $V_i$
is a subset (possibly empty) of the variables in p. The nodes
$n_1, \ldots, n_k$ are required to be pairwise distinct.
Note that a criterion C can be viewed as a function from
$\{n_1, \ldots, n_k\}$ to $\wp(\mathrm{Variables}[\mathrm{FCL}])$. In this case, we write
domain(C) to denote $\{n_1, \ldots, n_k\}$. Thus, a slicing criterion
C determines a projection function $\downarrow[\mathrm{domain}(C), C]$, which
we abbreviate as $\downarrow[C]$. We can now formalize the notion of
program slice.
Definition 4 (program slice) Given program p with an
associated CFG, let C be a slicing criterion for p. Then
a program ps (also called the residual program) is a slice of p
with respect to C if for any p execution trace $\pi = s_0 s_1 \ldots$,
$\downarrow[C](\pi) = \downarrow[C](\pi_s)$,
where $\pi_s$ is the execution trace of ps running with initial
state $s_0$.
For example, let $\pi$ and $\pi_s$ be the execution traces of the power program (Figure 1)
and the slice of the power program (Figure 4), respectively.
Then $\downarrow[C](\pi) = \downarrow[C](\pi_s)$ for the criterion $C = \{([loop{:}2], \{n\})\}$.
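A small sketch of the projection function of Definition 2, written in Python over traces represented as lists of (node, store) pairs (a representation we choose here purely for illustration):

```python
def project(trace, criterion):
    """Projection of Definition 2: `trace` is a list of states
    ((label, index), store) and `criterion` maps CFG nodes to the set of
    variables of interest at that node (Definition 3).  States at nodes
    outside the criterion's domain are dropped; the stores of the remaining
    states are restricted to the variables of interest."""
    projected = []
    for node, store in trace:
        if node in criterion:
            projected.append((node, {x: v for x, v in store.items()
                                     if x in criterion[node]}))
    return projected

# e.g. project(pi, {("loop", 2): {"n"}}) keeps only the [loop:2] states,
# each with its store restricted to the single variable n.
```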
3.2 Computing slices
Given a program p and slicing criterion C, Definition 4 admits
many programs ps as slices of p (in fact, p itself is a
(trivial) slice of p). Weiser notes that the problem of finding
a statement minimal slice of p is incomputable [32]. Below
we give a minor adaptation of Weiser's algorithm for computing
conservative slices, i.e., slices that may contain more
statements than necessary. 2
3.2.1 Initial approximation of a slice
Computing a slice involves (among other things) identifying
assignments that can affect the values of variables given in
the slicing criterion. To do this, one computes information
similar to reaching definitions. This requires keeping track
of the variables referenced and the variables defined at each
node in the CFG.
Definition 5 (definitions and references)
ffl Let def(n) be the set of variables defined (i.e., assigned
to) at node n (always a singleton or empty set).
ffl Let ref(n) be the set of variables referenced at node n.
Figure 5 shows the def and ref sets for the power program of Figure 1.
Next, for each node in the CFG we compute a set of
relevant variables. Relevant variables are those variables
whose values must be known so as to compute the values of
the variables in the slicing criterion.
Definition 6 (initially relevant variables) Let
$C = \{(n_1, V_1), \ldots, (n_k, V_k)\}$ be a slicing criterion. Then
$R^0_C(n)$ is the set of all variables v such that either:
1. $n \in \mathrm{domain}(C)$ and $v \in C(n)$, or
2. n is an immediate predecessor of a node m such that
(a) $v \in \mathrm{ref}(n)$ and $\mathrm{def}(n) \cap R^0_C(m) \neq \emptyset$, or
(b) $v \notin \mathrm{def}(n)$ and $v \in R^0_C(m)$.
Intuitively, a variable v is relevant at node n if (1) we are at
the line of the slicing criterion and we are slicing on v, or (2)
n immediately precedes a node m such that (a) v is used to
define a variable x that is relevant at m (i.e., the value of x
depends on v), or (b) v is relevant at m and it is not "killed"
by the definition at line n. Figure 5 presents the initial
sets of relevant variables for the power program of Figure 1
with slicing criterion $C = \{([loop{:}2], \{n\})\}$. Variable
n is relevant along all paths leading into node [loop:2]. In
the end block, n is a dead variable and thus it is no longer
relevant.
The classical definition of slicing does not require nodes
mentioned in the slice criterion to occur in the computed
slice. To force these nodes to occur, we define a set of obligatory
nodes - nodes that must occur in the slice even if they
fail to define variables that are eventually deemed relevant.
Definition 7 (obligatory nodes) The set $O_C$ of obligatory
nodes is defined as $O_C = \mathrm{domain}(C)$.
² The algorithm we give actually is based on Tip's corrected version
of Weiser's algorithm [31].
Figure 5: Results of the slicing algorithm (def, ref, relevant-variable, slice and branch sets) for the power program and slicing criterion C.
The initial slice set $S^0_C$ is the set of nodes that define variables
that are relevant at a successor.
Definition 8 (initial slice set) The initial slice set $S^0_C$ is
defined as
$S^0_C = \{\, n \mid \mathrm{def}(n) \cap R^0_C(m) \neq \emptyset,\ m\ \text{an immediate successor of}\ n \,\}$.
Figure 5 presents the initial slice set $S^0_C$ for the power program
of Figure 1. Node [loop:2] is the only node in $S^0_C$ since
it is the only node that defines a variable that is relevant at
a successor.
Note that $S^0_C$ does not include any conditionals, since conditionals
make no definitions. How do we tell which conditionals
should be added? Intuitively, a conditional at node n
should be added if some $m \in S^0_C \cup O_C$ is control-dependent
on n. This set of conditionals $B^0_C$ is called the branch set.
Definition 9 (branch set) The initial branch set $B^0_C$ is
defined as
$B^0_C = \{\, b \mid b \in \mathrm{cd}(m)\ \text{for some}\ m \in S^0_C \cup O_C \,\}$.
Figure 5 presents the control-dependence information and
the initial branch set $B^0_C$ for the power program of Figure 1.
As explained in Section 2.1, [loop:1], [loop:2], and [loop:3]
are control-dependent on [test:1]. Since [loop:2] is in $S^0_C \cup O_C$,
control dependency dictates that [test:1] be included
in $B^0_C$.
3.3 Iterative construction
Now we have to keep iterating this process. That is, we add
the conditionals that influence nodes already in the slice.
Then, we must add to the slice nodes that are needed to
compute expressions in the tests of conditionals, and so on
until a fixed point is reached.
• relevant variables:
$R^{i+1}_C(n) = R^i_C(n) \cup \bigcup_{b \in B^i_C} R^0_{BC(b)}(n)$,
where the branch criterion $BC(b) = \{(b, \mathrm{ref}(b))\}$. That
is, the relevant variables at node n are those that were
relevant in the previous iteration, plus those that are
needed to decide the conditionals that control definitions
in the previous slice set. Finding such nodes for
a branch b is equivalent to slicing the program with
the criterion $\{(b, \mathrm{ref}(b))\}$.
• slice set:
$S^{i+1}_C = B^i_C \cup \{\, n \mid \mathrm{def}(n) \cap R^{i+1}_C(m) \neq \emptyset,\ m\ \text{an immediate successor of}\ n \,\}$.
That is, the slice set contains all the conditionals that
controlled nodes in the previous slice set, and all nodes
that define relevant variables.
• branch set:
$B^{i+1}_C = \{\, b \mid b \in \mathrm{cd}(m)\ \text{for some}\ m \in S^{i+1}_C \cup O_C \,\}$.
That is, the conditionals required are those that control
nodes in the current slice set or obligatory nodes.
Figure 5 presents the sets $R^1_C$ and $S^1_C$, which result from
the second iteration of the algorithm. On the next iteration,
a fixed point is reached since n is the only variable required
to compute the conditional test at [test:1] and it is already
relevant at [test:1].
In the iterations, the sizes of $R^i_C(n)$ for all nodes n and of
$S^i_C$ are increasing, and since $R^i_C(n)$ is bounded above by the
number of variables in the program and $S^i_C$ is bounded above
by the number of nodes in the CFG, the iteration eventually
reaches fixed points $R_C(n)$ and $S_C$.
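The whole iteration can be summarized by the following Python sketch (our own naming and data representation, and a deliberately unoptimized fixed-point loop):

```python
def compute_slice_sets(nodes, succ, ref, defs, cd, criterion):
    """Weiser-style iteration of Sections 3.2-3.3.  `criterion` maps nodes to
    variable sets; `cd[n]` is the set of nodes on which n is control-dependent.
    Returns the final relevant-variable map R, slice set S and branch set B."""

    def relevant(crit):
        # Relevant variables for a criterion (Definition 6), by backward propagation.
        R = {n: set(crit.get(n, ())) for n in nodes}
        changed = True
        while changed:
            changed = False
            for n in nodes:
                new = set(crit.get(n, ()))
                for m in succ[n]:
                    if defs[n] & R[m]:
                        new |= ref[n]          # (2a) v used to define a relevant variable
                    new |= R[m] - defs[n]      # (2b) relevant at m and not killed at n
                if new - R[n]:
                    R[n] |= new
                    changed = True
        return R

    obligatory = set(criterion)                 # Definition 7
    R = relevant(criterion)
    S = {n for n in nodes if any(defs[n] & R[m] for m in succ[n])}
    B = {b for m in S | obligatory for b in cd[m]}   # Definition 9
    while True:
        for b in B:                             # add variables needed by the branches
            Rb = relevant({b: ref[b]})
            for n in nodes:
                R[n] |= Rb[n]
        S_new = B | {n for n in nodes if any(defs[n] & R[m] for m in succ[n])}
        B_new = {b for m in S_new | obligatory for b in cd[m]}
        if S_new == S and B_new == B:
            return R, S, B
        S, B = S_new, B_new
```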
3.4 Constructing a residual program
Given $R_C$ and $S_C$, the following definition informally summarizes
how a residual program is constructed. The intuition
is that if an assignment is in S C , then it must appear in
the residual program. If the assignment is not in S C but in
OC , then the assignment must be to an irrelevant variable.
Since the node must appear in the residual program, the
assignment is replaced with a skip. All goto and return
jumps must appear in the residual program. However, if an
if is not in S C , then no relevant assignment or obligatory
node is control dependent upon it. Therefore, it doesn't
matter if we take the true branch or the false branch. In
this case, we can simply jump to the point where the two
branches merge.
Definition 11 (residual program construction) Given
a program p and a slicing criterion C, let $R_C$, $S_C$, and
$O_C$ be the sets constructed by the process above. A residual
program ps is constructed as follows.
• For each parameter x in p, x is a parameter in ps only
if $x \in R_C([l{:}1])$, where l is the label of the initial block
of p.
• The label of the initial block of ps is the label of the
initial block of p.
• For each block b in p, form a residual block bs as follows:
- For each assignment line a (with identifier [l:i]),
if $[l{:}i] \in S_C$ then assignment a appears in the
residual program with identifier [l:i]; otherwise if
$[l{:}i] \in O_C$ then the assignment becomes a skip
with identifier [l:i] in the residual program; otherwise
the node is left out of the residual program.
- For the jump j in b, if j is a goto or a return,
then j is the jump in bs; otherwise we must have
$j = \texttt{if}\ e\ \texttt{then}\ l_1\ \texttt{else}\ l_2$ with identifier [l:i].
Now if $[l{:}i] \in S_C$ then j is the jump in bs; otherwise
the jump in bs is goto l'; with identifier [l:i],
where l' is the label of the nearest post-dominating
block for both $l_1$ and $l_2$.
Finally, post-processing removes all blocks that are not targets
of jumps in ps (these have become unreachable).
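For illustration, residual-program construction along the lines of Definition 11 might look as follows; the program representation and the helper npostdom are assumptions of this sketch, not part of the paper's formalism.

```python
def build_residual(program, S, O, npostdom):
    """Sketch of residual-program construction (Definition 11).  `program`
    maps block labels to lists of statements tagged ('assign', x, e),
    ('goto', l), ('return',) or ('if', e, l1, l2); `npostdom(l1, l2)` returns
    the label of the nearest block post-dominating both branch targets."""
    residual = {}
    for label, stmts in program.items():
        new_block = []
        for i, stmt in enumerate(stmts, start=1):
            node = (label, i)
            if stmt[0] == 'assign':
                if node in S:
                    new_block.append(stmt)          # relevant: kept as is
                elif node in O:
                    new_block.append(('skip',))     # obligatory but irrelevant
                # otherwise the assignment is dropped entirely
            elif stmt[0] in ('goto', 'return'):
                new_block.append(stmt)              # gotos and returns always kept
            else:                                   # ('if', e, l1, l2)
                if node in S:
                    new_block.append(stmt)
                else:
                    _, _, l1, l2 = stmt
                    new_block.append(('goto', npostdom(l1, l2)))
        residual[label] = new_block
    return residual
```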
4 Finite-state Verification
As noted in the introduction, a variety of finite-state verification
techniques have been used to verify properties of soft-
ware. To make our presentation more concrete, we consider
a single finite-state verification technique: model checking of
specifications written in linear temporal logic (LTL). LTL
model checking has been used to reason about properties
of a wide range of real software systems; we have used it,
for example, to validate properties of a programming frame-work
that provides parallel scheduling in a variety of systems
(e.g., parallel implementations of finite-element, computational
fluid dynamics, and program flow analysis problems)
[16, 15].
4.1 Linear temporal logic
Linear temporal logic [27] is a rich formalism for specifying
state and action sequencing properties of systems. An LTL
specification describes the intended behavior of a system on
all possible executions.
The syntax of LTL includes primitive propositions P,
the usual propositional connectives, and three temporal
operators:
$\phi ::= P \mid \neg\phi \mid \phi \wedge \phi \mid \phi \vee \phi \mid \phi \Rightarrow \phi$   (propositional connectives)
$\quad\ \mid\ \Box\phi \mid \Diamond\phi \mid \phi\,\mathcal{U}\,\phi$   (temporal operators)
When specifying properties of software systems, one typically
uses LTL formulas to reason about execution of particular
program points (e.g., entering or exiting a procedure)
as well as values of particular program variables. To capture
the essence of this for FCL, we use the following primitive
propositions.
• [l:i] — Intuitively, [l:i] holds when execution reaches
node i in the block labeled l.
• [x rop c] — Intuitively, [x rop c] holds when the value of variable x
at the current node is related to [[c]] by the relational
operator rop (e.g., [x=0] where rop is =).
Formally, the semantics of a primitive proposition is defined
with respect to states.
$([l{:}i], \sigma) \models [l'{:}i'] = \begin{cases}\text{true} & \text{if } l = l' \text{ and } i = i'\\ \text{false} & \text{otherwise}\end{cases}$
$([l{:}i], \sigma) \models [x\ \mathrm{rop}\ c] = \begin{cases}\text{true} & \text{if } \sigma(x)\ \mathrm{rop}\ [[c]]\\ \text{false} & \text{otherwise}\end{cases}$
The semantics of a formula is defined with respect to a
trace. The temporal operator $\Box$ requires that its argument
be true from the current state onward, the $\Diamond$ operator requires
that its argument become true at some point in the
future, and the $\mathcal{U}$ operator requires that its first argument
be true up to the point where the second argument becomes
true. Formally [24], let $\pi = s_0 s_1 \ldots$; then
$\pi \models \Box\phi$ iff $\pi_i \models \phi$ for all $i$;
$\pi \models \Diamond\phi$ iff there exists an $i$ such that $\pi_i \models \phi$;
$\pi \models \phi_1\,\mathcal{U}\,\phi_2$ iff there exists an $i$ such that $\pi_i \models \phi_2$ and $\pi_j \models \phi_1$ for all $j < i$.
Here are some simple specifications using the logic:
• $\Diamond[l5{:}1]$ — eventually block l5 will be executed
• $\Box([l2{:}1] \Rightarrow \Diamond[l3{:}1])$ — whenever block l2 is executed, block l3 is
always subsequently executed
• $\Box([l5{:}1] \Rightarrow \neg[x = 0])$ — whenever block l5 is executed x is non-zero
• $\Box[x < 10]$ — x is always less than 10
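To make the semantics concrete, here is a tiny Python evaluator for this fragment over finite traces; the formula representation and the restriction to finite traces are assumptions of this sketch (a model checker of course reasons about all executions rather than single traces).

```python
def holds(formula, trace, i=0):
    """Evaluate an LTL formula on the suffix of a finite trace starting at i.
    Formulas are tuples such as ('box', f), ('dia', f), ('until', f, g),
    ('not', f), ('and', f, g), or an atomic predicate given as a function
    state -> bool.  States are ((label, index), store) pairs."""
    if callable(formula):                       # primitive proposition
        return formula(trace[i])
    op = formula[0]
    if op == 'not':
        return not holds(formula[1], trace, i)
    if op == 'and':
        return holds(formula[1], trace, i) and holds(formula[2], trace, i)
    if op == 'box':
        return all(holds(formula[1], trace, j) for j in range(i, len(trace)))
    if op == 'dia':
        return any(holds(formula[1], trace, j) for j in range(i, len(trace)))
    if op == 'until':
        return any(holds(formula[2], trace, j) and
                   all(holds(formula[1], trace, k) for k in range(i, j))
                   for j in range(i, len(trace)))
    raise ValueError(op)

# e.g.  at_l5  = lambda s: s[0] == ('l5', 1)
#       x_is_0 = lambda s: s[1].get('x') == 0
#       holds(('box', ('not', ('and', at_l5, x_is_0))), trace)
```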
4.2 Software model construction
To apply finite-state verification to a software system, one
must construct a finite-state transition system that safely
abstracts the software semantics. The transition system
should be small enough to make automatic checking tractable,
yet it should be large enough to capture all information relevant
to the property being checked. Relevant information
can by extracted by an appropriate abstract interpretation
(AI) [9].
In our approach [12, 20], the user declares for each program
variable an abstract domain to be used for interpreting
operations on the variable. Using a process that combines
abstract interpretation and partial evaluation (which we call
abstraction-based program specialization (ABPS)), a residual
program is created by propagating abstract values and specializing
each program point with respect to these abstract
values [20, 21]. In the residual program, concrete constants
are replaced with abstract constants. The residual program
is a safely approximating finite-state program with a fixed
number of variables defined over finite abstract domains.
This program can then be submitted to a toolset [8, 14]
that generates input for existing model checking tools, such
as SMV [28] and SPIN [23]. This approach has been applied
to verify correctness properties of several software systems
written in Ada [12, 13].
In the steps described above, the user's main task is to
pick appropriate AI's, i.e., AI's that extract relevant information
but throw away irrelevant information. The general
idea behind our methodology for choosing AI's is to start simple
(use AI's that throw away all information about dataflow)
and incrementally refine the AI's based on information
from the specification to be verified and from the program itself.
1. Start with the point AI: Initially all variables are
modeled with the point AI (i.e., a domain with a single
abstract value, where every operation returns that value). In effect, this
throws away all information about a variable's value.
2. Select semantic features in the specification:
The specification formula to be checked includes, in
the form of propositions, different semantic features of
the program (e.g., valuations of specific program variables).
These features must be modeled precisely by
an AI to have any hope of checking the property. For
example, if the formula includes a proposition [x=0],
then instead of using the point AI for x, one must use,
e.g., an AI with the domain {zero, pos, top} that we refer
to as a zero-pos AI (a sketch of such a domain is given after this list).
3. Select controlling variables: In addition to variables
mentioned explicitly in the specification, we must
also use refined AI's for variables on which specification
variables are control dependent. The predicates
in the controlling conditionals suggest semantic features
that should be modeled by an AI. For example,
if a specification variable x is control-dependent upon a
conditional should use an even/odd
AI for y.
4. Select variables with broadest impact: When
confronted with multiple controlling variables to model,
select the one that appears most often in a conditional.
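As an illustration of the kind of abstract domain referred to in step 2, the
following sketch implements a zero-pos AI for non-negative integer variables.
The transfer functions shown are our own guesses at reasonable definitions,
not the definitions used by the toolset.

ZERO, POS, TOP = "zero", "pos", "top"

def abs_const(c):                      # abstraction of a non-negative literal
    return ZERO if c == 0 else POS

def abs_add(a, b):
    if a == ZERO: return b             # 0 + b = b
    if b == ZERO: return a             # a + 0 = a
    if a == POS and b == POS: return POS
    return TOP

def abs_sub(a, b):
    return a if b == ZERO else TOP     # subtracting a positive may reach zero

def abs_eq_zero(a):                    # possible outcomes of the test "x = 0"
    return {ZERO: {True}, POS: {False}, TOP: {True, False}}[a]

With this domain, tests such as ActiveReaders=0 and ActiveReaders>0 in the
example below are decided whenever the abstract value is zero or pos; only the
top value forces both branches to be retained.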
To illustrate the methodology, Figure 6 presents an FCL
rendering of an Ada process that controls readers and writers
of a common resource [8]. In the Ada system, this server
process runs concurrently with other client processes, and requests
such as start-read, stop-read are entry points (rendezvous
points) in the control process. In the FCL code of
Figure
6, requests are given in the program parameter reqs
- a list of values in the subrange [1::4]. Figure 7 presents
the block-level control-flow graph for the FCL program.
Assume we are interested in reasoning about the invariance
property 2([start-read.1] ⇒ [WriterPresent=0]), i.e., a writer is never present when a read starts.
The key features that are mentioned explicitly in this specification
are values of variable WriterPresent and execution
of the start-read block. The point AI does not provide
enough precision to determine the states where WriterPresent
has value zero. An effective AI for WriterPresent must
be able to distinguish zero values from positive values; we
choose the zero-pos AI.
At this point we could generate an abstracted model
and check the property or consider additional refinements
of the model; we choose the latter for illustrating our exam-
ple. We now determine the variables upon which the node
[start-read.1] and nodes with assignments to WriterPresent
are control dependent. In our example, there are three such
variables: WriterPresent, ActiveReaders and req. We are
already modeling WriterPresent and req is being used to
model external choice of interactions with the control program
via input. We could choose to bind ActiveReaders to
a more refined AI than point. Given that the conditional expressions
involving that variable are ActiveReaders=0 and
ActiveReaders?0, we might also choose a zero-pos AI. Thus,
only ErrorFlag is abstracted using the point AI.
At this point, we would generate an abstracted model
and check the property. If a true result is obtained then
we are sure that the property holds on the program, even
though the finite-state system only models two variables
with any precision. If a false result is obtained then we
must examine the counter-example produced by the model
checker. It may reveal a true defect in the program or it may
reveal an infeasible path through the model. In the latter
case, we identify the variables in the conditionals along the
counter-example's path as candidates for binding to more
precise AIs.
This methodology is essentially a heuristic search to find
the variables in the program that can influence the execution
behavior of the program relative to the property's proposi-
tions. When a variable is determined to be potentially influential,
its abstraction is refined to strengthen the resulting
system model. In the absence of such a determination, the
variable is modeled with a point abstraction which essentially
ignores any effect it may have; although in the future
it may be determined to have an influence in which case its
abstraction will be refined.
5 Reducing Models Using Slicing
As illustrated above, picking appropriate abstractions is non-trivial
and could benefit greatly from some form of automated
assistance. The key aspects of the methodology for
picking abstractions included
1. picking out an initial set of relevant variables V and
relevant statements (i.e., CFG nodes N) mentioned in
the LTL specification,
init:
ActiveReaders := 0; [2] raise-error:
WriterPresent := 0; [3] ErrorFlag :=
goto check-reqs; [5]
check-reqs: end:
if (null? reqs) [1] return; [1]
then end
else next-req;
next-req:
reqs := (cdr reqs); [2]
goto attempt-start-read; [3]
attempt-start-read: start-read:
if (req=1 and WriterPresent=0) [1] ActiveReaders := ActiveReaders+1; [1]
then start-read goto check-reqs; [2]
else attempt-stop-read;
attempt-stop-read: stop-read:
if (req=2 and ActiveReaders?0) [1] ActiveReaders := ActiveReaders-1; [1]
then stop-read if (WriterPresent=1) [2]
else attempt-start-write; then raise-error
else attempt-stop-write;
attempt-start-write: start-write:
if (req=3 and ActiveReaders=0 [1] WriterPresent :=
and WriterPresent=0) goto check-reqs; [2]
then start-write
else attempt-stop-write;
attempt-stop-write: stop-write:
if (req=4 and WriterPresent=1) [1] WriterPresent := 0; [1]
then stop-write else check-reqs; if (ActiveReaders?0) [2]
then raise-error
else check-reqs;
Figure 6: Read-write control example in FCL
2. identifying appropriate AI's for variables in V ,
3. using control dependence information, picking out an
additional set(s) of variables W that indirectly influence
V and N , and
4. identifying appropriate AI's for variables in W .
Intuitively, all variables not in V ∪ W are irrelevant and can
be abstracted with the point AI.
Clearly, item (1) can be automated by a simple pass over
the LTL specification. Moreover, the information in item
(3) is exactly the information that would be produced by
slicing the program p based on a criterion generated from
information in (1). Thus, pre-processing a program to be
verified using slicing provides automated support for our
methodology. Specifically, slicing can (i) identify relevant
variables (which require AI's other than the point AI), (ii)
eliminate irrelevant program variables from consideration in
the abstraction selection process (they will not be present in
the residual program ps yielded by slicing), and (iii) reduce
the size of the software and thus the size of the transition
system to be analyzed. Other forms of support are needed
for items (2) and (4) above.
For this approach, given a program p and a specification
φ, we desire a criterion extraction function extract that extracts
an appropriate slicing criterion C from φ. Slicing p
with respect to C should yield a smaller residual program
ps that (a) preserves and reflects the satisfaction of φ, and
(b) has as little irrelevant information as possible.
The following requirement expresses condition (a) above.
Requirement 1 (LTL-preserving extract) Given a program
p and a specification φ, let C = extract(φ) and let ps be the result
of slicing p with respect to C. Then for any p execution
trace π starting in initial state s1,

  π |= φ  iff  πs |= φ

where πs is the execution trace of ps running with initial state s1.

Figure 7: Read-write control flowchart (block-level control-flow graph over the
blocks init, check-reqs, next-req, attempt-start-read, start-read, attempt-stop-read,
stop-read, attempt-start-write, start-write, attempt-stop-write, stop-write,
raise-error and end)
5.1 Proposition-based slicing criterion
We now consider some technical points that will guide us
in defining an appropriate extraction function. As stated
above, we want to preserve the satisfaction of the formula φ
yet remove as much irrelevant information from the original
trace π as possible. We have already discussed the situation
where certain variables' values can be eliminated from the
states in a trace π because they do not influence the satisfaction
of the formula φ under π. What is important in this is
that we have used purely syntactic information (the set of
variables mentioned in φ) to reduce the state space.
Let's explain this reduction in more general terms. Consider
a trace π = s1 ... s_{i-1} s_i s_{i+1} ....
Assume that the state transition into s_i does not influence
the satisfaction of φ. Formally, π |= φ iff πs |= φ, where
πs is the compressed trace obtained by removing s_i (the transition into s_i is
compressed).
Another view of the change from π to πs is that the action α
that causes the change from s_{i-1} to s_i and the action α' that
causes the change from s_i to s_{i+1} have been combined into
an action α'' that moves from s_{i-1} to s_{i+1}. Intuitively, the
formula φ "doesn't need to know" about the intermediate
state s_i. For example, the irrelevant transition might be an
assignment to an irrelevant variable, or a transition between
nodes [l.i] and [l.(i+1)] not mentioned in φ.
What is the technical justification for identifying compressible
transitions using a purely syntactic examination of
only the propositions in a formula φ? The answer lies in
the fact that, for the temporal operators we are treating,
state transitions that don't change the satisfaction of the
primitive propositions of the formula φ do not influence the
satisfaction of φ itself. This means that we can justify many
trace compressions by reasoning about only single transitions
and satisfaction of primitive propositions. We will see
below that this property does not hold when one includes
other temporal operators such as the next state operator ○.
We now formalize these notions. The following definition
gives a notion of proposition invariance with respect to a
particular transition.
Definition 12 (P-stuttering transition) Let P be a primitive
proposition, and let π = s1 s2 ... be a trace.
The transition s_i → s_{i+1} is said to be P-stuttering when
s_i |= P iff s_{i+1} |= P.
If P is a set of primitive propositions and the transition is P-stuttering for each
proposition P ∈ P, the transition is said to be P-stuttering.
The following lemma states that the satisfaction of a formula
containing primitive propositions P is invariant with
respect to expansion and compression of P-stuttering steps.
Lemma 1 Let φ be a formula and let P be the set of primitive
propositions appearing in φ. For all traces π and πs, where πs is obtained from π
by compressing or expanding P-stuttering transitions, π |= φ iff πs |= φ.
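A small sketch of the compression the lemma licenses is given below; states are
again dictionaries of proposition valuations, and the code simply drops every
state reached by a P-stuttering transition (this representation is ours, for
illustration only).

def compress(trace, P):
    if not trace:
        return trace
    out = [trace[0]]
    for s in trace[1:]:
        prev = out[-1]
        if any(prev.get(p, False) != s.get(p, False) for p in P):
            out.append(s)   # the transition changes some proposition in P: keep it
        # otherwise the transition is P-stuttering and s is compressed away
    return out

# With P = {"[l.3]"}, states that only change other propositions collapse,
# while every change in the truth of [l.3] is preserved.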
This lemma fails when one includes the next state operator
○ [23], with the semantics π |= ○φ iff π^2 |= φ.
For example, consider a trace containing a transition that is P-stuttering for the
proposition P = [l.3] (P is false in both states of the transition); compressing
that transition changes which state comes next, and so the satisfaction of a
formula built with ○ need not be preserved (i.e., we may have π |= φ but πs ⊭ φ).
Intuitively, the next state operator allows one to count
states, and thus any attempt to optimize by compressing
transitions in this setting is problematic. For this reason,
some systems like SPIN [23] do not guarantee that the semantics
of ○ will be preserved during, e.g., partial-order reduction
optimizations.
Given a formula φ where P is the set of propositions
in φ, we now want to define an extraction function that
guarantees that transitions that are not P-stuttering are
preserved in residual program traces.
• For a variable proposition [x rop c], observe that
only definitions of the variable x may cause the variable
to change value (i.e., cause a transition to be non-P-stuttering).
This suggests that for each proposition
[x rop c] in a given specification φ, each assignment to
x should be included in the residual program. Moreover,
x should be considered relevant.
• For a node proposition [l.i], entering or leaving CFG
node [l.i] can cause the proposition to change value
(i.e., cause the transition to be non-P-stuttering). One
might imagine that we only need the slice to include
the statement [l.i] for each such proposition in the formula.
However, it is possible that compression might
remove all intermediate nodes between two occurrences
of the node [l.i]. This, as well as similar situations,
does not preserve the state changes associated with entering
and exiting the node. Therefore, in addition
to the node [l.i], we must ensure that all immediate
successors and all immediate predecessors of [l.i] are
included in the slice.
Based on these arguments, we define an extraction function
as follows.
Definition (Proposition-based extraction)
Given a program p and specification φ, let V be the set of
all program variables occurring in φ, and let N
be the set of all nodes that contain assignments to variables
in V, unioned with the set NP of all nodes appearing in node
propositions of φ together with the successors and predecessors of
each node in NP. Then extract(φ) = {(n, V) | n ∈ N}.
Theorem 1 The extraction function extract satisfies Requirement 1.
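A rough sketch of extract is shown below. The CFG interface (an object with
assigns, preds and succs maps) and the proposition syntax assumed by the regular
expressions are our own simplifications rather than the actual data structures of
the tool.

import re

def extract(formula, cfg):
    # variables mentioned in variable propositions such as [x=0] or [y<10]
    var_props  = set(re.findall(r"\[([A-Za-z_]\w*)\s*(?:=|<|>|<=|>=)", formula))
    # nodes mentioned in node propositions such as [start-read.1]
    node_props = set(re.findall(r"\[([\w-]+\.\d+)\]", formula))

    nodes = {n for n, v in cfg.assigns.items() if v in var_props}
    for n in node_props:
        nodes.add(n)
        nodes |= set(cfg.preds.get(n, ())) | set(cfg.succs.get(n, ()))
    return {(n, frozenset(var_props)) for n in nodes}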
As an example, applying extract to the invariance property above yields a criterion C1 of the following form:
  ([attempt-start-read.1], {WriterPresent})
  ([start-read.1], {WriterPresent})
  ([start-read.2], {WriterPresent})
  ([init.3], {WriterPresent})
  ([start-write.1], {WriterPresent})
  ([stop-write.1], {WriterPresent})
Here, the first three lines of the criterion are the [start-read.1]
node mentioned in the formula, along with its predecessor
and successor. The last three lines are the nodes where
WriterPresent is assigned a value.
Figure
8 presents the resulting slice. The slice is identical
to the original program except that the variable ErrorFlag
and the block raise-error disappear from the program.
Thus, slicing automatically detects what our abstracting
methodology yielded in the previous section: for the given
specification, only ErrorFlag is irrelevant. The previous
conditional jumps in stop-read and stop-write to raise-error
are replaced with unconditional jumps to check-req. In this
case, the slicing algorithm has detected that the nodes in the
raise-error block are irrelevant, and the conditional jumps
are replaced with unconditional jumps to the node where the
true and false paths leading out of the conditionals meet.
As a second example, consider the specification
3[check-reqs.1] ([check-reqs.1] is eventually executed). In
this case extract(φ) yields a criterion C2 whose entries are the [check-reqs.1] node
mentioned in the formula, along with its predecessor and
successors. Since there are no variable propositions in the
specification, no variables are specified as relevant in the
criterion.
Figure
9 presents the resulting slice. It is obvious that
the residual program is sufficient for verifying the reachability
of [check-req.1] as given in the specification. All
variables are eliminated except reqs which appears in the
test at [check-reqs.1]. Even though it is not strictly necessary
for verifying the property, this conditional is retained
by the slicing algorithm since it is control-dependent upon
itself. In addition, the slicing criterion dictates that the node
[next-req.1] should be in the slice. However, since the assignment
at this node does not assign to a relevant variable,
the assignment can be replaced with skip. Finally, the jump
to check-reqs at node [next-req.3] in the residual program
is the result of chaining through a series of trivial goto's
during post-processing.
6 Future Work
The previous criteria have considered individual proposi-
tions. Many property specifications, however, describe states
using multiple propositions or state relationships between
states that are characterized by different propositions. In
this section, we give some informal suggestions about how
the structure of these complex specifications may be exploited
to produce refined slicing criterion.
init:
ActiveReaders := 0; [2]
WriterPresent := 0; [3]
goto check-reqs; [5]
check-reqs: end:
if (null? reqs) [1] return; [1]
then end
else next-req;
next-req:
reqs := (cdr reqs); [2]
goto attempt-start-read; [3]
attempt-start-read: start-read:
if (req=1 and WriterPresent=0) [1] ActiveReaders := ActiveReaders+1; [1]
then start-read goto check-reqs; [2]
else attempt-stop-read;
attempt-stop-read: stop-read:
if (req=2 and ActiveReaders?0) [1] ActiveReaders := ActiveReaders-1; [1]
then stop-read goto check-reqs; [2]
else attempt-start-write;
attempt-start-write: start-write:
if (req=3 and ActiveReaders=0 [1] WriterPresent :=
and WriterPresent=0) goto check-reqs; [2]
then start-write
else attempt-stop-write;
attempt-stop-write: stop-write:
if (req=4 and WriterPresent=1) [1] WriterPresent := 0; [1]
then stop-write else check-reqs; goto check-reqs; [2]
Figure
8: Slice of read-write control program with respect to C1
init:
goto check-reqs; [5]
check-reqs: end:
if (null? reqs) [1] return; [1]
then end
else next-req;
next-req:
reqs := (cdr reqs); [2]
goto check-reqs; [3]
Figure
9: Slice of read-write control program with respect to C2
Figure 10: Slicing abstracted programs (statements not included in the slice are marked)
Consider a simple conjunction of propositions appearing
in an eventuality specification, e.g., 3([l.1] ∧ [x=0]).
Rather than slicing on the propositions separately, we can
use the semantics of ∧ to refine the slicing criterion. For this
property, we are not interested in all assignments to x but
only those that can influence the value at [l.1]. Thus, our
slicing criterion would be: extract(φ) = {([l.1], x)}. This
approach generalizes in any setting where the program point
proposition occurs positively with any number of variable
propositions as conjuncts.
Thus far, we have considered slicing as a prelude to
ABPS. Application of ABPS can, however, reveal semantic
information about variable values in statement syntax,
thereby making it available for use in slicing.
Figure 10 illustrates a sequence of assignments to x, on
the left, and the abstracted sequence of assignments, in the
middle, resulting from binding the classic signs AI [1]
to x during ABPS. In such a situation we can determine
transitions in the values of propositions related to x
syntactically.
Consider a response property [15] of the form 2(φ1 ⇒ 3φ2).
Our proposition slicing criterion would be based solely
on φ1 and φ2. As with the conjunctions above, we observe
two facts about the structure of this formula that can be
exploited.
1. Within the 2 is an implication, thus we need only
reason about statements that cause the value of φ1
to become true (since false values will guarantee that
the entire formula is true).
2. Since the right-hand side of the ⇒ is a 3, we need
only reason about the first statement, in a sequence of
statements, that causes φ2 to become true.
The right column of Figure 10 illustrates the effect of
applying observation 1 to eliminate, from the sliced program,
assignments that do not cause a positive transition in φ1.
Note that if a proposition involving x appears in
φ2 then the slicing criterion may be expanded to include
additional statements.
In addition, a program point where φ1 holds which is
post-dominated by a point at which φ2 holds need not be
considered for the purpose of checking response, since the
existence of this relationship implies that the response holds
for this occurrence of φ1.
Observation 2 can be exploited using post-domination
information. A program point where φ2 holds which is post-dominated
by another point where φ2 holds does not need to
be included in the slice. This is because only one program
point at which φ2 holds is required on any path for the
3 formula to become true. Thus, any post-dominated φ2
nodes may be eliminated.
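Computing the required post-domination information is standard; a compact sketch
(a plain iterative dataflow computation, not tied to any particular tool mentioned
here) is:

def post_dominators(succs, exit_node):
    nodes = set(succs) | {exit_node}
    for ss in succs.values():
        nodes |= set(ss)
    pdom = {n: set(nodes) for n in nodes}
    pdom[exit_node] = {exit_node}
    changed = True
    while changed:
        changed = False
        for n in nodes - {exit_node}:
            outs = [pdom[s] for s in succs.get(n, [])]
            new = {n} | (set.intersection(*outs) if outs else set())
            if new != pdom[n]:
                pdom[n], changed = new, True
    return pdom

# A node n where phi2 holds may be dropped from the criterion when some other
# phi2 node is contained in pdom[n], i.e. post-dominates n.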
The refined slicing criteria defined above require the
use of auxiliary information, such as post-domination information,
that needs to be available prior to slicing. While
the cost of gathering this information and processing it to
compute slicing criteria may be non-trivial, it will be dominated
by the very high cost of performing model checking
on the sliced system. In most cases, the cost of reducing the
size of the system presented to the model checker will
be more than offset by reduced model checking time.
We have discussed two refined criteria based on structural
properties of the formula being checked. Similar refinements
can be defined for a number of other classes of
specifications including precedence and chain properties [15].
These refinements use essentially the same information as
described above for response properties; precedence properties
require dominator rather than post-dominator information
We note that the refined response criteria are applicable
only when the property to be checked is of a very specific
form; even slight variations in the structure of the formula
may render the sliced program unsafe. A recent survey
of property specification for finite-state verification showed
that response and precedence properties of the form described
above occur quite frequently in practice [16]; 48% of
real-world specifications fell into these two categories.
For this reason, we believe that the effort to define a series
of special cases for extracting criteria based on formula
structure is justified despite its apparent narrowness.
7 Related Work
Program slicing was developed as a technique for simplifying
programs for debugging and for identifying parts of
programs that can execute in parallel [32]. Since its development
the concept of slicing has been applied to a wide
variety of problems including: program understanding, de-
bugging, differencing, integration, and testing [31]. In these
applications, it is crucial that the slice preserve the exact
execution semantics of the original program with respect to
the slicing criterion. In our work, we are interested only
preserving the ability to successfully model check properties
that are correct; this weakening allows for the refinement of
slicing criteria based on the property being checked.
Slicing has been generalized to other software artifacts
[30] including: attribute grammars, requirements models
[22] and formal specifications [4]. Cimitile et al. [6] use Z
specifications to define slicing criteria for identifying reusable
code in legacy systems. In their work, they use a combination
of symbolic execution and theorem proving to process
the specifications and derive the slicing criteria. In con-
trast, we identify necessary conditions for sub-formula of
commonly occurring patterns of specifications and use those
conditions to guide safe refinement of our basic proposition
slicing criteria.
Our work touches on the relationship between program
specialization and slicing. We use slicing as a prelude to specialization
and suggest that abstraction-based specialization
may reveal semantic features in the residual program's syntax
that could be used by refined slicing criteria. Reps and
Turnidge [29] have studied this relationship from a different
perspective. They show that while similar, the techniques
are not equivalent; not all slicing transformations can be
achieved with specialization and vice versa.
While slicing can be viewed as a state-space reduction
technique it has a number of important theoretical and practical
differences from other reduction techniques appearing
in the literature. State-space reduction techniques, such as [17], preserve
correctness with respect to a specific class of correctness
properties. In contrast, our approach to slicing based
on criteria extracted from formulae yields compressed traces
that contain the state changes relevant to propositions contained
in the temporal logic formula. Our approach yields
programs that remain both sound and complete with respect
to property checking. This is in sharp contrast to the many
abstraction techniques developed in the literature (e.g.[7])
which sacrifice completeness for tractability. Finally, even
though significant progress has been made on developing
algorithms and data structures to reduce model checking
times, such as OBDDs [2], those techniques should be seen
as a complement to slicing. If slicing removes variables from
the system that do not influence the behavior to be checked
then the model checker will run faster regardless of the particular
implementation techniques it employs.
8 Conclusion
We have presented a variation of program slicing for a simple
imperative language. We have shown how slicing criteria can
be defined that guarantee the preservation of model check
semantics for LTL specifications in the sliced program. We
have implemented a prototype tool that performs this slicing
and experimented with a number of examples. Based on this
work we are scaling up the prototype to handle significantly
more complex features of programs including: structured
data, treatment of procedures, and multi-threaded programs
that communicate through shared data. While these extensions
are non-trivial, they will build on the solid base laid out
in the work reported in this paper.
Acknowledgements
Thanks to James Corbett, Michael Huth, and David Schmidt
for several very illuminating discussions. Thanks also to
Hongjun Zheng for helpful comments on an earlier draft.
--R
Abstract Interpretation of Declarative Languages.
Symbolic model checking: 10 20 states and beyond.
Process control design using spin.
Model checking safety critical software with spin: an application to a railway interlocking system.
reusable functions using specification driven program slicing: A case study.
Model checking and abstraction.
Evaluating deadlock detection methods for concurrent software.
Abstract interpretation: A unified lattice model for static analysis of programs by construction or approximation of fixpoints.
Abstract interpretation of reactive systems.
Using partial evaluation to enable verification of concurrent software.
Model checking generic container implementations.
Translating ada programs for model checking
Property specification patterns for finite-state verification
Patterns in property specifications for finite-state verification
Using partial orders for the efficient verification of deadlock freedom and safety properties.
Compiler generation by partial evaluation.
An introduction to partial evaluation using a simple flowchart language.
Staging static analysis using abstraction-based program specialization
Specializing configurable systems for finite-state verification
Reduction and slicing of hierarchical state machines.
The model checker spin.
Logic in Computer Science: Modelling and Reasoning about Systems.
Partial Evaluation and Automatic Program Generation.
The Temporal Logic of Reactive and Concurrent Systems: Specification.
Symbolic Model Checking.
Program specialization via program slicing.
Beyond traditional program slicing.
A survey of program slicing techniques.
Program slicing.
Supercompilers for Parallel and Vector Computers.
| software verification;program slicing;state-space reduction;linear temporal logic;program dependence graph;model-checking |
369187 | Optimal sequencing by hybridization in rounds. | Sequencing by hybridization (SBH) is a method for reconstructing a sequence over a small finite alphabet from a collection of probes (substrings). Substring queries can be arranged on an array (SBH chip) and then a combinatorial method is used to construct the sequence from its collection of probes. Technological constraints limit the number of substring queries that can be placed on a single SBH chip. We develop an idea of Margaritis and Skiena and propose an algorithm that uses a series of small SBH chips to sequence long strings while the number of probes used matches the information theoretical lower bound up to a constant factor. | Introduction
Consider the following problem. Let Σ be an alphabet with σ letters. Given
a string s drawn uniformly at random from Σ^n and the ability to ask queries
of the type: "Is x a substring of s?", what is the minimum set of such
questions one can ask such that with high probability one can reconstruct s?
The problem is an abstraction of a problem that occurs in the sequencing
of DNA molecules. DNA strands can be seen as sequences drawn from the
four letter alphabet of nucleotides {A, C, G, T}.
Sequencing by hybridization (SBH) (Bains and Smith, 1988; Drmanac
et al., 1989; Lysov et al., 1998) has been proposed as an alternative to the
traditional Gilbert-Sanger method of sequencing by gel electrophoresis; The
surveys (Chetverin and Cramer, 1994; Pevzner and Lipshutz, 1994) give an
overview of both the technological and algorithmic aspects of the method.
The method applies the complementary Watson-Crick base pairing of DNA
molecules. A given single stranded DNA molecule will hybridize with its
complement strand. SBH is based on the use of a chip, fabricated using
photolithographic techniques. The active area of the chip is structured as
a matrix, each region of which is assigned to a specic oligonucleotide, biochemically
attached to the chip surface. A solution of
uorescently tagged
target DNA fragments are exposed to the chip. These fragments hybridize
with their complementary fragment on the chip and the hybridized fragments
can be identied using a
uorescence detector. Each hybridization (or lack
thereof) determines whether the fragment is or is not a substring of the target
string. In our formulation we will assume that the the hybridization chips
can give us an answer to a ternary query: whether a string does not occur,
occurs once or occurs more than once.
The classical sequencing chip design C(m), contains all m single stranded
oligonucleotides of some xed length m. Pevzner's algorithm (Pevzner, 1989)
for reconstruction using classical sequencing chips interprets the results of
a sequencing experiment as a subgraph of the DeBruijn graph, such that
any Eulerian path corresponds to a possible sequence. The reconstruction
therefore is not unique unless the Eulerian path is unique.
Examples can be found that show that in order to uniquely reconstruct all
members of n using a classical sequencing chip, C(m), m needs to be greater
than n(see (Skiena and Sundaram, 1995)). Pevzner et al. (1991) show experimentally
that the classical C(8) chip which contains 65.536 oligonucleotides
su-ces to reconstruct 200 nucleotide sequences in only 94 out of 100 cases.
Dyer et al. (1994) and Arratia et al. (1996) have shown independently that
for C(m) to be eective on random strings of length n, m needs to be chosen
greater than 2 log n. In other words, for there to be a constant probability
that we can reconstruct a string of length n drawn uniformly at random
from n using a classical hybridization chip, the chip must contain at least
substrings. This compares to the information theoretical lower bound of
n log_3 σ = Ω(n) on the number of ternary queries needed to distinguish between
the σ^n elements of Σ^n.
A variety of dierent methods have been suggested to overcome these
negative results for the classical SBH chips. Using the assumption that universal
DNA bases can be synthesized Preparata et al. (Preparata et al., 1999;
Preparata and Upfal, 2000) give a scheme for which the size of the chip is
optimal i.e. O(n). Broude et al. (1994) suggest generating positional information
along with the hybridization information (PSBH). PSBH was analyzed
algorithmically by Hannenhalli et al. (Hannenhalli et al., 1996) which
show NP-hardness of the general problem and give an e-cient algorithm if
the positional information is only o by a constant. This model was further
analyzed by Ben-Dor et al. (Ben-Dor et al., 1999). Drmanac et al. (1989)
suggested sequencing large sequences by obtaining spectra of many overlapping
fragments. This model was analyzed algorithmically by Arratia et
al. (1996) giving bounds on the probability of unique reconstruction. Shamir
and Tsur (2001) recently improved on the analysis of Arratia et al. (1996)
and furthermore gave an algorithm for the case when false negative errors
occurred in the hybridization.
Sequencing by hybridization in rounds or interactive sequencing by hybridization
was rst considered by Margaritis and Skiena (1995). The
assumption here is that the sequencing queries can be done adaptively and
once the results of one hybridization round are known a new chip can be
constructed. In their original paper Margaritis and Skiena give a number
of upper and lower bounds on the number of rounds needed dependent on
the number of probes allowed in each round. Among their results was an
algorithm that reconstructs a sequence with high probability using O(log n)
chips each containing O(n) queries. The main result of this paper improves
on this result. A few other papers have been written on ISBH. Skiena and
Sundaram (1995) showed that if only a single query could be asked in each
round nrounds are necessary and ( 1)n rounds are su-cient for string
reconstruction. Kruglyak (Kruglyak, 1998) gave an algorithm with a worst
case performance guarantee which shows that O(log n) rounds are su-cient
queries are placed on a chip in each round.
The following theorem is the main result of this paper.
Theorem 1.1 With high probability a string s drawn uniformly at random
from n can be reconstructed by sequentially using seven hybridization chips
each containing O(n) substring queries.
Notice that this result is optimal for the number of queries in the information
theoretical sense, up to a constant multiple. Our algorithm proceeds
in the following manner. In its initial step we ask substring queries corresponding
to the classical SBH chip. We then construct the DeBruijn graph
in the way suggested by Pevzner. We then proceed to ask targeted queries
in order to unravel the string.
The main result of this paper is mathematical, although it may eventually
have some practical relevance. Sequencing chips similar to the classical
chips are already in production by Hyseq Inc., which holds several patents on
the procedure (Drmanac, Crkvenjakov, 1993). These chips have been used
for successfully for De-Novo sequencing (Drmanac et al., 1993) (sequenc-
ing when sequence is unknown). Given that many organisms have been
sequenced another problem of practical importance is resequencing by hybridization
(Drmanac et al., 1989; Pe'er and Shamir, 2000), in this problem
a template sequence is known and the goal of the sequencing is to determine
the specic mutational variants of the sequence. Machines for producing
oligonucleotide arrays using ink-jet printer technology have been pioneered
by Blanchard et al. (1996) and are currently being manufactured by Agilent
Technologies. This technology may prove to be particularly useful for interactive
sequencing by hybridization. For a review of dierent technologies
for DNA array manufacturing see (Blanchard, 1998; Schena, 1999). Other
relevant technologies include Aymetrix type arrays (Lockhart et al., 1996;
Fodor et al., 1991) and Southern Array Makers developed by Oxford Gene
Technologies (Southern, 1996). We note that technological constraints need
to be considered before a practical implementation of the method developed
in this paper. In particular, the more realistic case when false positive and
negative errors occur in the experiments needs to be considered.
The organization of the paper is as follows. In the next section we will give
overview of previous work. This will motivate our algorithm and we will give
a simplied version of it. In Section 3 we will give our complete algorithm
and verify its correctness. In Section 4 we will prove our complexity result
and in Section 5 we demonstrate our computational experience.
Figure 1: The DeBruijn-graph constructed if the substrings of s of length
3 are AGC, GCT, TGC, CTG, GCA and CAT and each one occurs once.
Motivation and basic algorithm
2.1 DeBruijn graphs
In this section we review the DeBruijn graph construction first considered
by Pevzner (1989). Let s be our unknown target string. Given the answer of
a ternary query for all strings r of length m, whether r occurs once in s,
more than once, or not at all, we can construct an associated edge-labeled
digraph, D^s_m, in the following manner. The vertex set of D^s_m consists of
all strings of length m-1 over Σ. There is an edge from x_1...x_{m-1} to
x_2...x_{m-1}c exactly when the string x_1...x_{m-1}c of length m occurs in s,
in which case the edge is labeled 1 if x_1...x_{m-1}c occurs once and 2 if it
occurs more than once in s. In what follows we will call this the DeBruijn
graph of s. Figure 1 shows the construction of a DeBruijn graph.
We will also label the nodes of D^s_m: a node x will be labeled 0 if it has no
in- or out-edges, it will be labeled 1 if it has at most one in-edge and at most
one out-edge, both labeled 1, and labeled 2 otherwise. Let λ(x) denote the
label of a node/edge x.
We note that there is a unique path in D^s_m for any substring x = x_1...x_k
of s with k ≥ m-1, namely the path starting at x_1...x_{m-1} and ending
at x_{k-m+2}...x_k that traverses all of the edges x_i...x_{i+m-1} for
i in {1, ..., k-m+1}. We will denote this path by P(x) and refer to it as the
path in D^s_m corresponding to x. In the special case where k is m-1 or m we
will refer to it as the node/edge corresponding to x.
Pevzner showed that s corresponds to an Eulerian path in this graph, where
we define an Eulerian path in this graph to be any walk that traverses
the edges with label 1 exactly once and the edges with label 2 at least
twice. From the graph in Figure 1 we can tell that the original string s is
AGCTGCAT.
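The construction is easy to state in code. The sketch below builds the labeled
graph directly from s for illustration (in an actual experiment only the spectrum,
i.e. the answers to the ternary queries, would be available); the representation
is ours, not the authors'.

from collections import Counter

def debruijn(s, m):
    counts = Counter(s[i:i + m] for i in range(len(s) - m + 1))
    edges = {}                       # (m-1)-mer -> list of (successor node, label)
    for mer, c in counts.items():
        u, v = mer[:-1], mer[1:]
        edges.setdefault(u, []).append((v, 1 if c == 1 else 2))
        edges.setdefault(v, [])      # make sure sink nodes appear as vertices
    return edges

# debruijn("AGCTGCAT", 3) gives the graph of Figure 1, whose unique Eulerian
# path spells s; debruijn("AGCGCTGCAT", 3) gives the ambiguous graph of Figure 3.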
Figure 2: Examples of the mapping P(x), when m = 3 (shown for x = GC, GCA and GCAT).
Figure 3: The DeBruijn-graph constructed if the substrings of s of length 3
are AGC, GCT, CGC, GCG, TGC, CTG, GCA and CAT and each one occurs once.
The DeBruijn graph may however have more than one Eulerian path. In
this case the construction of string s is ambiguous (see Figure 3).
2.2 Simplied algorithm
Our algorithm proceeds by constructing the DeBruijn graph for all substrings
of some xed length m. We then use information from that graph to construct
a set of substring queries that enable us to determine all substrings of s of
length number larger than m. We then iterate this
process noticing that the probability that D s
m is a path increases with m.
I.e. we will attempt to elongate the strings corresponding to the nodes in the
DeBruijn graph.
To motivate our algorithm let us look at where the ambiguities are in this
elongation process. Notice that if a node x in the DeBruijn graph has label
1, then each of the strings corresponding to the in- and out-edges occurs
only once in s. The elongation of the string corresponding to the in-edge
of x is hence unambiguous and can be determined by appending to it the
last character of the string corresponding to the out-edge. For example from
the graph
# TAGT we know that the string
CATAGT occurs in s. However if a node x has more than one in- or out-edge
we need to pair the in-edges with the out-edges.
Figure 4: A node with two in- and out-edges
From the graph in Figure 4 we can tell that two of the strings CATACA,
CATACG, GACACA, GACACG occur in s but not which ones. To determine
all substrings of length six we would ask substring queries for each of these
strings. The central question in the remainder of this paper is to determine
conditions such that the number of queries generated in this way is not too
large.
If each of the edges in this graph was labeled 1 they would have a unique
elongation. To determine the elongation of the string CATAC by k characters
it would be su-cient to determine whether CATAC elongates to CATACA
or CATACG and then determine the elongation of ATACA or ATACG by
characters.
However if CATAC elongates to CATACG and ATACG occurs two or
more times in s then ATACG will have two elongations of length k 1 and
we cannot determine which is the elongation of CATAC. We see that to
determine the elongation of a particular edge e by k characters it is su-cient
to determine all paths from e that either (1) have length k or (2) are shorter
and end at an edge of multiplicity one.
This motivates the denition of a cluster, a collection of nodes and edges
that all have label 2 . Only when the Eulerian path passes through those
nodes and edges that have label 2 is the determination of the string am-
biguous. The set of clusters in the graph is the set of ambiguous parts of the
graph.
Definition 2.1 The cluster containing x, Cl(x), is the maximal connected
subgraph of D^s_m containing x such that all of its nodes and edges have label 2.
Our task is to determine s which can be thought of as determining an
Eulerian path from the start node of D s
m to the end node of D s
. Notice that
any internal node or edge labeled 1 has a unique occurrence in s, and will
therefore have a unique elongation. Assuming that we know which nodes
are the start node and the end, node we can reconstruct s by determining a
path from the start node to an edge labeled 1. We can then determine the
continuation of the Eulerian path from that edge by its unique elongation to
either another edge of label 1 or to the end node.
This motivates the following algorithm for reconstructing s. Let c be a
positive constant and Q be the set of queries, to be placed on the DNA chip.
Algorithm 1
Step 1. Classical SBH chip.
ne
Ask the queries in Q and construct D s
.
Step 2. Resolve ambiguities.
Let C s
While C s
choose a node x from C s
Let C Cl(x). C s
Let Q be the set of strings
1g.
or
Step 3. Reconstruct s from the DeBruijn graph
and the answers to the queries Q.
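A sketch of the query generation behind Step 2 is given below, using the graph
representation from the earlier debruijn sketch. The bookkeeping (and the length
cap k used by the modified algorithm of Section 3) is simplified here and should
not be read as the authors' implementation.

def cluster_queries(edges, in_edge_string, k):
    # in_edge_string: an m-character string labeling an edge entering a cluster;
    # queries extend it along label-2 edges, stopping at a label-1 edge or
    # after k extra characters.
    m1 = len(in_edge_string) - 1              # node length, m - 1
    queries = []
    def extend(prefix, budget):
        outs = edges.get(prefix[-m1:], [])
        if not outs or budget == 0:
            queries.append(prefix)
            return
        for succ, label in outs:
            q = prefix + succ[-1]             # append the elongating character
            if label == 1:
                queries.append(q)             # unique edge: its elongation is unambiguous
            else:
                extend(q, budget - 1)
    extend(in_edge_string, k)
    return queries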
2.3 Potential pitfalls
Let us look at the complications we may face in the analysis of the algorithm.
If there is a cycle in the DeBruijn graph we cannot determine whether a given
string passes through the cycle or past it and we will add queries for both
possible strings. This may cause us to ask a large number of queries for each
such cycle.
As an example of this in Figure 5 we have a loop in AAA and the edge
from AAA to AAC has label 2 . We cannot determine from the graph
Figure 5: Example of a cluster along with its in- and out-edges, the cluster
being those nodes and edges having label 2.
which two of the strings TAAACA, TAAACT, TAAAACA, TAAAACT,
CAAACA, CAAACT, CAAAACA or CAAAACT occur in s. The algorithm
will add the queries TAAAA, CAAAA, TAAACA, TAAACT, CAAACA,
TAAACA, AAAACA, AAAACT. As the edge AAAA occurs only once in
the graph we can rst determine whether TAAAA or CAAAA occurs in s
and then determine the occurrence of TAAAACA, TAAAACT, CAAAACA
and TAAAACT from which of the strings AAAACA or AAAACT occurs in
s.
If the cluster contains no cycles the number of queries generated by this
algorithm will grow as the number of in-edges times the number of out-edges
of the cluster. If the cluster contains cycles we may not be able to determine
how often a given path traverses the cycle. If it contains multiple cycles the
same holds true for each one of them, the number of queries generated by
the algorithm may therefore grow exponentially with the number of cycles
in the cluster. Notice that the occurrence in s of a string corresponding to
a node in the graph is highly correlated with the occurrence of the strings
corresponding to its neighbors. This interdependence makes the algorithm
di-cult to analyze. Complex clusters that require a large number of queries
have a reasonable probability of occurring and the average number of queries
generated by the algorithm may in fact be large.
3 Modified Algorithm
We modify the previous algorithm so that we only make a limited number
of queries initiating at any given node in the graph. Using two rounds of
queries we may hence not be able to determine s, but we will show that with
high probability seven rounds will be su-cient. We will use the following
modication of Step 2. Notice also that this modied version doesn't assume
prior knowledge of the start and end nodes as we will add queries starting at
any node in the cluster and terminating at any node. Let k
times
Let Q be the set of strings
Ask the queries in Q.
Construct D s
.
3.1 Correctness of algorithm
Let us clarify the statement of Theorem 1.1.
Denition 3.1 We say that an event occurs with high probability (whp) if
it occurs with probability 1 - o(1) as n → ∞.
Lemma 3.1 The number of substring queries generated by an algorithm satisfying
the conditions of Theorem 1.1 is optimal in the information theoretical
sense, up to a constant multiple.
Proof: There are σ^n strings of length n, and for there to be high probability
that we can sequence all the strings we must be able to distinguish between
all σ^n of them. There are 3^m possible answers to m ternary queries, so we need
3^m ≥ σ^n, i.e. m ≥ n log_3 σ = Ω(n). Our algorithm generates O(n)
queries.
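Written out, the counting step of the proof is:

  3^{m} \ge \sigma^{n} \quad\Longrightarrow\quad m \ge \log_{3}(\sigma^{n}) = n \log_{3}\sigma = \Omega(n).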
We now verify that the algorithm is correct, i.e. whp it reconstructs s.
Lemma 3.2
(a) A single iteration of Step 2 0 on D s
will allow us to construct D s
(b) After applying Step 2 0 we can whp reconstruct s.
Proof: (a) For each substring x of length m we add a query for all possible
elongations either of length k 0 or from x to a string y that has multiplicity
one, in which case the elongation of x can be determined from the unique
elongation of y. (b) follows from the result in Arratia et al. (1996) and Dyer
et al. (1994) that whp the DeBruijn graph D s
is a path, for a random s ∈ Σ^n.
4 Complexity Analysis
We now proceed to estimate the expected number of queries in each iteration.
The main goal of this section is to prove the following lemma.
Lemma 4.1 The expected number of queries in Q generated by a single iteration
of Step 2 0 is O(n).
We will start by dening normal nodes in Section 4.1. We will then show
that the queries generated originating at a normal node form a tree. In
Section 4.2, we will bound the number of such trees. In Section 4.3, we will
consider the relationship between trees and substrings of s. In Section 4.4,
we summarize and upper bound the expected number of queries generated
originating at normal strings. In Section 4.5 we show that it is rare that a
node is not normal, and hence prove Lemma 4.1. Finally, in Section 4.6, we
show concentration of the expectation of the number of queries.
4.1 Normal substrings
Denition 4.1 For every node x 2 D s
m we dene a subgraph L x of D s
m . Its
edge and vertex sets are the sets of edges and vertices reachable from x by
a path x 1
1g.
We say that x and L x are normal if L x is a tree that does not contain end
and all substrings y of s that P maximally maps 1 to L x occur disjointly in s.
Notice that the denition of normal refers to substrings of s, i.e. a node
will be normal depending on the string s. In the example in Figure 5 L TAA
is the graph shown except for the node CAA and edge CAAA. L TAA is not
normal since it contains a cycle. Figure 6 shows LAAC .
4.2 Counting the number of trees
We will now bound the expected number of queries in Q whose initial string
is x if L x is normal. Notice that in this case the algorithm will in Step 2 0
generate one query with initial string x for every node in L x , other than x
1 I.e. the collection fy s
substring relation.
AACA# ACA
ACT
Figure
example in Figure 5. We consider LAAC to be normal if s
is of the form *AACA*AACT*, where * denotes any string and not normal
if s is of the form *AACAACT* since here the two strings overlap.
and its children. Also notice (by Denition 4.1) when L x is normal it does
not contain a cycle and must therefore be a tree.
Denition 4.2 We say that an -ary tree (each node has at most children)
is a (b; i; l)-tree if it has b branching nodes (nodes with more than one child),
single child nodes and l leaves. The children of a node will be considered
to be ordered and we will make a distinction between two children of a node
based on their ordering.
We can now count the expected number of queries whose initial string is
x by counting the number of dierent (b; i; l)-trees and then in Section 4.3
estimating the probability that L x is such a tree.
Lemma 4.2 The number of distinct (b; i; l)-trees is at
Proof: The number of trees with b branching nodes and l leaves is at most( 1)k+1
since it is less than the number of -ary trees
of size k (see (Knuth, 1968), Ex. 2.3.4.4.11). We now insert the internal
non-branching nodes into the tree by subdividing one of the existing edges
in the tree or adding a new single child root node. The choice of where to
put the internal nodes can be done in i+k 1
ways and the out-edges of the
internal nodes can be chosen in i ways.
4.3 From trees to strings
We will now estimate the probability that L x takes the form of a tree T ,
and multiply by the number of queries generated if L x takes this form. To
avoid signicant over-counting of the number of queries with initial string
x, we count only the number of queries terminating at leaf nodes of T ; The
queries with initial string x that terminate at internal nodes of T can then
be counted when estimating the number of queries generated when L x takes
the form of one of the subtrees of T .
We will now dene a partial ordering of trees. This partial ordering
ensures that we count all the queries to the internal nodes as well.
Denition 4.3 A subtree T 0 of a labeled tree T , is an incubating subtree if,
for each node v in T , either all or none of the children of v occur in T 0 .
Note that by this denition all children of a given node must be removed
at the same time. Rephrasing, T descends from T 0 through a series of incubation
operations where all children of any given node appear in the same
operation. For the purpose of our proof, the important thing to note here is
that each node in T is a leaf node of some incubating subtree of T .
The following observation is immediate from Denition 4.1 of normal and
Denition 4.3.
Lemma 4.3 Given a collection C of nodes of a normal L x such that for all
y is not a predecessor z then there exist disjoint substrings
of s corresponding to each of the nodes in C. In particular the collection of
leaves of any (incubating) subtree of L x is such a collection.
We will now relate the trees L x to substrings of s. The following definition
gives a minimal requirement on the occurrence in s of the strings
corresponding to the nodes of L x .
Denition 4.4 We say that a collection C of strings is a string decomposition
of a tree T if each string corresponding to an edge occurs as a substring
in C and the strings corresponding to the edges that are not incident to the
root or the leaves occur twice as substrings in C.
A string requirement labeling of a tree T is dened by labeling
if e is a root edge or an edge between a branching node and a leaf, otherwise
2.
e l(e) and
e out of b l(e):
We now show an upper bound on the probability that L x takes form T in
four steps. First we give an algorithm that returns a particular type of string
decomposition. Then we bound the number of possible string decompositions
generated by the algorithm for any xed tree T . We then go on to bound
the probability that s contains disjointly a given string decomposition of s.
Finally we show that when L x is normal and takes form T or has T as an
incubating subtree then s contains disjointly substrings that form one of the
string decompositions of T that are generated by the algorithm.
Let us now x a node x in D s
m and x T to be some given (b;
for some xed integers b; i and l. Furthermore, let us dene mapping from T
to a subgraph of the complete DeBruijn graph. We will name the root node
of T x and let us name all the nodes of the graph T , as follows. Now notice
that in our denition of (b; i; l)-trees that each node has at most children
and we make a distinction between two based on a predened ordering of the
children, we can therefore talk about the k th child of a node, where k is some
number between 1 and , possibly greater than the number of x's children. If
z is the k th child of a node named y 1 and the
edge between them y 1 is the k th letter of the alphabet
.
The following algorithm generates a string decomposition of T .
Algorithm 2
Label T using the string requirement labeling of Def. 4.4
Preorder the nodes of T.
Initialize C as an empty collection.
While 9 e such that l(e) > 0
Let v be the lowest ordered node of T with a
positively labeled out-edge, e.
Choose y such that
While v is non-leaf
Choose e as one of the out-edges of v.
Append to y the character corresponding to e.
Add y to the collection C.
Return C.
We will rst upper bound the number of string decompositions generated
by this algorithm.
Lemma 4.4 The number of possible string decompositions generated by Algorithm
2 is bounded above by R T 2i+1 , where i is the number of internal
non-branching nodes in T and R T is dened in Denition 4.4.
Proof: Let us count the number of choices made by the algorithm. Let v
be some branching node and k be the sum of the labels of v's out-edges.
Algorithm 2 will arrive at most k times at v. Whenever v is chosen in
the outer loop one of v's out-edge labels gets decreased by one. Since the
in-edge of v has label at most two we will arrive at most twice at v in the
inner loop. The rst time we arrive at v in the inner loop all out-edges of v
have positive labels and one of them will be decreased. The second time we
arrive at v in the inner loop we may choose an edge we have chosen before.
The last time we arrive at v in the outer loop we have no choice of which
edge to traverse. All other times we have some choice of edge to traverse,
the number of choices clearly being less than . When we arrive at a node
that has a single child we have no choice as to which edge to traverse next.
The number of string decompositions is therefore at most
where B is the set of branching nodes. The inequality can be veried by
noting that the labels of out-edges of nodes that have one child are always
two except in the case when the root has only one child.
We will now upper bound the probability of any string decomposition.
Lemma 4.5 If C is a collection of strings generated by Algorithm 2 then the
probability that the strings of C occur disjointly as substrings of s is upper
bounded by l R T , where
Proof: Let D be our set of strings. By Lemma 4.3 jDj l. When originally
chosen each of the strings in D has length m 1. Furthermore we will append
to them at least R T extra characters. Note that the probability of a string
of length j occurring in s is at most n
. We then have that the probability
of all of the strings in D occurring in s is bounded by:
x2D
jxj l x2D m 1jxj l R T
We now relate L x to the string s.
Lemma 4.6 If T is an incubating subtree of a normal L x then s contains
disjointly one of the collections generated by Algorithm 2.
Proof: Based on the actual string s we will show how the Algorithm 2 can
be made to construct a collection of disjoint substring s. By the denition
of normal, if T is an incubating subtree of L x and L x is normal and e is an
edge of T labeled k by Algorithm 2 then s must contain at least k disjoint
substrings corresponding to e. In the outer loop of Algorithm 2 we can hence
always choose y to be some substring of s disjoint from those previously
chosen. Now let s correspond to y. In the inner loop we make the
choice of e based on the next characters (s j+1 and onwards) of s, i.e. if s j+1
is k then e will be chosen as the k-th child of v. Since L x is normal the end
string will not occur in L x and the choice of the child is hence always well
dened. Since T is an incubating subtree of L x this choice will never return
an edge not in T and will terminate at a leaf node of T .
4.4 Number of normal queries
Combining the results of Lemmas 4.2, 4.4 and 4.5 gives the following lemma.
Lemma 4.7 The probability that L x contains a (b; i; l)-tree as an incubating
subtree is at
l i+1
l and
Lemma 4.8 Let that m is large enough so that
e
< 1. Then the expected size of the set of queries QN added to Q
corresponding to normal strings x is bounded by
ne
Proof: We can estimate the expected number of queries by multiplying the
number of nodes x, in D s
m with the probability that L x has a (b;
as an incubating subtree and with the number of leaves l and then summing
over all (b;
notice that l b + 1. To get the expected number
of queries in Q we multiply the number of nodes in D s
m with the sum over
all possible k's and i's of the number of queries added for each tree (b
multiplied with the probability of the tree over all possible k's and i's and
the number of ways to choose the initial node x.
E(jQN
1X
ne
The second inequality follows from a series of algebraic manipulations and
noting k
(e) k . The second equality is a well known identity for geometric
series. The rst equality is less well known but can be observed by
dierentiating the identity for geometric series k times (Slomson, 1991).
We have now shown that only O(n) queries are generated for nodes x in
the graph where L x is normal.
4.5 Remaining Cases
We will now show that it is unlikely that L x is not normal and then we can
use the fact that the maximum number of queries generated by the algorithm
with initial string corresponding to any given node in the graph is bounded
by
Let us introduce some terminology.
Denition 4.5 1. We say that a string,
2. The core of a string that corresponds to a node (node string)
is the substring of x that occurs as a substring in all node
strings of L x , i.e. x k 0
The cases when L x is not normal are when end occurs in L x , L x has a
cycle or the string decomposition of L x consists of strings that are non-disjoint
in s.
End will occur in L x for at most
x. The number
of queries containing the string corresponding to end as a substring is hence
bounded by O(n 2
As the depth of L x is bounded by k contains a cycle the
period of the cycle must also be bounded by k 0 + 1. The core of x must
therefore have period at most k occurs in all the node strings of
L x . If the core of x has period less than k call x a low-periodic
core string. To simplify presentation we will also consider strings which are
periodic with period less than d 1log ne to be low-periodic core strings.
All extensions of a node string x will appear disjointly if the core of x does
not occur twice in s, starting at positions i and j, where ji jj m+k 0 + 1.
Here m 1 is the length of the node strings in the current iteration of Step
the condition may therefore be rewritten as ji jj O(log n). If this
happens we say the core of x is self-repetitive.
To count the expected number of queries in Q we will consider four cases.
First we will count the expected number of queries stemming from strings
with a low-periodic core. Then the expected number of elongations of strings
with a self-repetitive core given that the core is not low-periodic. The remainder
of the strings are normal or have the terminal string of s occurring
in L x .
We will now count the number of queries in Q that originate at low-
periodic core strings. The number of node strings with a core of period k is
determined by the degrees of freedom outside the core (k 0 +1) plus the degrees
of freedom inside the core ( k) and is therefore k 0 +k+1 . The number of
extensions of low-periodic core node string is hence at most
d 1log ne
d 1log ne+k+1 O(n 1
Now we look at the number of query strings originating at node strings
whose core is self-repetitive but not low-periodic. The expected number of
cores that are self-repetitive and not low-periodic is at most
(number of places i for the rst core to start)
(number of places for second core to start)
rst core)
which is bounded above by n O(log n) d 1log ne
log n). The
expected number of strings with a self-repetitive core is hence bounded by
O(n 7
log n). The expected number of queries added to Q in this case is
hence bounded by O(n 9
Using Lemma 4.8 have now shown Lemma 4.1. The expected number of
queries generated in Step 2 0 is bounded by
ne
4.6 Concentration of Expectation
We now use Azuma's inequality (see (Alon and Spencer, 1992)) to show that
with high probability Step 2 0 has only a linear number of queries.
Lemma 4.9 With high probability no more than O(n) queries are added to
Q in each iteration of Step 2 0 .
Proof:
We can view s to be a sequence of n independent random trials, one for
each of its characters. We want to bound the number of queries that may
be added to or removed from Q if we change one character. The changing
of one character may eect at most (m ne
5 log n) strings, where m denotes the length of the node strings in the
current iteration of Step 2 0 . This can be seen by rst choosing the position
of the character change in the query string and noting that the length of
each query string is at most m c be the character that
was changed in s and let r m+2 r be the characters immediately
preceding and following c with q be a query string that is
aected by the character change and the length of q. The path in
m that q corresponds to must pass through one of the nodes corresponding
to substrings of length m 1 containing c. In particular if the character
change occurred in the jth position of q then if j m 1, q has characters
positions the characters in
position can then be any character. If
in its rst positions
and the characters can be any character. In both cases there are
at most
strings that can be aected.
By Azuma's equality we have
5 log n)) 2
Putting completes the proof.
Computational Results
The choice of k 0 as d 1log ne in the previous section was done for ease of
presentation and may be chosen slightly larger to decrease the number of
rounds. To test the practicality of our method we implemented a variant of
the algorithm presented that is not as stringent as the algorithm analyzed
closer to the original algorithm with Step 2. In this variant,
we limited the length of the queries to the largest l such the total number of
queries in each round is limited to O(n), instead of limiting their length to a
xed k 0 . In other words Step 2 is modied to:
m is a line.
Let Q be the set of strings
v) l is chosen as large as possible with
Ask the queries in Q.
.
Table
1 shows the number of SBH chips used when the base pairs are
generated randomly. The number of query rounds is signicantly lower than
the number of rounds guaranteed by the worst-case performance guarantee of
the algorithm. We see that if we initially use a classical SBH chip containing
all oligonucleotides of size dlog 4 ne 1 we can nish the sequencing of the
DNA in less than 4n extra queries, using a single chip, for all of our examples.
Table
2 shows the number of SBH chips used to sequence arbitrarily
chosen virus sequences. For all but one of our examples our algorithm will
sequence their DNA by using the classical chip containing all strings of length
log then an extra round of at most 4n queries.
6
Acknowledgements
The authors would like to than R.Ravi, Magnus M. Halldorsson, Dan iel F.
Gudbjartsson and the anonymous referees for reviewing this paper. Alan
M. Frieze was supported in part by NSF grant CCR9818411. Bjarni V.
Halldorsson was supported by a Merck Computational Biology and Chemistry
Program Graduate Fellowship from the Merck Company Foundation.
--R
The probabilistic method.
Poisson process approximation for sequence repeats
A novel method for nucleic acid sequence determination.
On the complexity of positional sequencing by hybridization
and Paterson
Synthetic DNA arrays.
Oligonucleotide arrays: New concepts and possibilities.
DNA sequnence determination by hybridization: A strategy for e-cien large scale sequencing
The probability of unique solutions of sequencing by hybridization.
Positional sequencing by hybridization.
The Art of Computer Programming: Fundamental Algo- rithms
Multistage sequencing by hybridization.
Expression monitoring by hybridization to high-density oligonucleotide arrays
Spectrum alignment: E-cient resequencing by hybridization
Towards DNA-sequencing by hybridiza- tion
Improved chips for sequencing by hybridization.
Journal of Biomolecular Structure and Dynamics
Optimal reconstruction of a sequence from its probes.
DNA Microarrays.
Large scale sequencing by hybridization
Reconstructing strings from substrings.
Journal of Computational Biology
DNA chips: analyzing sequence by hybridization to oligonucleotides on a large scale.
--TR
The art of computer programming, volume 1 (3rd ed.)
Sequencing-by-hybridization at the information-theory bound
Towards DNA Sequencing Chips
Reconstructing strings from substrings in rounds
--CTR
Steven Skiena , Sagi Snir, Restricting SBH ambiguity via restriction enzymes, Discrete Applied Mathematics, v.155 n.6-7, p.857-867, April, 2007
Eran Halperin , Shay Halperin , Tzvika Hartman , Ron Shamir, Handling long targets and errors in sequencing by hybridization, Proceedings of the sixth annual international conference on Computational biology, p.176-185, April 18-21, 2002, Washington, DC, USA | sequencing by hybridization;probabilistic analysis;DNA sequencing |
369220 | Fast and simple character classes and bounded gaps pattern matching, with application to protein searching. | The problem of fast searching of a pattern that contains Classes of characters and Bounded size Gaps (CBG) in a text has a wide range of applications, among which a very important one is protein pattern matching (for instance, one PROSITE protein site is associated with the CBG [RK] x(2, where the brackets match any of the letters inside, and x(2, 3) a gap of length between 2 and 3). Currently, the only way to search a CBG in a text is to convert it into a full regular expression (RE). However, a RE is more sophisticated than a CBG, and searching it with a RE pattern matching algorithm complicates the search and makes it slow. This is the reason why we design in this article two new practical CBG matching algorithms that are much simpler and faster than all the RE search techniques. The first one looks exactly once at each text character. The second one does not need to consider all the text characters and hence it is usually faster than the first one, but in bad cases may have to read the same text character more than once. We then propose a criterion based on the form of the CBG to choose a-priori the fastest between both. We performed many practical experiments using the PROSITE database, and all them show that our algorithms are the fastest in virtually all cases. | Introduction
This paper deals with the problem of fast searching of patterns that contain Classes of characters
and Bounded size Gaps (CBG) in texts. This problem occurs in various elds, like information
retrieval, data mining and computational biology. We are particularly interested in the latter one.
In computational biology, this problem has many applications, among which the most important
is protein matching. These last few years, huge protein site pattern databases have been developed,
like PROSITE [7, 11]. These databases are collections of protein site descriptions. For each protein
site, the database contains diverse information, notably the pattern. This is an expression formed
with classes of characters and bounded size gaps on the amino acid alphabet (of size 20). This
pattern is used to search a possible occurrence of this protein in a longer one. For example, the
protein site number PS00007 has as its pattern the expression [RK] x(2;
where the brackets mean that the position can match any of the letters inside, and x(2; means
a gap of length between 2 and 3.
Dept. of Computer Science, University of Chile. Blanco Encalada 2120, Santiago, Chile.
gnavarro@dcc.uchile.cl. Work developed while the author was at postdoctoral stay at the Institut Gaspard Monge,
Univ. de Marne-la-Vallee, France, partially supported by Fundacion Andes and ECOS/Conicyt.
y Equipe genome, cellule et informatique, Universite de Versailles, 45 avenue des Etats-Unis, 78035 Versailles Cedex,
E-mail: raffinot@monge.univ-mlv.fr. The work was done while the author was at the Institut Gaspard-Monge,
Cite Descartes, Champs-sur-Marne, 77454 Marne-la-Vallee Cedex 2, France.
Currently, these patterns are considered as full regular expressions (REs) over a xed alphabet
, i.e generalized patterns composed of (i) basic characters of the alphabet (adding the empty
word " and also a special symbol x that can match all the letters of ), (ii) concatenation (denoted
closure (). This latter operation L on a set of words L means
that we accept all the words made by a concatenation of words of L. For instance, our previous
pattern can be considered as the regular expression (RjK) x x (xj") (DjE) x x (xj") Y .
We note jREj the length of an RE, that is the number of symbols in it. The search is done with
the classical algorithms for RE searching, that are however quite complicated. The RE needs to be
converted into an automaton and then searched in the text. It can be converted into a deterministic
automaton (DFA) in worst case time O(2 jREj ), and then the search is linear in the size n of the
text, giving a total complexity of O(2 jREj + n). It can also be converted into a nondeterministic
automaton (NFA) in linear time O(jREj) and then searched in the text in O(n jREj) time, giving
a total of O(n jREj) time. We give a review of these methods in Section 3. The majority of the
matching softwares use these techniques [13, 22].
None of the presented techniques are fully adequate for CBGs. First, the algorithms are intrin-
sequely complicated to understand and to implement. Second, all the techniques perform poorly
for a certain type of REs. The \di-cult" REs are in general those whose DFAs are very large, a
very common case when translating CBGs to REs. Third, especially with regard to the sizes of the
DFAs, the simplicity of CBGs is not translated into their corresponding REs. At the very least,
resorting to REs implies solving a simple problem by converting it into a more complicated one.
Indeed, the experimental time results when applied to our CBG expressions are far from reasonable
in regard of the simplicity of CBGs and compared to the search of expressions that just contain
classes of characters [18].
This is the motivation of this paper. We present two new simple algorithms to search CBGs in
a text, that are also experimentally much faster than all the previous ones. These algorithms make
plenty use of \bit-parallelism", that consists in using the intrinsic parallelism of the bit manipulations
inside computer words to perform many operations in parallel. Competitive algorithms have
been obtained using bit parallelism for exact string matching [2, 26], approximate string matching
[2, 26, 27, 3, 17], and REs matching [15, 25, 20]. Although these algorithms generally work well
only on patterns of moderate length, they are simpler, more
exible (e.g. they can easily handle
classes of characters), and have very low memory requirements.
We performed two dierent types of time experiments, comparing our algorithms against the
fastest known for RE searching algorithms. We use as CBGs the patterns of the PROSITE database.
We rst compared them as \pure pattern matching", i.e. searching the CBGs in a compilation of
6 megabytes of protein sequences (from the TIGR Microbial database). We then compared them
as \library matching", that is search a large set of PROSITE patterns in a protein sequence of 300
amino acids. Our algorithms are by far the fastest in both cases. Moreover, in the second case,
the search time improvements are dramatic, as our algorithms are about 100 times faster than the
best RE matching algorithms.
The two algorithms we present are patented by the french Centre National de la Recherche
Scientique (CNRS) 1 .
We use the following denitions throughout the paper. is the alphabet, a word on is a
nite sequence of characters of . means the set of all the words build on . A word x 2 is
a factor (or substring) of p 2 if p can be written . A factor x of p is called a
su-x of p is a prex of p is
1 The patent number 00 11093 has been deposed by the CNRS the 08/30/00. For any information about it please
contact Sbastien CHIRIE (sebastien.chirie@st.fr), FIST, 135 Boulevard Saint Michel, 75005 Paris, FRANCE.
We note with brackets a subset of elements of : [ART ] means the subset fA; R; Tg (a single
letter can be expressed in this way too). We add the special symbol x to denote a subset that
corresponds to the whole alphabet. We also add a symbol x(a; b); a < b, for a bounded size gap of
minimal length a and maximal b. A CBG on is formally a nite sequence of symbols that can
be (i) brackets, (ii) x and (iii) bounded size gaps x(a; b). We dene m as the total number of such
symbols in a CBG.
We use the notation for the text of n characters of in which we are searching
the CBGs. A CBG matches T at position j if there is an alignment of t with the CBG,
considering that (i) a bracket matches with any text letter that appears inside brackets; (ii) an x
matches any text letter; and (iii) a bounded gap x(a; b) matches at minimum a and at maximum
b arbitrary characters of T . We denote by ' the minimum size of a possible alignment and L
the size of a maximum one. For example, [RK] x(2; matches the text
at position 11 by 3 dierent alignments (see Figure 1), l = 7 and
Y
Y
R 2Y
Figure
1: Three dierent alignments of the CBG [RK] x(2; over the text
AHLRKDEDATY at the same ending position.
Searching a CBG in a text consists in nding all the positions j of
T in which there is an alignment of the CBG with a su-x of
This paper is organized as follows. We begin in Section 2 by summarizing the two main bit-parallel
approaches that lead to fast e-cient matching algorithms for simple strings but also for
patterns that contain classes of characters. In Section 3, we explain in detail what are the approaches
to search full REs. We then present in Section 4 our new algorithm (which we call a
\forward algorithm"), that reads all the characters of the text exactly once. It is based on a new
automaton representation and simulation. We present in Section 5 another algorithm (which we
call a \backward algorithm" despite that it processes the text basically left to right), that allows
us to skip some characters of the text, being generally faster. However, it can not been used for all
types of CBGs, and it is sometimes slower than the forward one. Consequently, we give in the next
Section 6 a good experimental criterion that enables us to choose a-priori the fastest, depending on
the form of the CBG. Section 7 is devoted to the experimental results for both algorithms compared
to the fastest RE searching algorithms.
2 Bit-parallelism for simple pattern matching
In [2], a new approach to text searching was proposed. It is based on bit-parallelism [1]. This
technique consists in taking advantage of the intrinsic parallelism of the bit operations inside a
computer word. By using cleverly this fact, the number of operations that an algorithm performs
can be cut down by a factor of at most w, where w is the number of bits in the computer word.
Since in current architectures w is 32 or 64, the speedup is very signicative in practice.
Figure
2 shows a non-deterministic automaton that searches a pattern in a text. Classical
pattern matching algorithms, such as KMP [14], convert this automaton to deterministic form
and achieve O(n) worst case search time. The Shift-Or algorithm [2], on the other hand, uses
bit-parallelism to simulate the automaton in its non-deterministic form. It achieves O(mn=w)
worst-case time, i.e. an optimal speedup over a classical O(mn) simulation. For m w, Shift-Or
is twice as fast as KMP because of better use of the computer registers. Moreover, it is easily
extended to handle classes of characters.
We use some notation to describe bit-parallel algorithms. We use exponentiation to denote bit
repetition, e.g. We denote as the bits of a mask of length ', which is stored
somewhere inside the computer word of length w. We use C-like syntax for operations on the bits
of computer words, i.e. \j" is the bitwise-or, \&" is the bitwise-and, \" complements all the
bits, and \<<" moves the bits to the left and enters zeros from the right, e.g. b ' b
We can also perform arithmetic operations on the bits, such as addition and
subtraction, which operate the bits as if they formed a number, for instance
We explain now the basic algorithm and then a later improvement over it.
b a a b b a a
Figure
2: A nondeterministic automaton to search the pattern in a text.
2.1 Forward scanning
We present now the Shift-And algorithm, which is an easier-to-explain (though a little less e-cient)
variant of Shift-Or. Given a pattern the
algorithm builds rst a table B which for each character stores a bit mask . The mask
in B[c] has the i-th bit set if and only if c. The state of the search is kept in a machine
word matches the end of the text read up to
now (another way to see it is to consider that d i tells whether the state numbered i in Figure 2 is
active). Therefore, we report a match whenever dm is set.
We set originally, and for each new text character t j , we update D using the formula
The formula is correct because the i-th bit is set if and only if the (i 1)-th bit was set for
the previous text character and the new text character matches the pattern at position i. In other
words, Again, it is
possible to relate this formula to the movement that occurs in the nondeterministic automaton for
each new text character: each state gets the value of the previous state, but this happens only if
the text character matches the corresponding arrow. Finally, the \j after the shift allows a
match to begin at the current text position (this operation is saved in the Shift-Or, where all the
bits are complemented). This corresponds to the self-loop at the initial state of the automaton.
The cost of this algorithm is O(n). For patterns longer than the computer word (i.e. m > w),
the algorithm uses dm=we computer words for the simulation (not all them are active all the time),
with a worst-case cost of O(mn=w) and still an average case cost of O(n).
2.2 Classes of characters and extended patterns
The Shift-Or algorithm is not only very simple, but it also has some further advantages. The most
immediate one is that it is very easy to extend to handle classes of characters, where each pattern
position may not only match a single character but a set of characters. If C i is the set of characters
that match the position i in the pattern, we set the i-th bit of B[c] for all c 2 C
is necessary to the algorithm. In [2] they show also how to allow a limited number k of mismatches
in the occurrences, at O(nm log(k)=w) cost.
This paradigm was later enhanced to support extended patterns [26], which allow wild cards,
regular expressions, approximate search with nonuniform costs, and combinations. Further development
of the bit-parallelism approach for approximate string matching lead to some of the fastest
algorithms for short patterns [3, 17]. In most cases, the key idea was to simulate a nondeterministic
nite automaton.
Bit-parallelism has became a general way to simulate simple nondeterministic automata instead
of converting them to deterministic. This is how we use it in this paper, for the new type of extended
patterns we are focusing on.
2.3 Backward scanning
The main disadvantage of Shift-Or is its inability to skip characters, which makes it slower than
the algorithms of the Boyer-Moore [5] or the BDM [10, 9] families. We describe in this section the
BNDM pattern matching algorithm [18]. This algorithm, a combination of Shift-Or and BDM, has
all the advantages of the bit-parallel forward scan algorithm, and in addition it is able to skip some
text characters.
BNDM is based on a su-x automaton. A su-x automaton on a pattern is an
automaton that recognizes all the su-xes of P . The nondeterministic version of this automaton
has a very regular structure and is shown in Figure 3. In the original algorithm BDM [10, 9], this
automaton is made deterministic. BNDM, instead, simulates the automaton using bit-parallelism.
Just as for Shift-And, we keep the state of the search using m bits of a computer word
b a a b b a a
Figure
3: A nondeterministic su-x automaton for the pattern lines represent
"-transitions (i.e. they occur without consuming any input).
A very important fact is that this automaton can not only be used to recognize the su-xes of
P , but also factors of P . Note that there is a path labeled by x from the initial state if and only
if x is a factor of P . That is, the nondeterministic automaton will not run out of active states as
long as it has read a factor of P .
The su-x automaton is used to design a simple pattern matching algorithm. This algorithm is
time in the worst case, but optimal on average (O(n log m=m) time). Other more complex
variations such as TurboBDM [10] and MultiBDM [9, 21] achieve linear time in the worst case.
To search a pattern in a text the su-x automaton of P
(i.e the pattern read backwards) is built. A window of length m is slid along the
text, from left to right. The algorithm searches backward inside the window for a factor of the
pattern P using the su-x automaton, i.e. the su-x automaton of the reverse pattern is fed with
the characters in the text window read backward. This backward search ends in two possible forms:
1. We fail to recognize a factor, i.e we reach a window letter that makes the automaton run
out of active states. This means that the su-x of the window we have read is not anymore a
factor of P . Figure 4 illustrates this case. We then shift the window to the right, its starting
position corresponding to the position following the letter (we cannot miss an occurrence
because in that case the su-x automaton would have found a factor of it in the window).
New search
Window
Search for a factor with the DAWG
Fail to recognize a factor at .
New window
Secure shift
Figure
4: Basic search with the su-x automaton.
2. We reach the beginning of the window, therefore recognizing the pattern P since the length-m
window is a factor of P (indeed, it is equal to P ). We report the occurrence, and shift the
window by 1.
The bit-parallel simulation works as follows. Each time we position the window in the text we
initialize scan the window backward. For each new text character read in the window
we update D. If we run out of 1's in D then there cannot be a match and we suspend the scanning
and shift the window. If we can perform m iterations then we report the match.
We use a mask B which for each character c stores a bit mask. This mask sets the bits
corresponding to the positions where the reversed pattern has the character c (just as in the Shift-
And algorithm). The formula to update D is
BNDM is not only faster than Shift-Or and BDM (for 5 m 100 or so), but it can accommodate
all the extensions mentioned. Of particular interest to this work is that it can easily deal
with classes of characters by just altering the preprocessing, and it is by far the fastest algorithm
to search this type of patterns [18, 19].
Note that this type of search is called \backward" scanning because the text characters inside
the window are read backwards. However, the search progresses from left to right in the text as
the window is shifted.
3 Regular expression searching
The usual way of dealing with an expression with character classes and bounded gaps is actually
to search it as a full regular expression (RE) [13, 22]. A gap of the form x(a; b) is converted into a
letters x followed by b a subexpressions of the form (xj").
The traditional technique [23] to search an RE of length O(m) in a text of length n is to
convert the expression into a nondeterministic nite automaton (NFA) with O(m) nodes. Then,
it is possible to search the text using the automaton at O(mn) worst case time, or to convert the
NFA into a deterministic nite automaton (DFA) in worst case time O(2 m ) and then scan the text
in O(n) time.
Some techniques have been proposed to obtain a good tradeo between both extremes. In
1992, Myers [15] presented a four-russians approach which obtains O(mn= log n) worst-case time
and extra space. Other simulation techniques that aim at good tradeos based on combinations of
DFAs and bit-parallel simulation of NFAs are given in [26, 20].
There exist currently many dierent techniques to build an NFA from a regular expression R.
The most classical one is Thompson's construction [23], which builds an NFA with at most 2m
states (where m is counted as the number of letters and "'s in the RE). A second one is Glushkov's
construction, popularized by Berry and Sethi in [4]. The NFA resulting of this construction has
the advantage of having just m+ 1 states (where m is counted as the number of letters in the RE).
A lot of research on Gluskov's construction has been pursued, like [6], where it is shown that
the resulting NFA is quadratic in the number of edges in the worst case. In [12], a long time open
question about the minimal number of edges of an NFA (without -transition) with linear number
of states was answered, showing a construction with O(m) states and O(m(log m) 2 ) edges, as well
as a lower bound of O(m log m) edges. Hence, Glushkov construction is not space-optimal. Some
research has been done also to try to construct directly a DFA from a regular expression, without
constructing an NFA, such as [8].
We show in Figure 5 the Thompson and Gluskov automata for an example CBG a b c
e, which we translate into the regular expression a b c x (xj") (xj") d e.
Both Thompson and Gluskov automata present some particular properties. Some algorithms
like [15, 26] make use of Thompson's automaton properties and some others, like [20], make use of
Gluskov's ones.
Finally, some work has been pursued in skipping characters when searching for an RE. A simple
heuristic that has very variable success is implemented in Gnu Grep, where they try to nd a plain
substring inside the RE, so as to use the search for that substring as a lter for the search of the
complete RE. In [24] they propose to reduce the search of a RE to a multipattern search for all
the possible strings of some length that can match the RE (using a multipattern Boyer-Moore like
algorithm). In [20] they propose the use of an automaton that recognizes reversed factors of strings
accepted by the RE (in fact a manipulation of the original automaton) using a BNDM-like scheme
to search those factors (see Section 2).
However, none of the presented techniques seems fully adequate for CBGs. First, the algorithms
are intrinsequely complicated to understand and to implement. Second, all the techniques perform
poorly for a certain type of REs. The \di-cult" REs are in general those whose DFAs are very
large, a very common case when translating CBGs to REs. Third, especially with regard to the
sizes of the DFAs, the simplicity of CBGs is not translated into their corresponding REs. For
example, the CBG \[RK] x(2; considered in the Introduction yields a
DFA which needs about 600 pointers to be represented.
At the very least, resorting to REs implies solving a simple problem by converting it into a more
x
e
e
e
x
e
e
e
d e14 15 1611785(a) Thompson construction
a b c x x x d e
d
(b) Gluskov construction
Figure
5: The two classical NFA constructions on our example a b c x (xj") (xj") d e. We
recall that x matches the whole alphabet . The Gluskov automaton is " free, but both present
some di-culties to perform an e-cient bit-parallelism on them.
complicated one. Indeed, the experimental time results when applied to our CBG expressions are
far from reasonable in regard of the simplicity of CBGs, as seen in Section 7. As we show in that
section, CBGs can be searched much faster by designing specic algorithms for them. This is what
we do in the next sections.
4 A forward search algorithm for CBG patterns
We express the search problem of a pattern with classes of characters and gaps using a non-deterministic
automaton. Compared to the simple automaton of Section 2, this one permits the
existence of gaps between consecutive positions, so that each gap has a minimum and a maximum
length. The automaton we use does not correspond to any of those presented in Section 3, although
the functionality is the same.
Figure
6 shows an example for the pattern a b c x(1; e. Between the letters c and
d we have inserted three transitions that can be followed by any letter, which corresponds to the
maximum length of the gap. Two "-transitions leave the state where abc has been recognized and
skip one and two subsequent edges, respectively. This allows skipping one to three text characters
before nding the cd at the end of the pattern. The initial self-loop allows the match to begin at
any text position.
To build the NFA, we start with the initial state S 0 and read the pattern symbol by symbol (a
being a class of characters or a gap 2 ). We add new automaton edges and states for each
new symbol read. If after creating state S i the next pattern symbol is a class of characters C we
create a state S i+1 and add an edge labeled C from state S i to state S i+1 . On the other hand, if the
new pattern symbol is a gap of the form x(a; b), we create b states S labeled
2 Note that x and single letters can also be seen as classes of characters.
a b c x x x d e
e
e
Figure
Our non-deterministic automaton for the pattern a b c x(1;
linking state S j to S j+1 for Additionally, we create b a "-transitions from
state S i to states S . The last state created in the whole process is the nal state.
We are now interested in an e-cient simulation of the above automaton. Despite that this
is a particular case of a regular expression, its simplicity permits a more e-cient simulation. In
particular, a fast bit-parallel simulation is possible.
We represent each automaton state by a bit in a computer word. The initial state is not
represented because it is always active. As with the normal Shift-And, we shift all the bits to the
left and use a table of masks B indexed by the current text character. This accounts for all the
arrows that go from states S j to S j+1 .
The remaining problem is how to represent the "-transitions. For this sake, we chose 3 to
represent active states by 1 and inactive states by 0. We call \gap-initial" states those states S i
from where an "-transition leaves. For each gap-initial state S i corresponding to a gap x(a; b),
we dene its \gap-nal" state to be S i+b a+1 , i.e. the one following the last state reached by an
"-transition leaving S i . In the example of Figure 6, we have one gap-initial state (S 3 ) and one
gap-nal state (S 6 ).
We create a bit mask I which has 1 in the gap-initial states, and another mask F that has 1 in
the gap-nal states. Then, if we keep the state of the search in a bit mask D, then after performing
the normal Shift-And step, we simulate all the "-moves with the operation
The rationale is as follows. First, D & I isolates the active gap-initial states. Subtracting this
from F has two possible results for each gap-initial state S i . First, if it is active the result will have
1 in all the states from S i to S i+b a , successfully propagating the active state S i to the desired
target states. Second, if S i is inactive the result will have 1 only in S i+b a+1 . This undesired 1
is removed by operating the result with \& F ". Once the propagation has been done, we or
the result with the already active states in D. Note that the propagations of dierent gaps do not
interfer with each other, since all the subtractions have local eect.
Let us consider again our example of Figure 6. The corresponding I and F masks are 00000100
and 00100000, respectively (recall that the bit masks are read right-to-left). Let us also consider that
we have read the text abc, and hence our D mask is 00000100. At this point the "-transitions should
take eect. Indeed, ((F (D &
where states S 3 , S 4 and S 5 have been activated. If, on the other hand, the
propagation formula yields ((00100000 00000000) & nothing changes.
Figure
7 shows the complete algorithm. For simplicity the code assumes that there cannot be
gaps at the beginning or at the end of the pattern (which are meaningless anyway). The value
(maximum length of a match) is obtained in O(m) time by a simple pass over the pattern P ,
summing up the maximum gap lengths and individual classes (recall that m is the number of
symbols in P ). The preprocessing takes O(Ljj) time, while the scanning needs O(n) time. If
3 It is possible to devise a formula for the opposite case, but unlike Shift-Or, it is not faster.
however, we need several machine words for the simulation, which thus takes O(ndL=we)
time.
Search (P 1:::m ,T 1:::n )
Preprocessing */
maximum length of a match
for c 2 do B[c] 0 L
I 0 L , F 0 L
if P j is of the form x(a; b) then /* a gap */
I I j (1 << (i 1))
else /* P j is a class of characters */
for
final state */
Scanning */
if report a match ending at
Figure
7: The forward scanning algorithm.
5 A backward search algorithm for CBG patterns
When the searched patterns contain just classes of characters, the backward bit-parallel approach
(see Section 2) leads to the fastest algorithm BNDM [18, 19]. The search is done by sliding over the
text (in forward direction) a window that has the size of the minimum possible alignment ('). We
read the window backwards trying to recognize a factor of the pattern. If we reach the beginning
of the window, then we found an alignment. Else, we shift the window to the beginning of the
longest factor found.
We extend now BNDM to deal with CBGs. To recognize all the reverse factors of a CBG, we
use quite the same automaton built in Section 4 on the reversed pattern, but without the initial
self-loop, and considering that all the states are active at the beginning. We create an initial state
I and "-transitions from I to each state of the automaton. Figure 8 shows the automaton for the
pattern a b c x(1; read by this automaton is a factor of the CBG as long
as there exists at least one active state.
a b c x x x d e
e
e
e
e
e
e
e
ee
e
I
Figure
8: The non-deterministic automaton built in the backward algorithm to recognize all the
reversed factors of the CBG a b c x(1;
The bit-parallel simulation of this automaton is quite the same as that of the forward automaton
(see Section 4). The only modications are (a) that we build iton P r , the reversed pattern; (b)
that the the bit mask D that registers the state of the search has to be initialized with
to perform the initial "-transitions; and (c) that we do not or D with 0 L 1 1 when we shift it, for
there is no more initial loop.
The backward CBG matching algorithm shifts a window of size ' along the text. Inside each
window, it traverses backward the text trying to recognize a factor of the CBG (this is why the
automaton that recognizes all the factors has to be built on the reverse pattern P r ).
If the backward search inside the window fails (i.e. there are no more active states in the
backward automaton) before reaching the beginning of the window, then the search window is
shifted to the beginning of the longest factor recognized, exactly like in the rst case of the classic
BNDM (see Section 2).
If the begining of the window is reached with the automaton still holding active states, then
some factor of length ' of the CBG is recognized in the window. Unlike the case of exact string
matching, where all the occurrences have the same length of the pattern, this does not automatically
imply that we have recognized the whole pattern. We need a way to verify a possible alignment
(that can be much longer than ') starting at the beginning of the window. So we read the characters
again from the beginning of the window with the forward automaton of Section 4, but without the
initial self-loop. This forward verication ends when (1) the automaton reaches its nal state, in
which case we found the pattern; (2) there are no more active states in the automaton, in which
case there is no pattern occurrence starting at the window. As there is no initial loop, the forward
verication surely nishes after reading at most L characters of the text. We then shift the search
window one character to the right and resume the search.
Figure
9 shows the complete algorithm. Some optimizations are not shown for clarity, for
example many tests can be avoided by breaking loops from inside, some variables can be reused,
etc.
The worst case complexity of the backward scanning algorithm is O(nL), which is quite bad in
theory. In particular, let us consider the maximum gap length G in the CBG. If G ', then every
text window of length ' is a factor of the CBG, so we will surely traverse all the window during
the backward scan and always shift in 1, for a complexity of
n') at least. Consequently, the
backward approach we have presented must be restricted at least to CBGs in which G < '.
Backward search (P 1:::m ,T 1:::n )
maximum length of a match /* Preprocessing */
minimum length of a match
if P j is of the form x(a; b) then /* a gap */
I f I f j (1 << (i 1)) , I b I b j (1 << (L (i
do
else /* P j is a class of characters */
for do
final state for the forward scan*/
pos 0 /* Scanning */
while pos n ' do
while D b 6= 0 L and j > 0
while D f 6= 0 L and pos
if
report a match beginning at pos
(D b << 1)
Figure
9: The backward scanning algorithm.
However, on the average, the backward algorithm is expected to be faster than the forward
one. The next section gives a good experimental criterion to know in which cases the backward
algorithm is faster than the forward one. The experimental search results (see Section 7) on the
database show that the backward algorithm is almost always the fastest.
6 Which algorithm to use ?
We have now two dierent algorithms, a forward and a backward one, so a natural question is
which one should be chosen for a particular problem. We seek for a simple criterion that enables
us to choose the best algorithm.
As noted at the end of the previous section, the backward algorithm cannot be e-ciently
applied if the length G of the maximum gap in the pattern exceeds ', the minimum length of a
string that matches the pattern. This is because the backward traversal in the window will never
nish before traversing the whole window (as any string of length ' G is a factor of a possible
pattern occurrence).
This can be carried on further. Each time we position a window in the text, we know that at
least G+ 1 characters in the window will be inspected before shifting. Moreover, the window will
not be shifted by more than ' G positions. Hence the total number of character inspections across
the search is at least (G which is larger than n (the number of characters inspected
by a forward scan) whenever ' < 2G + 1.
Hence, we dene (G 1)=' as a simple parameter governing most of the performance of the
backward scan algorithm, and predict that 0.5 is the point above which the backward scanning
is worse than forward scanning. Of course this measure is not perfect, as it disregards the eect
of other gaps, classes of characters and the cost of forward checking in the backward scan, but a
full analysis is extremely complicated and, as we see in the next section, this simple criterion gives
good results.
According to this criterion, we can design an optimized version of our backward scanning
algorithm. The idea is that we can choose the \best" prex of the pattern, i.e. the prex that
1)='. The backward scanning can be done using this prex, while the forward
verication of potential matches is done with the full pattern. This could be extended to selecting
the best factor of the pattern, but the code would be more complicated (as the verication phase
would have to scan in both directions, buering would be complicated, and, as we see in the next
section, the dierence is not so large.
7 Experimental results
We have tested our algorithms over an example of 1,168 PROSITE patterns [13, 11] and a 6
megabytes (Mb) text containing a concatenation of protein sequences taken from the TIGR Microbial
database. The set had originally 1,316 patterns from which we selected the 1,230 whose L
(maximum length of a match) does not exceed w, the number of bits in the computer word of our
machine. This leaves us with 93% of the patterns. From them, we excluded the 62 (5%) for which
G ', which as explained cannot be reasonably searched with backward scanning. This leaves us
with the 1,168 patterns.
We have used an Intel Pentium III machine of 500 MHz running Linux. We show user times
averaged over 10 trials. Three dierent algorithms are tested: Fwd is the forward-scan algorithm
described in Section 4, Bwd is the backward-scan algorithm of Section 5 and Opt is the same Bwd
where we select for the backward searching the best prex of the pattern, according to the criterion
of the previous section.
A rst experiment aims at measuring the e-ciency of the algorithms with respect to the criterion
of the previous section. Figure 10 shows the results, where the patterns have been classied along
the x axis by their (G As predicted, 0.5 is the value from which Bwd starts to be
worse than Fwd except for a few exceptions (where the dierence is not so big anyway). It is also
clear that Opt avoids many of the worst cases of Bwd. Finally, the plot shows that the time of Fwd
is very stable. While the forward scan runs always at around 5 Mb/sec, the backward scan can be
as fast as 20 Mb/sec.
(G+1)/ell
secs/Mb
Bwd
(G+1)/ell
secs/Mb
Opt
Figure
10: Search times (in seconds per Mb) for all the patterns classied by their (G+ 1)=' value.
What
Figure
fails to show is that in fact most PROSITE patterns have a very low (G+ 1)='
value.
Figure
11 plots the number of patterns achieving a given search time, after removing a few
outliers (the 12 that took more than 0.4 seconds for Bwd). Fwd has a large peak because of its
stable time, while the backward scanning algorithms have a wider histogram whose main body is
well before the peak of Fwd. Indeed, 95.6% of the patterns are searched faster by Bwd than by
Fwd, and the percentage raises to 97.6% if we consider Opt. The plot also shows that there is little
statistical dierence between Bwd and Opt. Rather, Opt is useful to remove some very bad cases
of Bwd.
Our third experiment aims at comparing our search method against converting the pattern
to a regular expression and resorting to general regular expression searching. From the existing
algorithms to search for regular expressions we have selected the following.
Dfa: Builds a deterministic nite automaton and uses it to search the text.
Nfa: Builds a non-deterministic nite automaton and uses it to search the text, updating all the
states at each text position.
Myers: Is an intermediate between Dfa and Nfa [15], a non-deterministic automaton formed by a
few blocks (up to 4 in our experiments) where each block is a deterministic automaton over
a subset of the states. \(xj")" was expressed as \.?" in the syntax of this software.
Agrep: Is an existing software [26, 25] that implements another intermediate between Dfa and
Nfa, where most of the transitions are handled using bit-parallelism and the "-transitions
with a deterministic table. \(xj")" was expressed as \(.|"")" in the syntax of this software.
secs/Mb
frequency
Bwd
Opt
Figure
11: Histogram of search times for our dierent algorithms.
Grep: Is Gnu Grep with the option "-E" to make it accept regular expressions. This software
uses a heuristic that, in addition to (lazy) deterministic automaton searching, looks for long
enough literal pattern substrings and uses them as a fast lter for the search. The gaps
\x(a; b)" were converted to \.fa,bg" to permit specialized treatment by Grep.
BNDM: Uses the backward approach we have extended to CBGs, but adapted to general REs
instead [20]. It needs to build to deterministic automata, one for backward search and another
for forward verication
Multipattern: Reduces the problem to multipattern Boyer-Moore searching of all the strings of
length ' that match the RE [24]. We have used \agrep -f" as the multipattern search
algorithm.
To these, we have added our Fwd and Opt algorithms. Figure 12 shows the results. From the
forward scanning algorithms (i.e. Fwd, Dfa, Nfa and Myers, unable to skip text characters), the
fastest is our Fwd algorithm thanks to its simplicity. Agrep has about the same mean but much
more variance. Dfa suers from high preprocessing times and large generated automata. Nfa needs
to update many states one by one for each text character read. Myers suers from a combination
of both and shows two peaks that come from its specialized code to deal with small automata.
The backward scanning algorithms Opt and Grep (able to skip text characters) are faster than
the previous ones in almost all cases. Among them, Opt is faster on average and has less variance,
while the times of Grep extend over a range that surpasses the time of our Fwd algorithm for a
non-negligible portion of the patterns. This is because Grep cannot always nd a suitable ltering
substring and in that case it resorts to forward scanning. Note that BNDM and Multipattern have
been excluded from the plots due to their poor performance on this set of patterns.
Apart from the faster text scanning, our algorithms also benet from lower preprocessing times
when compared to the algorithms that resort to regular expression searching. This is barely noticeable
in our previous experiment, but it is important in a common scenario of the protein searching
problem: all the patterns from a set are searched inside a new short protein. In this case the
preprocessing time for all the patterns is much more important than the scanning time over the
(normally rather short) protein.
secs/Mb
frequency
Opt
Dfa
Nfa
Myers
Agrep
Grep
Figure
12: Histogram of search times for our best algorithms and for regular expression searching
algorithms.
We have simulated this scenario by selecting 100 random substrings of length 300 from our text
and running the previous algorithms on all the 1,168 patterns. Table 1 shows the time averaged
over the 100 substrings and accumulated over the 1,168 patterns. The dierence in favor of our new
algorithms is drastic. Note also that this problem is an interesting eld of research for multipattern
CBG search algorithms.
Algorithm Fwd Bwd Opt Dfa Nfa Myers Agrep Grep
Time
Table
1: Search time in seconds for all the 1,168 patterns over a random protein of length 300.
Conclusions
We have presented two new search algorithms for CBGs, i.e. expressions formed by a sequence
of classes of characters and bounded gaps. CBGs are of special interest to computational biology
applications. All the current approaches rely on converting the CBG into a regular expression
(RE), which is much more complex. Therefore the search cost is much higher than necessary for a
CBG.
Our algorithms are specically designed for CBGs and is based on BNDM, a combination of
bit-parallelism and backward searching with su-x automata. This combination has been recently
proved to be very eective for patterns formed by simple letters and classes of characters [18, 19].
We have extended BNDM to allow for limited gaps.
We have presented experiments showing that our new algorithms are much faster and more
predictable than all the other algorithms based on regular expression searching. In addition, we
have presented a criterion to select the best among the two that has experimentally shown to
be very reliable. This makes the algorithms of special interest for practical applications, such as
protein searching.
We plan to extend the present work by designing an algorithm able to skip characters and
that at the same time ensures a linear worst case time, and by extending the scope of the present
\optimized" algorithm so that it can select the best factor (not just the best prex) to search.
Other more challenging types of search are those allowing negative gaps and errors in the
matches (see, e.g. [16]). Our algorithms are especially easy to extend to permit errors and we are
pursuing in that direction.
--R
Text retrieval: Theory and practice.
A new approach to text searching.
Faster approximate string matching.
From regular expression to deterministic automata.
A fast string searching algorithm.
A generalized pro
From regular expression to DFA's using NFA's.
algorithms.
Speeding up two string-matching algorithms
The database
Juraj Hromkovi
Fast pattern matching in strings.
Approximate matching of network expressions with spacers.
A fast bit-vector algorithm for approximate pattern matching based on dynamic progamming
Fast and exible string matching by combining bit-parallelism and su-x automata
Fast regular expression matching.
On the multi backward dawg matching algorithm (MultiBDM).
Screening protein and nucleic acid sequences against libraries of patterns.
Regular expression search algorithm.
Taxonomies and toolkits of regular language algorithms.
Fast text searching allowing errors.
--TR
From regular expressions to deterministic automata
A Four Russians algorithm for regular expression pattern matching
A new approach to text searching
Fast text searching
Regular expressions into finite automata
Text algorithms
A fast bit-vector algorithm for approximate string matching based on dynamic programming
Programming Techniques: Regular expression search algorithm
Fast and flexible string matching by combining bit-parallelism and suffix automata
Text-Retrieval
Translating Regular Expressions into Small epsilon-Free Nondeterministic Finite Automata
Fast Regular Expression Search
A Bit-Parallel Approach to Suffix Automata
--CTR
Alberto Policriti , Nicola Vitacolonna , Michele Morgante , Andrea Zuccolo, Structured motifs search, Proceedings of the eighth annual international conference on Research in computational molecular biology, p.133-139, March 27-31, 2004, San Diego, California, USA
Gonzalo Navarro , Mathieu Raffinot, Fast and flexible string matching by combining bit-parallelism and suffix automata, Journal of Experimental Algorithmics (JEA), 5, p.4-es, 2000 | bit-parallelism;information retrieval;PROSITE;pattern matching;computational biology |
369880 | Enlarging the Margins in Perceptron Decision Trees. | Capacity control in perceptron decision trees is typically performed by controlling their size. We prove that other quantities can be as relevant to reduce their flexibility and combat overfitting. In particular, we provide an upper bound on the generalization error which depends both on the size of the tree and on the margin of the decision nodes. So enlarging the margin in perceptron decision trees will reduce the upper bound on generalization error. Based on this analysis, we introduce three new algorithms, which can induce large margin perceptron decision trees. To assess the effect of the large margin bias, OC1 (Journal of Artificial Intelligence Research, 1994, 2, 132.) of Murthy, Kasif and Salzberg, a well-known system for inducing perceptron decision trees, is used as the baseline algorithm. An extensive experimental study on real world data showed that all three new algorithms perform better or at least not significantly worse than OC1 on almost every dataset with only one exception. OC1 performed worse than the best margin-based method on every dataset. | Introduction
Perceptron Decision Trees (PDT) have been introduced by a number of authors under different
names [17, 6, 7, 8, 10, 11, 27, 18]. They are decision trees in which each internal
node is associated with a hyperplane in general position in the input space. They have been
used in many real world pattern classification tasks, with good results [7, 18, 9]. Given their
high flexibility, a feature that they share with more standard decision trees such as the ones
produced by C4.5 [20], they tend to overfit the data if their complexity is not somehow kept
under control. The standard approach to controlling their complexity is to limit their size,
with early stopping or pruning procedures.
In this paper we introduce a novel approach to complexity control in PDTs, based on the
concept of the margin (namely, the distance between the decision boundaries and the training
points). The control of this quantity is at the basis of the effectiveness of other systems,
such as Vapnik's Support Vector Machines [12], Adaboost [24], and some Bayesian Classifiers
[13]. We prove that this quantity can be as important as the tree-size as a capacity control
parameter.
The theoretical motivations behind this approach lie in the Data-Dependent Structural Risk
Minimization [25]: the scale of the cover used in VC theory to provide a bound on the generalization
error depends on the margin and hence the hierarchy of classes is chosen in response
to the data. Of course the two complexity control criteria can be used together, combining
a pruning phase with the bias towards large margins, to obtain a better performance.
These results motivate a new class of PDT learning algorithms, aimed at producing large
margin trees. We propose three such algorithms: FAT, MOC1 and MOC2, and compare
their performance with that of OC1, one of the best known PDT learning systems. All three
large-margin systems outperform OC1 on most of the real world data-sets we have used,
indicating that overfitting in PDTs can be efficiently combatted by enlarging the margin of
the decision boundaries on the training data.
2 Perceptron Decision Trees
The most common decision trees, in which each node checks the value of a single attribute,
could be defined as axis parallel, because the tests associated with each node are equivalent
to axis-parallel hyperplanes in the input space. Many variations of this simple model have
been proposed, since the introduction of such systems in the early '80s. Some of them involve
more complex tests at the decision nodes, usually testing more than one attribute.
Decision Trees whose nodes test a linear combination of the attributes have been proposed
by different researchers under different names: Linear Combination Trees, multivariate DT
[11], oblique DTs [18], Perceptron Decision Trees [27], etc. The first of such systems was
proposed by Breiman, who incorporated it into the package CART[10]. The tests associated
at each node are equivalent to hyperplanes in general position, and they partition the input
space into polyhedra as illustrated in Figure 1. They obviously include as a special case the
more common decision trees output by systems like C4.5.
Figure 1: A Perceptron Decision Tree and the way it splits the input space
The extreme flexibility of such systems makes them particularly exposed to the risk of
overfitting. This is why efficient methods for controlling their expressive power (typically
pruning techniques) always have to be used in combination with the standard TopDown
growth algorithms.
The class of functions computed by PDTs is formally defined as follows.
Definition 2.1 Generalized Decision Trees (GDT). Given a space X and a set of boolean
functions F = {f : X → {0, 1}}, the class GDT(F) of Generalized Decision Trees over
F consists of the functions which can be implemented using a binary tree where each internal node is
labeled with an element of F, and each leaf is labeled with either 1 or 0.
To evaluate a particular tree T on input x # X, all the boolean functions associated to the
nodes are assigned the same argument x # X, which is the argument of T (x). The values
assumed by them determine a unique path from the root to a leaf: at each internal node
the left (respectively right) edge to a child is taken if the output of the function associated
to that internal node is 0 (respectively 1). This path is known as the evaluation path. The
value of the function T (x) is the value associated to the leaf reached. We say that input x
reaches a node of the tree, if that node is on the evaluation path for x.
In the following, the nodes are the internal nodes of the binary tree, and the leaves are its
external ones.
Examples.
. Given X = {0, 1}^n, a Boolean Decision Tree (BDT) is a GDT over F_BDT = {f : f(x) = x_i, for some i = 1, . . . , n}.
. Given X = R^n, a C4.5-like Decision Tree (CDT) is a GDT over F_CDT = {f : f(x) = 1 iff x_i > θ, for some attribute i and threshold θ}.
This kind of decision tree, defined on a continuous space, is the output of common
algorithms like C4.5 and CART, and we will refer to them as CDTs.
. Given X = R^n, a Perceptron Decision Tree (PDT) is a GDT over F_PDT = {f : f(x) = 1 iff w^T x > 0},
where we have assumed that the inputs have been augmented with a coordinate of
constant value, hence implementing a thresholded perceptron.
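To make the evaluation procedure concrete, the following minimal Python sketch (the Node class and its attribute names are our own illustrative choices, not taken from the definitions above) follows the evaluation path of a PDT from the root to a leaf, assuming the input has already been augmented with a constant coordinate as described.

class Node:
    def __init__(self, w=None, left=None, right=None, label=None):
        self.w = w          # perceptron weight vector over the augmented input (None at leaves)
        self.left = left    # child taken when the node test outputs 0
        self.right = right  # child taken when the node test outputs 1
        self.label = label  # class label, used only at leaves

def classify(tree, x):
    """Evaluate a PDT on one example x (already augmented with a constant 1)."""
    node = tree
    while node.label is None:                               # internal node
        value = sum(wi * xi for wi, xi in zip(node.w, x))   # thresholded perceptron test
        node = node.right if value > 0 else node.left
    return node.label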
PDTs are generally induced by means of a TopDown growth procedure, which starts from the
root node and greedily chooses a perceptron which maximizes some cost function, usually a
measure of the "impurity" of the subsamples implicitly defined by the split. This maximization
is usually hard to perform, and sometimes replaced by randomized (sub)optimization.
The subsamples are then mapped to the two children nodes. The procedure is then recursively
applied to the nodes, and the tree is grown until some stopping criterion is met. Such
a tree is then used as a starting point for a "BottomUp" search, performing a pruning of the
tree. This implies eliminating the nodes which are redundant, or which are unable to "pay
for themselves" in terms of the cost function. Generally pruning an overfitting tree produces
better classifiers than those obtained with early stopping, since this makes it possible to
check if promising directions were in fact worth exploring, and if locally good solutions were
on the contrary a dead-end. So, while the standard TopDown algorithm is an extremely
greedy procedure, with the introduction of pruning it can be possible to look-ahead: this
allows for discovery of more hidden structure.
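The generic TopDown growth procedure just outlined can be sketched as follows. This is only a schematic outline under our own naming assumptions: find_best_perceptron stands for the (possibly randomized) search that optimizes the chosen goodness measure and is deliberately left unspecified, and Node is the class from the previous sketch.

from collections import Counter

def dot(w, x):
    return sum(a * b for a, b in zip(w, x))

def grow_tree(X, y, depth=0, max_depth=8):
    """Schematic TopDown induction of a PDT with early stopping via max_depth."""
    if len(set(y)) == 1 or depth == max_depth:        # stopping criterion
        return Node(label=Counter(y).most_common(1)[0][0])
    w = find_best_perceptron(X, y)                    # placeholder for the split search
    left = [i for i, x in enumerate(X) if dot(w, x) <= 0]
    right = [i for i, x in enumerate(X) if dot(w, x) > 0]
    if not left or not right:                         # degenerate split: stop
        return Node(label=Counter(y).most_common(1)[0][0])
    return Node(w=w,
                left=grow_tree([X[i] for i in left], [y[i] for i in left],
                               depth + 1, max_depth),
                right=grow_tree([X[i] for i in right], [y[i] for i in right],
                                depth + 1, max_depth))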
The capacity control in PDTs is hence completely achieved by controlling the size of the tree,
that is the complexity of the overall classifier. We will propose an alternative method, which
on the contrary focuses on reducing the complexity of the node classifiers, independently
of the tree size. This will be possible thanks to a theoretical analysis of generalization
performance of the function class defined by PDTs, in the framework of VC theory.
3 Theoretical Analysis of Generalization
The generalization performance of a learning machine can be studied by means of uniform
convergence bounds, with a technique introduced by Vapnik and Chervonenkis [30]. The
central concept in such an analysis is the "e#ective capacity" of the class of hypotheses
accessible by the machine: the richer such a class, the higher the risk of overfitting. This
feature of a learning machine is often referred to as its flexibility or capacity. The issue of
preventing overfitting by allowing just the right amount of flexibility is therefore known as
capacity control.
The notion of e#ective cardinality of a function class is captured by its "growth function"
for Boolean classes or "covering numbers" for real valued functions. The size of the covering
numbers depends on the accuracy of the covering as well as the function class itself. The
larger the margin the less accuracy is required in the covering.
In the following we will be concerned with estimating the capacity of the class of PDTs. We
will see that the margin does affect the flexibility of such a hypothesis class, as does the
tree-size. This will motivate some alternative techniques for controlling overfitting which -
albeit conceptually similar to pruning - act on the complexity of the node classifiers rather
than on the complexity of the overall tree.
We begin with the definition of the fat-shattering dimension, which was first introduced in
[15], and has been used for several problems in learning since [1, 4, 2, 3].
Definition 3.1 Let F be a set of real valued functions. We say that a set of points X is
γ-shattered by F relative to r = (r_x)_{x ∈ X} if there are real numbers r_x indexed by x ∈ X such
that for all binary vectors b indexed by X, there is a function f_b ∈ F satisfying
f_b(x) ≥ r_x + γ if b_x = 1, and f_b(x) ≤ r_x − γ otherwise.
The fat shattering dimension fat_F of the set F is a function from the positive real numbers
to the integers which maps a value γ to the size of the largest γ-shattered set, if this is finite,
or infinity otherwise.
As an example which will be relevant to the subsequent analysis consider the class of linear functions with bounded norm weight vectors:
F_lin = {x ↦ ⟨w, x⟩ : ||w|| ≤ 1}.
We quote the following result from [5] (see also [12]).
Theorem 3.2 [5] Let F_lin be restricted to points in a ball of n dimensions of radius R about
the origin. Then fat_{F_lin}(γ) ≤ min{R²/γ², n} + 1.
The following theorem bounds the generalization of a classifier in terms of the fat shattering
dimension rather than the usual Vapnik-Chervonenkis or Pseudo dimension.
Let T_θ denote the threshold function at θ: T_θ(α) = 1 iff α > θ. For a class
of functions F, T_θ(F) = {T_θ(f) : f ∈ F}.
Theorem 3.3 [25] Consider a real valued function class F having fat shattering function
bounded above by the function afat : R → N which is continuous from the right. Fix θ ∈ R. If
a learner correctly classifies m independently generated examples z with h = T_θ(f) ∈ T_θ(F)
such that the train error is zero and γ = min_i |f(x_i) − θ|, then with confidence 1 − δ the
expected error of h is bounded from above by
ε(m, k, δ) = (2/m) ( k log(8em/k) log(32m) + log(8m/δ) ),
where k = afat(γ/8) ≤ m.
The importance of this theorem is that it can be used to explain how a classifier can give
better generalization than would be predicted by a classical analysis of its VC dimension.
Essentially expanding the margin performs an automatic capacity control for function classes
with small fat shattering dimensions. The theorem shows that when a large margin is
achieved it is as if we were working in a lower VC class.
We should stress that in general the bounds obtained should be better for cases where a large
margin is observed, but that a priori there is no guarantee that such a margin will occur.
Therefore a priori only the classical VC bound can be used. In view of corresponding lower
bounds on the generalization error in terms of the VC dimension, the a posteriori bounds
depend on a favorable probability distribution making the actual learning task easier. Hence,
the result will only be useful if the distribution is favorable or at least not adversarial. In this
sense the result is a distribution dependent result, despite not being distribution dependent
in the traditional sense that assumptions about the distribution have had to be made in
its derivation. The benign behavior of the distribution is automatically estimated in the
learning process.
In order to perform a similar analysis for perceptron decision trees we will consider the set
of margins obtained at each of the nodes, bounding the generalization as a function of these
values.
It turns out that bounding the fat shattering dimension of PDT's viewed as real function
classifiers is difficult. We will therefore do a direct generalization analysis mimicking the
proof of Theorem 3.3 but taking into account the margins at each of the decision nodes in
the tree.
Definition 3.4 Let (X, d) be a (pseudo-) metric space, let A be a subset of X and ε > 0. A
set B ⊆ X is an ε-cover for A if, for every a ∈ A, there exists b ∈ B such that d(a, b) < ε.
The ε-covering number of A, N_d(ε, A), is the minimal cardinality of an ε-cover for A (if
there is no such finite cover then it is defined to be ∞).
We write N(ε, F, x) for the ε-covering number of F with respect to the ℓ∞ pseudo-metric
measuring the maximum discrepancy on the sample x, that is, with respect to the distance
d(f, g) = max_i |f(x_i) − g(x_i)| for f, g ∈ F. These numbers are bounded in the following
Lemma, which we present for historical reasons, though in fact we will require the slightly
more general corollary.
Lemma 3.5 (Alon et al. [1]) Let F be a class of functions X → [0, 1] and P a distribution
over X. Choose 0 < ε < 1 and let d = fat_F(ε/4). Then
E( N(ε, F, x) ) ≤ 2 (4m/ε²)^{d log(2em/(dε))},
where the expectation E is taken w.r.t. a sample x drawn according to P^m.
Corollary 3.6 [25] Let F be a class of functions X → [a, b] and P a distribution over X.
Choose 0 < ε < 1 and let d = fat_F(ε/4). Then
E( N(ε, F, x) ) ≤ 2 (4m(b − a)²/ε²)^{d log(2em(b−a)/(dε))},
where the expectation E is over samples x drawn according to P^m.
We are now in a position to tackle the main lemma which bounds the probability over
a double sample that the first half has zero error and the second error greater than an
appropriate ε. Here, error is interpreted as being differently classified at the output of the
tree. In order to simplify the notation in the following lemma we assume that the decision
tree has K nodes and we denote fat_{F_lin}(γ) by fat(γ).
Lemma 3.7 Let T be a perceptron decision tree with K decision nodes with margins γ_1, γ_2, . . . , γ_K
at the decision nodes satisfying k_i = fat(γ_i/8). If it has correctly classified m labeled examples
x generated independently according to the unknown (but fixed) distribution P with support
in a ball of radius R and y is a second m sample, then we can bound the following probability
to be less than δ,
P^{2m} { xy : ∃ a tree T with K decision nodes: T correctly classifies x,
fraction of y misclassified > ε(m, K, δ) }
< δ,
where ε(m, K, δ) = (1/m) ( D log(8m) + log(2^K/δ) ) and D = Σ_{i=1}^{K} k_i log(4em/k_i).
Using the standard permutation argument (as in [30]), we may fix a sequence xy
and bound the probability under the uniform distribution on swapping permutations that
the sequence satisfies the condition stated. We consider generating minimal γ_k/2-covers
B_k^{xy} for each value of k, where γ_k is the largest value satisfying fat(γ_k/8) = k. Suppose that for node i of
the tree the margin γ_i of the hyperplane w_i satisfies fat(γ_i/8) = k_i. We can therefore find
f_i ∈ B_{k_i}^{xy} whose output values are within γ_i/2 of w_i. We now consider the tree T' obtained
by replacing the node perceptrons w_i of T with the corresponding f_i. This tree performs the
same classification function on the first half of the sample, and the margin at node i remains
larger than γ_i − γ_i/2 = γ_i/2. If a point in the second half of the sample is incorrectly
classified by T it will either still be incorrectly classified by the adapted tree T' or will at
one of the decision nodes i in T' be closer to the decision boundary than γ_{k_i}/2. The point is
thus distinguishable from left hand side points which are both correctly classified and have
margin greater than γ_{k_i}/2 at node i. Hence, that point must be kept on the right hand side
in order for the condition to be satisfied. Hence, the fraction of permutations that can be
allowed for one choice of the functions from the covers is 2^{−εm}. We must take the union
bound over all choices of the functions from the covers. Using the techniques of [25] the
number of these choices is bounded by Corollary 3.6, giving a factor of at most (8m)^D with
D = Σ_{i=1}^{K} k_i log(4em/k_i). The value of ε in the lemma statement therefore ensures
that the union bound is less than δ.
Lemma 3.7 applies to a particular tree with a specified number of nodes, architecture and
fat shattering dimensions for each node. In practice we will observe these quantities after
running the learning algorithm which generates the tree. Hence, to obtain a bound that can
be applied in practice we must bound the probabilities uniformly over all of the possible
architectures and dimensions that can arise. Before giving the theorem that will give this
bound we require two results. The first is due to Vapnik [28, page 168] and is the key to
bounding error probabilities in terms of the probabilities of discrepancies on a double sample.
Lemma 3.8 Let X be a set and S a system of sets on X, and P a probability measure on
X. For x ∈ X^m and ε > 0,
P^m { x : ∃ A ∈ S : |x ∩ A| = 0 and P(A) > ε }
≤ 2 P^{2m} { xy : ∃ A ∈ S : |x ∩ A| = 0 and |y ∩ A| ≥ εm/2 }.
The second result gives a bound on the number of different tree architectures that have a
given number of computational nodes.
Theorem 3.9 [21] The number S_K of K node Decision Tree skeletons is
S_K = (1/(K + 1)) (2K choose K), i.e. the K-th Catalan number.
Combining these two results with Lemma 3.7 we obtain the following theorem.
Theorem 3.10 Suppose we are able to classify an m sample of labeled examples using a
perceptron decision tree and suppose that the tree obtained contained K decision nodes with
margins γ_i at node i, then we can bound the generalization error with probability greater than
1 − δ to be less than
ε(m, K, δ) = (2/m) ( D log(8m) + log(2^K/δ_K) ),
where D = Σ_{i=1}^{K} k_i log(4em), k_i = fat(γ_i/8) ≤ min{64R²/γ_i², n} + 1,
δ_K = δ/(4 m^{K+1} 2^{K+1} S_K) with S_K given by Theorem 3.9, and R is the radius of a sphere containing the support
of the distribution.
We must bound the probabilities over different architectures of trees and different
margins. We first use Lemma 3.8 to bound the probability of error in terms of the probability
of the discrepancy between the performance on two halves of a double sample. In order to
apply Lemma 3.7 we must consider all possible architectures that can occur and for each
architecture the different patterns of k_i's over the decision nodes. For a fixed value of K,
Theorem 3.9 gives the number of decision tree skeletons. The largest allowed value of k_i is
m and so for fixed K we can bound the number of possibilities by m^K 2^{K+1} S_K, where
2^{K+1} counts the possible labelings of the K + 1 leaf nodes. Hence, there are this number
of applications of Lemma 3.7 for a fixed K. Since the largest value that K can take is m,
we can let δ'_K = δ/(2m) so that the sum over K is at most δ/2. Choosing
δ'_K/(2 m^K 2^{K+1} S_K) as the confidence in each application
of Lemma 3.7 ensures that the probability of any of the statements
failing to hold is less than δ/2. Note that we have adjusted the constant 8 in
order to ensure the continuity from the right required for the application of Theorem 3.3
and have upper bounded log(4em/k_i) by log(4em). Hence, applying Lemma 3.8 in each case
the probability that the statement of the theorem fails to hold is less than δ.
4 Experimental Results
From the theory presented in the previous section, it follows that large-margin PDTs are
more likely to generalize well. A bias toward large-margin trees can be implemented in a
number of different ways, either as a post-processing phase of existing trees or as a brand new
impurity measure to determine splitting/stopping criteria in TopDown growth algorithms.
To facilitate comparisons, we have implemented three such algorithms as modifications of
one of the best-known PDT learning systems OC1 [18] of Murthy, Kasif and Salzberg, which
is freely available over the Internet. The effect of the large-margin bias can hence be directly
assessed by running the original, arbitrary-margin version of the same algorithm on the same data.
The first such algorithm, FAT, accepts in input a PDT constructed using OC1 and outputs
a large margin version of the same tree. The other two, MOC1 and MOC2, have different
impurity measures which take into consideration the margins. All three algorithms work for
multi-class data.
The three systems have been compared with OC1 on 10 benchmarking data sets. The results
confirm the predictions of the theoretical model, clearly indicating that the generalization is
improved by enlarging the margin.
The data sets we have used for the study are 6 data sets used in the original OC1 paper[18],
and 4 other data sets, which are publicly available in the UCI data repository [31]. The
data sets studied in [18] are Dim, Bright, Wisconsin Breast Cancer, Pima Indians Diabetes,
Boston Housing and Iris. The four additional data sets are Bupa, Sonar, Heart and Wisconsin
Breast Cancer Prognosis. The data sets differ greatly in subject, size and number of
attributes: the subjects of the data sets range from medical to astronomical, sizes from 150 to
4192, number of attributes from 4 to 60 1 . For details of these data sets see [18, 31]. For
each data set, a single run of 10-fold cross-validation is carried out. The relevant quantity, in
this experiment, is the di#erence in the test accuracy between PDTs with arbitrary margins
constructed by OC1 and the PDTs with large margins on the same data.
Comparing learning algorithms has drawn extensive attention recently [16, 14, 23, 19]. A
single run of 10-fold cross-validation on a reasonable number of data sets is still a preferred
1 The number of (attributes, points) of each data set is as follows: Bright(14, 2462), Bupa(6, 345), Cancer(9, 682), Dim(14, 4192), Heart(13, 297), Housing(13, 506), Iris(4, 150), Pima(8, 768), Prognosis(32, 198), Sonar(60, 208).
practical approach. It is well suited to detecting the difference between two algorithms. We basically
followed the approach recommended in [23].
In the rest of this section, first we will briefly review the OC1 system, then present our three
large margin algorithms, and compare their performances with OC1.
4.1 Review of OC1
OC1 [18] is a randomized algorithm, which performs a randomized hill-climbing search for
learning the perceptrons, and builds the tree TopDown. Starting from the root node, the
system chooses the hyperplane which minimizes a predefined "impurity" measure (e.g. information
gain [20], or Gini index [10], or the Twoing Rule [10, 18], etc.). The system is greedy
because at each stage it chooses the best split available, and randomized because such a
best split is not obtained by means of exhaustive search but with a randomized hill-climbing
process.
Throughout this study we use the twoing rule as the impurity measure, for OC1, FAT, and
MOC1. MOC2 uses a modified twoing rule as impurity measure. Other impurity measures
can also be applied in FAT and MOC1 without change, while MOC2 would need minor
changes.
The Twoing Rule
TwoingValue = (|T_L|/n) × (|T_R|/n) × ( Σ_{i=1}^{c} | L_i/|T_L| − R_i/|T_R| | )²    (1)
where
n = total number of instances at the current node,
c = number of classes (c = 2 for two-class problems),
|T_L| = number of instances on the left of the split, i.e. with w^T x ≤ 0,
|T_R| = number of instances on the right of the split, i.e. with w^T x > 0,
L_i = number of instances in category i on the left of the split,
R_i = number of instances in category i on the right of the split.
This is a goodness measure rather than an impurity one, and OC1 attempts to maximize it
at each split during the tree growth via minimizing 1/TwoingValue. Further details about
the randomization, the pruning, and the splitting criteria can be found in [18].
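As a small self-contained illustration (the function and argument names are ours), the twoing value of a candidate split can be computed from the per-class counts on each side of the hyperplane as follows; OC1 then searches for the hyperplane that minimizes 1/TwoingValue.

def twoing_value(L, R):
    """L[i], R[i]: number of instances of category i on the left/right of the split."""
    n_left, n_right = sum(L), sum(R)
    n = n_left + n_right
    if n_left == 0 or n_right == 0:
        return 0.0
    spread = sum(abs(l / n_left - r / n_right) for l, r in zip(L, R))
    return (n_left / n) * (n_right / n) * spread ** 2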
4.2 Results of FAT
Description of algorithm FAT
The algorithm FAT uses the tree produced by OC1 as a starting point, and maximizes
its margins. This involves finding - for each node - the hyperplane which performs the
same split as performed by the OC1 tree but with the maximal margin. This can be done
by considering the subsample reaching each node as perfectly divided into two parts, and
feeding the data accordingly relabeled to an algorithm which finds the optimal separating
separating hyperplane with maximal margin in this now linearly separable data.
The optimal separating hyperplanes are then placed in the corresponding decision nodes and
the new tree is tested on the same test data. Note that, the PDT produced by FAT will
have the same tree structure and training accuracy as the original PDT constructed by OC1.
They will only di#er on test accuracy. We use the Support Vector Machine (SVM) algorithm
[29] to find the optimal separating hyperplane. To conform with the definition of a PDT,
no kernel is used in the SVM, the optimal separating hyperplane is constructed in the input
space.
Algorithm for FAT
1. Construct a decision tree using OC1, call it OC1-PDT.
2. Starting from root of OC1-PDT, traverses through all the non-leaf nodes. At each
node,
. Relabel the points at this node with w^T x > 0 as class_right, and the other points
at this node as class_left.
. Find the perceptron (optimal separating hyperplane) that
separates class_right and class_left perfectly with maximal margin.
. Replace the original perceptron with the new one.
3. Output the FAT-PDT.
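A minimal sketch of this post-processing step is given below, assuming scikit-learn and NumPy are available. An SVC with a linear kernel and a very large C is used here as a stand-in for the hard-margin optimal separating hyperplane, and the node representation (a weight vector node.w plus a separate threshold node.b) is our own convention, not the authors' implementation.

import numpy as np
from sklearn.svm import SVC

def fatten(node, X):
    """Replace the perceptron at each internal node of an OC1 tree by the
    maximal-margin hyperplane realizing exactly the same split of the
    examples X that reach the node."""
    if node.label is not None:               # leaf: nothing to do
        return
    side = X @ node.w + node.b > 0           # True = class_right, False = class_left
    if side.any() and (~side).any():         # both relabeled classes present
        svm = SVC(kernel="linear", C=1e6).fit(X, side.astype(int))
        node.w, node.b = svm.coef_[0], float(svm.intercept_[0])
    fatten(node.left, X[~side])
    fatten(node.right, X[side])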
Optimal Separating Hyperplane - SVM algorithm for the linearly separable case
The following problems are solved at each node, to find the optimal separating hyperplane
for linearly separable data [29].
min_{w,b}  (1/2) ||w||²                                        (2)
subject to y_i (w^T x_i + b) ≥ 1, i = 1, . . . , ℓ,
where y_i = +1 corresponds to class_right, y_i = −1 corresponds to class_left, and ℓ is the
number of points reaching the decision node.
For computational reasons we usually solve the dual problem of (2):
min_α  (1/2) Σ_i Σ_j α_i α_j y_i y_j x_i^T x_j − Σ_i α_i        (3)
subject to Σ_i α_i y_i = 0 and α_i ≥ 0, i = 1, . . . , ℓ.
FAT-PDT has a generalization error bounded by theorem 3.10. We observed that FAT completely
relied on and was restricted by the perceptron decision tree induced by OC1. In
many cases, the margins in the splits found by OC1 are very small, so FAT has little scope
for optimization. In general, if there is a big margin in the top split at the root node, FAT
will generalize much better. It implies that the greedy algorithm OC1 is not a good tree
inducer for FAT, in the sense of the margin. We need to find a better non-greedy tree inducer
for FAT. On the other hand, FAT provides a new approach to applying the Support Vector
Machine for multi-class classification tasks.
Comparison of FAT and OC1
For each dataset, 10-fold cross-validation is used to measure the learning ability of the
algorithms FAT and OC1. A paired t-test is used to test the difference between the means of FAT
and OC1.
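With the ten per-fold accuracies of the two systems collected in two arrays, the paired t-test can be carried out as in the sketch below (assuming SciPy is available; the accuracy values shown are hypothetical placeholders, not the paper's results).

import numpy as np
from scipy.stats import ttest_rel

# Hypothetical per-fold accuracies, one value per cross-validation fold.
acc_oc1 = np.array([0.80, 0.82, 0.79, 0.81, 0.83, 0.80, 0.78, 0.82, 0.81, 0.80])
acc_fat = np.array([0.83, 0.84, 0.81, 0.83, 0.85, 0.82, 0.80, 0.84, 0.83, 0.82])

t_stat, p_value = ttest_rel(acc_fat, acc_oc1)   # paired t-test on the fold means
print(f"mean FAT={acc_fat.mean():.3f}  mean OC1={acc_oc1.mean():.3f}  p={p_value:.3f}")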
Figure 2: Comparison of the 10-fold CV results of FAT versus OC1 (x axis: OC1 10-fold CV average accuracy; y axis: FAT 10-fold CV average accuracy; points are marked as significant or not significant, and the line x=y is drawn for reference). If the point is above the line, it indicates the 10-fold CV mean of FAT is higher than that of OC1, and vice versa. The figure shows that FAT outperforms OC1 on 9 out of 10 data sets and is outperformed only on 1 data set.
From Figure 2, we can see that FAT outperforms OC1 on 9 out of the 10 data sets, and
outperforms OC1 on all the 6 data sets studied in [18]. The 10-fold cross-validation mean
differences of FAT and OC1 on those 9 data sets are all significant when a paired t-test is
applied. On one data set, Prognosis, OC1 outperforms FAT and the difference is significant.
We also observed that, except in one case (Prognosis), FAT performs as well as or better than
OC1 in every fold of the 10-fold cross-validation. So when FAT has a higher mean than OC1, it
is significant at a small α level for the paired t-test even though the difference is small. This
is a strong indication that Perceptron Decision Trees with large margins generalize better.
The 10-fold cross-validation means and p values are summarized in Table 2.
4.3 Results of MOC1
Description of MOC1
MOC1 (Margin OC1) is a variation of OC1, which modifies the objective function of OC1
to consider the size of the margin. The underlying philosophy is to find a separating plane
with a tradeoff between training accuracy and the size of the margin at each node. This idea
is motivated by the Support Vector Machine for the linearly non-separable case, which
minimizes the classification error and maximizes the margin at the same time. SVM with
soft margin minimizes the sum of misclassification errors and a constant C multiplying the
reciprocal of the soft margin. The SVM thus tries to find a split with high classification accuracy
and a large soft margin. Analogously, MOC1 minimizes the sum of the impurity measure and a
constant times the reciprocal of the hard margin.
The MOC1 algorithm minimizes the following objective function:
Objective_MOC1 = Objective_OC1 + weight × (1 / current_margin),
where
- Objective_OC1 is the impurity measure of OC1; in this study, the default twoing
rule is used as the impurity measure.
- current_margin is the sum of the perpendicular distances to the hyperplane of the two
nearest points on the different sides of the current separating hyperplane.
- α is a scalar weight, α ∈ [0, 1].
- weight is obtained by scaling α with a function of the number of points at the current node.
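One plausible reading of this objective is sketched below; since the exact node-dependent weighting is not fully specified above, the adaptive_weight function is only an assumed, arbitrary illustration, and twoing_value is the function from the earlier sketch.

import math

def adaptive_weight(n_points):
    # Assumed, arbitrary example of a weight that depends on the node size.
    return math.log(n_points + 1) / (n_points + 1)

def moc1_objective(L, R, current_margin, alpha, n_points):
    """Impurity term (1/twoing value, as minimized by OC1) plus a margin penalty."""
    impurity = 1.0 / max(twoing_value(L, R), 1e-12)
    weight = alpha * adaptive_weight(n_points)          # alpha in [0, 1]
    return impurity + weight / max(current_margin, 1e-12)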
α determines how much the large margin is weighted in selecting the split. Tuning α could
improve the performance. When determining the weight of the margin, we also take the
number of points at the current node into consideration. The idea is that a constant weight
of the margin for all nodes is not good. The weight should be able to adapt to the position of
the current node and the number of training examples at the current node. Since we are not particularly
interested in finding the tree with the highest possible accuracy, but rather in demonstrating that
large margins can improve the generalization, we did not tune α for each data set to
achieve the highest possible accuracy. We set the same value of α for all data sets. In other words, the
results of MOC1 presented below are not the best results possible.
Comparison of MOC1 and OC1
As in the previous section, we use 10-fold cross-validation to measure the learning ability
of the algorithms MOC1 and OC1. To test the difference between the means of MOC1 and
OC1, here again a paired t-test is used.
From Figure 3, we can see that MOC1 has a higher 10-fold cross-validation mean than that
of OC1 on 8 of the 10 data sets, and 5 of them are significantly higher; OC1 has higher
means on the other two data sets (Cancer, Prognosis), but the differences are tiny and neither is
significant. Overall, MOC1 outperforms OC1 on 6 of the 10 data sets and performs as well as
OC1 on the other four. Of the six data sets studied in [18], MOC1 outperforms OC1 on five
of them and performs as well as OC1 on the final one (Cancer). See Table 2 for the respective
means and p values.
Figure 3: Comparison of the 10-fold CV results of MOC1 versus OC1 (x axis: OC1 10-fold CV average accuracy; y axis: MOC1 10-fold CV average accuracy; points are marked as significant or not significant, and the line x=y is drawn for reference). If the point is above the line, it indicates the 10-fold CV average of MOC1 is higher than that of OC1, and vice versa. The figure shows that MOC1 outperforms OC1 on 6 out of 10 data sets, and performs as well as OC1 on the other four data sets.
4.4 Results of MOC2
Description of MOC2
MOC2 uses a modified twoing rule, which directly incorporates the idea of a large margin into the
impurity measure. Unlike MOC1, MOC2 uses a soft margin. It treats points falling within
the margin and outside of the margin differently. Only the impurity measure is altered. The
rest is same as in the standard OC1 algorithm.
The modified twoing rule is computed from the following quantities:
n = total number of instances at the current node,
c = number of classes (c = 2 for two-class problems),
|T_L| = number of instances on the left of the split, i.e. with w^T x ≤ 0,
|T_R| = number of instances on the right of the split, i.e. with w^T x > 0,
L_i = number of instances in category i on the left of the split,
R_i = number of instances in category i on the right of the split,
|M_TL| = number of instances on the left of the split falling outside the margin,
|M_TR| = number of instances on the right of the split falling outside the margin,
M_Li = number of instances in category i on the left of the split falling outside the margin,
M_Ri = number of instances in category i on the right of the split falling outside the margin.
In the modified twoing rule, our goal is, at each node, to find a split with fewer points falling
within the margin (in between the two margin hyperplanes), good accuracy outside the margin,
and good overall accuracy. Here again, we try to achieve a balance between classification accuracy
and the size of the margin. By doing this, we want to push the two classes apart from the
separating hyperplane as far as possible while maintaining a reasonably good classification
accuracy and, hence, improve the generalization of the induced decision tree. The advantage of
MOC2 is that there are no free parameters to tune.
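Since the exact formula of the modified rule is not reproduced above, the sketch below uses one natural instantiation purely as an illustration: the ordinary twoing value computed only over the points that fall outside the margin band. This is an assumption on our part (including the choice of |w·x + b| ≥ 1 as the band), not necessarily MOC2's exact rule; twoing_value is the function from the earlier sketch.

import numpy as np

def modified_twoing_value(X, y, w, b, classes):
    """Illustrative assumption: twoing value restricted to points outside the margin."""
    scores = X @ w + b
    outside = np.abs(scores) >= 1.0                  # points outside the assumed margin band
    left, right = outside & (scores < 0), outside & (scores > 0)
    ML = [int(np.sum(left & (y == c))) for c in classes]
    MR = [int(np.sum(right & (y == c))) for c in classes]
    return twoing_value(ML, MR)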
Comparison of MOC2 and OC1
As in previous section, 10-fold cross-validation is used to measure the learning ability of the
algorithms MOC2 and OC1. Paired t-tests are used to test the difference between the means of
MOC2 and OC1.
From Figure 4 we can see that MOC2 has a higher mean on 9 out of the 10 data sets, and has
slightly lower mean on only one data set (Housing). Of the 9 higher means, 5 are significantly
higher. The one lower mean is not significant. Overall, MOC2 outperforms OC1 on 5 out of
Figure 4: Comparison of the 10-fold CV results of MOC2 versus OC1 (x axis: OC1 10-fold CV average accuracy; y axis: MOC2 10-fold CV average accuracy; points are marked as significant or not significant, and the line x=y is drawn for reference). If the point is above the line, it indicates the 10-fold CV mean of MOC2 is higher than that of OC1 on that data set, and vice versa. The figure shows that MOC2 outperforms OC1 on 5 out of 10 data sets, and performs as well as OC1 on the other 5 data sets.
the 10 data sets and performs as well as OC1 on the other 5. Of the six data sets studied in
[18], MOC2 outperformed OC1 on three of them, and performed as well as OC1 on the other
three. The respective means and p values are summarized in Table 2.
The modified twoing rule opens a new way of measuring the goodness of a split, which
directly incorporates the generalization factor into the measure. In our experiments, it has
been proven to be a useful measure.
4.5 Tree Sizes
For FAT, the tree sizes are exactly the same as OC1, since FAT PDT has the same tree
structure as OC1 PDT. FAT only replaces splits at nodes of the OC1 PDT with large-
margin perceptrons which perform exactly the same splits. Of the ten data sets, MOC1
induced five smaller trees, one the same size tree, and four larger trees when compared with
              OC1 (= FAT)        MOC1              MOC2
              Leaves   Depth     Leaves   Depth     Leaves   Depth
Bright         5.40     2.80      6.20     3.20      5.70     2.90
Bupa           5.00     2.80      2.10     1.10      7.40     3.60
Cancer         2.50     1.30      4.00     2.50      2.90     1.50
Heart          6.10     2.10      3.30     2.00      2.10     1.10
Housing       10.00     4.20      7.10     3.80      6.40     3.00
Iris           3.20     2.10      3.20     2.10      3.00     2.00
Prognosis      3.60     2.00      2.30     1.20      2.20     1.10
Sonar          4.30     2.60      6.10     3.30      5.90     2.90
Table 1: 10-fold CV average tree size of OC1, FAT, MOC1 and MOC2
            OC1      FAT              MOC1             MOC2             Best
                     x (p value)      x (p value)      x (p value)      classifier
Bright      98.46    98.62 (.05)      98.94 (.10)      98.82 (.10)      MOC1
Cancer      95.89    96.48 (.05)      95.60            95.89            FAT
Heart       73.40    76.43 (.12)      75.76 (.21)      77.78 (.10)      MOC2
Housing     81.03    83.20 (.05)      82.02            80.23            FAT
Iris        95.33    96.00 (.17)      95.33            96.00            FAT
Pima        71.09    71.48 (.04)      73.18 (.08)      72.53 (.23)      MOC1
Prognosis   78.91    74.15            78.23            79.59            MOC2
Table 2: 10-fold CV means of OC1, FAT, MOC1 and MOC2
OC1. MOC2 induced five smaller trees and five bigger trees compared with OC1. We did
not find a consistent pattern in the tree sizes. Table 1 lists the tree sizes of OC1, FAT, MOC1 and
MOC2.
4.6 Summary of experimental results
The theory states that maximizing the margins between the data points on each side of
the separating hyperplane in the perceptron decision tree will improve the error bounds, so the
perceptron decision tree will be more likely to generalize well. But the theory does not
guarantee that a specific classifier has a low error rate.
From the 10-fold cross-validation results of the 10 data sets, FAT has 9 higher means than
OC1 and they are all significantly higher; MOC1 has 7 higher means, and 6 of them are
significantly higher; MOC2 has 8 higher means and 5 of them are significantly higher. Equal
or lower means only occurred on 3 data sets: on Cancer, MOC1 has a slightly smaller mean
than OC1 and MOC2 has the same mean as OC1; on Housing, MOC2 has a slightly smaller
mean than OC1; on Prognosis, FAT has a significantly smaller mean, and MOC1 also has
a slightly smaller mean, but the difference is not significant. Of the classifiers with the highest
mean, FAT produced four, MOC1 and MOC2 each produced three, and OC1 produced none.
From the experiments, we believe that PDTs with large margin are more likely to have
smaller variance of performance too. In our experiments, in most of the cases, FAT, MOC1
and MOC2 produce classifiers with smaller variances, and many of them are significantly
smaller, though very occasionally they produce classifiers with significantly larger variance.
However, we cannot draw any confident conclusion about the variances. We therefore did
not present our study on variances here.
In short, the experimental results show that finding the separating hyperplane with large
margin at each node of a perceptron decision tree can improve the error bound, hence the
PDT is more likely to have a higher average accuracy, i.e. generalizes better. Furthermore,
we believe, by improving error bounds through margin maximization, the learning algorithm
will perform more consistently, and be more likely to have smaller variance as well.
5 Conclusions
The experimental results presented in this paper clearly show that enlarging the margin
does improve the generalization, and that this bias can be inserted into the growth algorithm
itself, providing trees which are specifically built to minimize the theoretical bound
on generalization error. Such trees do not lose any of their other desirable features, such as
readability, ease of maintenance and updating, flexibility and speed.
Furthermore, the theoretical analysis of the algorithms shows that the dimension of the
input space does not affect the generalization performance: it is hence possible to conceive
of Perceptron Decision Trees in a high-dimensional feature space, which take advantage of
kernels and margin-maximization such as Support Vector Machines. This would provide
as a side effect a very natural approach to multi-class classification with Support Vector
Machines.
Other theoretical results exist indicating that the tree size is not necessarily a good measure of
capacity. Our analysis also shows how to take advantage of this theoretical observation, and
design learning algorithms which control hypothesis complexity by acting on the complexity
of the node-classifiers and hence that of the whole tree. All three of the proposed approaches,
the post-processing method FAT, and the two with margin based splitting criteria MOC1
and MOC2 led to significant improvement over the baseline OC1 method. It is an open
question which method is best, but maximizing margins should be a consideration of every
PDT algorithm.
--R
Function learning from interpolation
Generalization performance of Support Vector Machines and other pattern classifiers.
Robust Linear Programming discrimination of two linearly inseparable sets
Multicategory discrimination via linear program- ming
Serial and parallel multicategory discrimination
On Support Vector Decision Trees for Database Marketing
Olshen R.
Bayesian Classifiers are Large Margin Hyperplanes in a Hilbert Space
Approximate Statistical Tests for Comparing Supervised Classification Learning Algorithms.
A study of Cross-Validation and Bootstraping for Accuracy Estimation and Model Selection
Pattern Recognition via Linear Programming: Theory and application to medical diagnosis
Kasif S.
Assessing Relevance Determination Methods Using DELVE Generalization in Neural Networks and Machine Learning
Learning Decision Trees Using the Minimum Description Length Principle
Growing and Pruning Neural Tree Networks
On Comparing Classifiers: Pitfalls to Avoid and a Recommended Approach.
Boosting the Margin: A New Explanation for the Effectiveness of Voting Methods
Structural Risk Minimization over Data-Dependent Hierarchies
"Neural trees: a new tool for classification"
Estimation of Dependences Based on Empirical Data
The Nature of Statistical Learning Theory
On the Uniform Convergence of Relative Frequencies of Events to their Probabilities
University of California
--TR
Inferring decision trees using the minimum description length principle
C4.5: programs for machine learning
Multivariate Decision Trees
The nature of statistical learning theory
Networks
Fat-shattering and the learnability of real-valued functions
Scale-sensitive dimensions, uniform convergence, and learnability
Generalization performance of support vector machines and other pattern classifiers
Approximate statistical tests for comparing supervised classification learning algorithms
On Comparing Classifiers
Growing and Pruning Neural Tree Networks
Boosting the margin
Bayesian Classifiers Are Large Margin Hyperplanes in a Hilbert Space
Function learning from interpolation
--CTR
Volkan Vural , Jennifer G. Dy, A hierarchical method for multi-class support vector machines, Proceedings of the twenty-first international conference on Machine learning, p.105, July 04-08, 2004, Banff, Alberta, Canada
Nello Cristianini , Colin Campbell , Chris Burges, Editorial: Kernel Methods: Current Research and Future Directions, Machine Learning, v.46 n.1-3, p.5-9, 2002
Laurence Hirsch , Robin Hirsch , Masoud Saeedi, Evolving Lucene search queries for text classification, Proceedings of the 9th annual conference on Genetic and evolutionary computation, July 07-11, 2007, London, England
Martin Anthony, On the generalization error of fixed combinations of classifiers, Journal of Computer and System Sciences, v.73 n.5, p.725-734, August, 2007
Martin Anthony, Generalization Error Bounds for Threshold Decision Lists, The Journal of Machine Learning Research, 5, p.189-217, 12/1/2004 | perceptron;learning theory;capacity control;decision trees;learning algorithm |
369881 | Cascade Generalization. | Using multiple classifiers for increasing learning accuracy is an active research area. In this paper we present two related methods for merging classifiers. The first method, Cascade Generalization, couples classifiers loosely. It belongs to the family of stacking algorithms. The basic idea of Cascade Generalization is to use sequentially the set of classifiers, at each step performing an extension of the original data by the insertion of new attributes. The new attributes are derived from the probability class distribution given by a base classifier. This constructive step extends the representational language for the high level classifiers, relaxing their bias. The second method exploits tight coupling of classifiers, by applying Cascade Generalization locally. At each iteration of a divide and conquer algorithm, a reconstruction of the instance space occurs by the addition of new attributes. Each new attribute represents the probability that an example belongs to a class given by a base classifier. We have implemented three Local Generalization Algorithms. The first merges a linear discriminant with a decision tree, the second merges a naive Bayes with a decision tree, and the third merges a linear discriminant and a naive Bayes with a decision tree. All the algorithms show an increase of performance, when compared with the corresponding single models. Cascade also outperforms other methods for combining classifiers, like Stacked Generalization, and competes well against Boosting at statistically significant confidence levels. | Introduction
The ability of a chosen classification algorithm to induce a good generalization depends
on the appropriateness of its representation language to express generalizations
of the examples for the given task. The representation language for a standard
decision tree is the DNF formalism that splits the instance space by axis-parallel
hyper-planes, while the representation language for a linear discriminant function is
a set of linear functions that split the instance space by oblique hyper planes. Since
different learning algorithms employ different knowledge representations and search
heuristics, different search spaces are explored and diverse results are obtained. In
statistics, Henery (1997) refers to rescaling as a method used when some classes
are over-predicted leading to a bias. Rescaling consists of applying the algorithms
in sequence, the output of an algorithm being used as input to another algorithm.
The aim would be to use the estimated probabilities W derived from a
learning algorithm as input to a second learning algorithm, the purpose of which is
to produce an unbiased estimate Q(C_i | W) of the conditional probability for class C_i.
The problem of finding the appropriate bias for a given task is an active research
area. We can consider two main lines of research: on the one hand, methods that
try to select the most appropriate algorithm for the given task, for instance Schaf-
fer's selection by cross validation (Schaffer, 1993), and on the other hand, methods
that combine predictions of different algorithms, for instance Stacked Generalization
(Wolpert, 1992). The work presented here follows the second line of
research. Instead of looking for methods that fit the data using a single representation
language, we present a family of algorithms, under the generic name of
Cascade Generalization, whose search space contains models that use different representation
languages. Cascade generalization performs an iterative composition of
classifiers. At each iteration a classifier is generated. The input space is extended
by the addition of new attributes. These are in the form of probability class distributions
which are obtained, for each example, by the generated classifier. The
language of the final classifier is the language used by the high level generalizer.
This language uses terms that are expressions from the language of low level clas-
sifiers. In this sense, Cascade Generalization generates a unified theory from the
base theories generated earlier.
Used in this form, Cascade Generalization performs a loose coupling of classi-
fiers. The method can be applied locally at each iteration of a divide-and-conquer
algorithm generating a tight coupling of classifiers. This method is referred to as
Local Cascade Generalization. In our implementation, it generates a decision tree,
which has interesting relations with multivariate trees (Brodley & Utgoff, 1995)
and neural networks, namely with the Cascade Correlation architecture (Fahlman
& Lebiere, 1990). Cascade Generalization and Local Cascade Generalization
are described and analyzed in this paper. The experimental study shows that this
methodology usually improves accuracy and decreases theory size at statistically
significant levels.
In the next Section we review previous work in the area of multiple models.
In Section 3 we present the framework of Cascade Generalization. In Section 4
we discuss the strengths and weaknesses of the proposed method in comparison
to other approaches to multiple models. In Section 5 we perform an empirical
evaluation of Cascade Generalization using UCI data sets. In Section 6 we define a
new family of multi-strategy algorithms that apply Cascade Generalization locally.
In Section 7, we empirically evaluate Local Cascade Generalization using UCI data
sets. In Section 8, we examine the behavior of Cascade Generalization providing
insights about why it works. The last Section summarizes the main points of the
work and discusses future research directions.
2. Related work on combining classifiers
Voting is the most common method used to combine classifiers. As pointed out
by Ali and Pazzani (1996), this strategy is motivated by the Bayesian learning
theory which stipulates that in order to maximize the predictive accuracy, instead
of using just a single learning model, one should ideally use all of the models in the
hypothesis space. The vote of each hypothesis should be weighted by the posterior
probability of that hypothesis given the training data. Several variants of the voting
method can be found in the machine learning literature, from uniform voting where
the opinion of all base classifiers contributes to the final classification with the same
strength, to weighted voting, where each base classifier has a weight associated, that
could change over the time, and strengthens the classification given by the classifier.
Another approach to combine classifiers consists of generating multiple models.
Several methods appear in the literature. In this paper we analyze them through
Bias-Variance analysis (Kohavi & Wolpert, 1996): methods that mainly reduce
variance, such as Bagging and Boosting, and methods that mainly reduce bias, such
as Stacked Generalization and Meta-Learning.
2.1. Variance reduction methods
Breiman (1996) proposes Bagging, that produces replications of the training set
by sampling with replacement. Each replication of the training set has the same
size as the original data but some examples do not appear in it while others may
appear more than once. From each replication of the training set a classifier is
generated. All classifiers are used to classify each example in the test set, usually
using a uniform vote scheme.
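As a brief sketch of the procedure (assuming scikit-learn and NumPy, with decision trees chosen as the base learner purely for illustration), Bagging can be written as follows.

import numpy as np
from collections import Counter
from sklearn.tree import DecisionTreeClassifier

def bagging_fit_predict(X, y, X_test, n_models=10, seed=0):
    """Train n_models trees on bootstrap replications of (X, y); predict by uniform vote."""
    rng = np.random.default_rng(seed)
    models = []
    for _ in range(n_models):
        idx = rng.integers(0, len(X), size=len(X))    # sample with replacement
        models.append(DecisionTreeClassifier().fit(X[idx], y[idx]))
    votes = np.array([m.predict(X_test) for m in models])
    return np.array([Counter(col).most_common(1)[0][0] for col in votes.T])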
The Boosting algorithm of Freund and Schapire (1996) maintains a weight for
each example in the training set that reflects its importance. Adjusting the weights
causes the learner to focus on different examples leading to different classifiers.
Boosting is an iterative algorithm. At each iteration the weights are adjusted in
order to reflect the performance of the corresponding classifier. The weight of
the misclassified examples is increased. The final classifier aggregates the learned
classifiers at each iteration by weighted voting. The weight of each classifier is a
function of its accuracy.
2.2. Bias reduction methods
Wolpert (1992) proposed Stacked Generalization, a technique that uses learning at
two or more levels. A learning algorithm is used to determine how the outputs of
the base classifiers should be combined. The original data set constitutes the level
zero data. All the base classifiers run at this level. The level one data are the
outputs of the base classifiers. Another learning process occurs using as input the
level one data and as output the final classification. This is a more sophisticated
technique of cross validation that could reduce the error due to the bias.
Chan and Stolfo (1995) present two schemes for classifier combination: arbiter
and combiner. Both schemes are based on meta learning, where a meta-classifier
is generated from meta data, built based on the predictions of the base classifiers.
An arbiter is also a classifier and is used to arbitrate among predictions generated
by different base classifiers. The training set for the arbiter is selected from all the
available data, using a selection rule. An example of a selection rule is "Select the
examples whose classification the base classifiers cannot predict consistently". This
arbiter, together with an arbitration rule, decides a final classification based on the
base predictions. An example of an arbitration rule is "Use the prediction of the
arbiter when the base classifiers cannot obtain a majority". Later (Chan & Stolfo
1995a), this framework was extended using arbiters/combiners in an hierarchical
fashion, generating arbiter/combiner binary trees.
Skalak (1997) presents a dissertation discussing methods for combining classifiers.
He presents several algorithms most of which are based on Stacked Generalization
which are able to improve the performance of Nearest Neighbor classifiers.
Brodley (1995) presents MCS, a hybrid algorithm that combines, in a single tree,
nodes that are univariate tests, multivariate tests generated by linear machines and
instance based learners. At each node MCS uses a set of If-Then rules to perform
a hill-climbing search for the best hypothesis space and search bias for the given
partition of the dataset. The set of rules incorporates knowledge of experts. MCS
uses a dynamic search control strategy to perform an automatic model selection.
MCS builds trees which can apply a different model in different regions of the
instance space.
2.3. Discussion
Results of Boosting or Bagging are quite impressive. Using 10 iterations (i.e. generating
10 classifiers) Quinlan (1996) reports reductions of the error rate between
10% and 19%. Quinlan argues that these techniques are mainly applicable for unstable
classifiers. Both techniques require that the learning system not be stable, to
obtain different classifiers when there are small changes in the training set. Under
an analysis of the bias-variance decomposition of the error (Kohavi & Wolpert, 1996),
the reduction of the error observed with Boosting or Bagging is mainly due to the
reduction in the variance. Breiman (1996) reveals that Boosting and Bagging can
only improve the predictive accuracy of learning algorithms that are "unstable".
As mentioned in Kohavi and Bauer (1998) the main problem with Boosting seems
to be robustness to noise. This is expected because noisy examples tend to be mis-
classified, and the weight will increase for these examples. They present several
cases were the performance of Boosting algorithms degraded compared to the original
algorithms. They also point out that Bagging improves in all datasets used in
the experimental evaluation. They conclude that although Boosting is on average
better than Bagging, it is not uniformly better than Bagging. The higher accuracy
of Boosting over Bagging in many domains was due to a reduction of bias. Boosting
was also found to frequently have higher variance than Bagging.
Boosting and Bagging require a considerable number of member models because
they rely on varying the data distribution to get a diverse set of models from a
single learning algorithm.
Wolpert (1992) says that successful implementation of Stacked Generalization for
classification tasks is a "black art", and the conditions under which stacking works
are still unknown:
For example, there are currently no hard and fast rules saying what level 0
generalizers should we use, what level 1 generalizer one should use, what k
numbers to use to form the level 1 input space, etc.
Recently, Ting and Witten (1997) have shown that successful stacked generalization
requires the use of output class distributions rather than class predictions. In
their experiments only the MLR algorithm (a linear discriminant) was suitable for use as the level 1 generalizer.
3. Cascade generalization
Consider a learning set D = {(~x_n, y_n), n = 1, . . . , N}, where ~x_n = (x_1, . . . , x_m) is
a multidimensional input vector, and y_n is the output variable. Since the focus of
this paper is on classification problems, y_n takes values from a set of predefined
values, that is y_n ∈ {Cl_1, . . . , Cl_c}, where c is the number of classes. A classifier ℑ
is a function that is applied to the training set D to construct a model ℑ(D).
The generated model is a mapping from the input space X to the discrete output
variable Y. When used as a predictor, represented by ℑ(~x, D), it assigns a y
value to the example ~x. This is the traditional framework for classification tasks.
Our framework requires that the predictor ℑ(~x, D) outputs a vector representing a
conditional probability distribution [p1, . . . , pc], where p_i represents the probability
that the example ~x belongs to class i, i.e. P(y = Cl_i | ~x). The class that is assigned to
the example ~x is the one that maximizes this last expression. Most of the commonly
Other classifiers (e.g., C4.5 (Quinlan, 1993)), have a different strategy for classifying
an example, but it requires few changes to obtain a probability class distribution.
We define a constructive operator φ(~x, M), where M represents the model ℑ(D)
for the training data D, while ~x represents an example. For the example ~x the
operator φ concatenates the input vector ~x with the output probability class dis-
tribution. If the operator φ is applied to all examples of dataset D' we obtain a
new dataset D''. The cardinality of D'' is equal to the cardinality of D' (i.e. they
have the same number of examples). Each example ~x ∈ D'' has an equivalent
example in D', but augmented with #c new attributes, where #c represents the
number of classes. The new attributes are the elements of the vector of class probability
distribution obtained when applying classifier ℑ(D) to the example ~x. This
can be represented formally as follows:
A(ℑ(D), D') = { φ(~x, ℑ(D)) : ∀ ~x ∈ D' }    (1)
Here A(ℑ(D), D') represents the application of the model ℑ(D) to data set D' and
represents, in effect, a dataset. This dataset contains all the examples that appear
in D' extended with the probability class distribution generated by the model ℑ(D).
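A minimal sketch of this constructive step, assuming scikit-learn-style estimators with a predict_proba method (the function name extend is ours), simply appends the predicted class probabilities as new columns:

import numpy as np

def extend(model, X_prime):
    """A(model, D'): return the examples of D' extended with the probability
    class distribution that 'model' assigns to each of them."""
    probs = model.predict_proba(X_prime)     # one new column per class
    return np.hstack([X_prime, probs])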
Cascade generalization is a sequential composition of classifiers that, at each
generalization level, applies the Φ operator. Given a training set L, a test set T,
and two classifiers ℑ1 and ℑ2, Cascade generalization proceeds as follows. Using classifier ℑ1, it
generates the Level 1 data:

Level1train = Φ(L, A(ℑ1(L), L))
Level1test = Φ(T, A(ℑ1(L), T))

Classifier ℑ2 learns on the Level 1 training data and classifies the Level 1 test data:

A(ℑ2(Level1train), Level1test)

These steps perform the basic sequence of a Cascade Generalization of classifier ℑ2
after classifier ℑ1. We represent the basic sequence by the symbol ∇. The previous
composition can be represented succinctly by:

ℑ2∇ℑ1 = A(ℑ2(Level1train), Level1test)

which, by applying the two equations above, is equivalent to:

ℑ2∇ℑ1 = A(ℑ2(Φ(L, A(ℑ1(L), L))), Φ(T, A(ℑ1(L), T)))
This is the simplest formulation of Cascade Generalization. Some possible extensions
include the composition of n classifiers and the parallel composition of
classifiers.

A composition of n classifiers is represented by:

ℑn∇ℑn−1∇· · ·∇ℑ1

In this case, Cascade Generalization generates n−1 levels of data. The final model
is the one given by the classifier ℑn. This model may contain terms in the form
of conditions based on attributes built by the previously applied classifiers.

A variant of Cascade Generalization, which includes several algorithms in parallel,
can be represented in this formalism by:

ℑn∇[ℑ1, ..., ℑn−1]

The classifiers ℑ1, ..., ℑn−1 run in parallel. The operator Φp(L, [A(ℑ1(L), L), ..., A(ℑn−1(L), L)])
returns a new data set L′ which contains the same number of examples as L. Each
example in L′ contains (n − 1) × #cl new attributes, where #cl is the number of
classes. Each algorithm in the set contributes #cl new attributes.
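A minimal sketch of the sequential composition ℑn∇ℑn−1∇· · ·∇ℑ1, under the same assumptions as the previous sketch (scikit-learn estimators with predict_proba; the function name and the example classifiers are hypothetical stand-ins, with GaussianNB and DecisionTreeClassifier playing the roles of the paper's naive Bayes and C4.5):

    import numpy as np
    from sklearn.base import clone
    from sklearn.naive_bayes import GaussianNB
    from sklearn.tree import DecisionTreeClassifier

    def cascade_generalization(classifiers, X_train, y_train, X_test):
        """classifiers[0] is applied first (lowest level); classifiers[-1] gives the final predictions."""
        L, T = X_train, X_test
        for base in classifiers[:-1]:
            model = clone(base).fit(L, y_train)
            L = np.hstack([L, model.predict_proba(L)])    # Level-(i+1) training data
            T = np.hstack([T, model.predict_proba(T)])    # Level-(i+1) test data
        return clone(classifiers[-1]).fit(L, y_train).predict(T)

    # A tree after naive Bayes, roughly the C4.5-after-naive-Bayes composition:
    # y_pred = cascade_generalization([GaussianNB(), DecisionTreeClassifier()], Xtr, ytr, Xte)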
3.1. An illustrative example
In this example we will consider the Monks-2 data set from the UCI repository (Blake,
Keogh, & Merz, 1999). The Monks data sets describe an artificial robot domain and are
quite well known in the Machine Learning community. The robots are described by
six different attributes and classified into one of two classes. We have chosen the
Monks-2 problem because it is known that this is a difficult task for systems that
learn decision trees in attribute-value formalism. The decision rule for the problem
is: "The robot is O.K. if exactly two of the six attributes have their first
value". This problem is similar to parity problems. It combines different attributes
in a way that makes it complicated to describe in DNF or CNF using the given
attributes only.
Some examples of the original training data are presented:
head, body, smiling, holding, color, tie, Class
round, round, yes, sword, red, yes, not Ok
round, round, no, balloon, blue, no, OK
Using ten-fold cross validation, the error rate of C4.5 is 32.9%, and that of naive
Bayes is 34.2%. The composite model C4.5 after naive Bayes, C4.5∇naive Bayes,
operates as follows. The Level 1 data is generated using naive Bayes as the
classifier. Naive Bayes builds a model from the original training set. This model
is used to compute a probability class distribution for each example in the training
and test sets. The Level 1 data is obtained by extending the training and test sets with the
probability class distribution given by naive Bayes. The examples shown earlier
take the form of:
head, body, smiling, holding, color, tie, P(OK), P(not Ok), Class
round, round, yes, sword, red, yes, 0.135, 0.864, not Ok
round, round, no, balloon, blue, no, 0.303, 0.696, OK
where the new attribute P(OK) (P(not OK)) is the probability that the example
belongs to class OK(not OK).
C4.5 is trained on the Level 1 training data and classifies the Level 1 test data.
The composition C4.5∇naive Bayes obtains an error rate of 8.9%, which is substantially
lower than the error rates of both C4.5 and naive Bayes. Neither of the
algorithms in isolation can capture the underlying structure of the data. In this
case, Cascade was able to achieve a notable increase of performance. Figure 1
presents one of the trees generated by C4.5∇naive Bayes.

[Figure 1. Tree generated by C4.5∇naive Bayes.]
The tree contains a mixture of some of the original attributes (smiling, tie) with
some of the new attributes constructed by naive Bayes (P(OK), P(not Ok)). At
the root of the tree appears the attribute P(OK). This attribute represents a particular
class probability (Class = OK) calculated by naive Bayes. The decision
tree generated by C4.5 uses the constructed attributes given by naive Bayes, but
redefines the thresholds. Because this is a two-class problem, the Bayes rule
uses P(OK) with a threshold of 0.5, while the decision tree sets the threshold to 0.27.
Those decision nodes are a kind of function given by the Bayes strategy. For example,
the attribute P(OK) can be seen as a function that computes p(Class = OK | ~x)
using Bayes' theorem. On some branches the decision tree performs more than
one test of the class probabilities. In a certain sense, this decision tree combines two
representation languages: that of naive Bayes with the language of decision trees.
The constructive step performed by Cascade inserts new attributes that incorporate
new knowledge provided by naive Bayes. It is this new knowledge that allows
the significant increase of performance verified with the decision tree, despite the
fact that naive Bayes cannot fit complex spaces well. In the Cascade framework
lower level learners delay the decisions to the high level learners. It is this kind of
collaboration between classifiers that Cascade Generalization explores.
4. Discussion
Cascade Generalization belongs to the family of stacking algorithms. Wolpert
(1992) defines Stacked Generalization as a general framework for combining classifiers:
it involves taking the predictions from several classifiers and using these
predictions as the basis for the next stage of classification.
Cascade Generalization may be regarded as a special case of Stacked Generalization,
mainly due to the layered learning structure. Some aspects that make Cascade
Generalization novel are:
• The new attributes are continuous. They take the form of a probability class
distribution. Combining classifiers by means of categorical classes loses information about the
strength of each classifier's prediction. The use of probability class distributions
allows us to exploit that information.
• All classifiers have access to the original attributes. Any new attribute built
at lower layers is treated in exactly the same way as any of the original
attributes.

• Cascade Generalization does not use internal cross validation. This aspect
improves the computational efficiency of Cascade.
Many of these ideas have been discussed in the literature. Ting (1997) has used probability
class distributions as level 1 attributes, but did not use the original attributes.
The possibility of using the original attributes and class predictions as level 1 attributes
was pointed out by Wolpert in the original paper on Stacked Generalization.
Skalak (1997) reports that Schaffer used the original attributes and
class predictions as level 1 attributes, but with disappointing results. In our view
this could be explained by the fact that he combined three algorithms with similar
behavior from a bias-variance analysis: decision trees, rules, and neural networks
(see Section 8.2 for more details on this point). Chan and Stolfo (1995a) have used
the original attributes and class predictions in a scheme denoted class-attribute-combiner,
with mixed results.
Exploiting all these aspects is what makes Cascade Generalization succeed. Moreover,
this particular combination implies some conceptual differences:
• While Stacking is parallel in nature, Cascade is sequential. The effect is that
intermediate classifiers have access to the original attributes plus the predictions
of low level classifiers. An interesting possibility, that has not been explored
in this paper, is to provide the classifier n with the original attributes plus the
predictions provided by classifier
• The ultimate goal of Stacked Generalization is combining predictions. The
goal of Cascade Generalization is to obtain a model that can use terms in the
representation language of lower level classifiers.

• Cascade Generalization provides rules to choose the low level classifiers and the
high level classifiers. This aspect will be developed in the following sections.
5. Empirical evaluation
5.1. The algorithms
Ali and Pazzani (1996) and Tumer and Ghosh (1995) present empirical and analytical
results which show that "the combined error rate depends on the error rate
of individual classifiers and the correlation among them". They suggest the use of
"radically different types of classifiers" to reduce the error correlation. This was
our criterion when selecting the algorithms for the experimental work. We use three
classifiers that have different behaviors: a naive Bayes, a linear discriminant, and
a decision tree.
5.1.1. Naive Bayes

The Bayes rule optimally predicts the class of an unseen example, given a training set.
The chosen class is the one that maximizes p(C_i | ~x) = p(C_i) p(~x | C_i) / p(~x). If the
attributes are independent given the class, p(~x | C_i) can be decomposed into the product
p(x_1 | C_i) × ... × p(x_k | C_i). Domingos and Pazzani (1997) show that
this procedure has a surprisingly good performance in a wide variety of domains,
including many where there are clear dependencies between attributes. In our
implementation of this algorithm, the required probabilities are estimated from
the training set. In the case of nominal attributes we use counts. Continuous
attributes were discretized into equal-sized intervals. This has been found to produce
better results than assuming a Gaussian distribution (Domingos & Pazzani, 1997;
Dougherty, Kohavi, & Sahami, 1995). The number of bins used is a function
of the number of different values observed on the training set, growing with
log(nr. of different values). This heuristic was used by Dougherty et al. (1995)
with good overall results. Missing values were treated as another possible value
for the attribute. In order to classify a query point, a naive Bayes classifier uses
all of the available attributes. Langley (1996) states that naive Bayes relies on an
important assumption that the variability of the dataset can be summarized by a
single probabilistic description, and that these are sufficient to distinguish between
classes. From an analysis of Bias-Variance, this implies that naive Bayes uses a
reduced set of models to fit the data. The result is low variance but if the data
cannot be adequately represented by the set of models, we obtain large bias.
5.1.2. Linear discriminant

A linear discriminant function is a linear composition of the attributes that maximizes
the ratio of its between-group variance to its within-group variance. It is assumed
that the attribute vectors for the examples of class C_i are independent and follow a
certain probability distribution with probability density function f_i. A new point with
attribute vector ~x is then assigned to the class for which the probability density
function f_i(~x) is maximal. This means that the points for each class are distributed
in a cluster centered at x̄_i. The boundary separating two classes is a hyperplane
(Michie, Spiegelhalter, & Taylor, 1994). If there are only two classes, a single
hyperplane is needed to separate them. In the general case of q classes, q − 1 hyperplanes
are needed to separate them. By applying the linear discriminant procedure described
below, we get q − 1 hyperplanes. The equation of each hyperplane is given by:

H_i = α_i + Σ_j β_ij x_j,  with β_i = S⁻¹ x̄_i and α_i = −(1/2) x̄_iᵀ S⁻¹ x̄_i
We use singular value decomposition (SVD) to compute S⁻¹. SVD is numerically
stable and is a tool for detecting sources of collinearity. This last aspect is
used as a method for reducing the features that enter each linear combination. A linear
discriminant uses all, or almost all, of the available attributes when classifying a
query point. Breiman (1996) states that, from a bias-variance analysis, the linear
discriminant is a stable classifier. It achieves stability by having a limited set
of models to fit the data. The result is low variance but, if the data cannot be
adequately represented by the set of models, large bias.
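A compact sketch of a pooled-covariance linear discriminant in which S⁻¹ is obtained through an SVD-based pseudo-inverse (which also copes with collinear attributes); this is our own illustration under those assumptions, not the authors' implementation:

    import numpy as np

    def fit_linear_discriminant(X, y):
        """Return a predictor based on per-class scores g_i(x) = x . (S^-1 mu_i) - 0.5 mu_i' S^-1 mu_i + log prior_i."""
        classes = np.unique(y)
        means = np.array([X[y == c].mean(axis=0) for c in classes])
        priors = np.array([np.mean(y == c) for c in classes])
        S = np.zeros((X.shape[1], X.shape[1]))            # pooled within-class scatter
        for c, mu in zip(classes, means):
            d = X[y == c] - mu
            S += d.T @ d
        S /= (len(X) - len(classes))
        S_inv = np.linalg.pinv(S)                         # pseudo-inverse computed via SVD
        W = means @ S_inv                                 # one weight vector per class
        b = -0.5 * np.sum(W * means, axis=1) + np.log(priors)
        return lambda Xq: classes[np.argmax(Xq @ W.T + b, axis=1)]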
5.1.3. Decision tree

Dtree is our version of a univariate decision tree. It uses the standard algorithm
to build a decision tree. The splitting criterion is the gain ratio. The stopping
criterion is similar to that of C4.5. The pruning mechanism is similar to the
pessimistic error pruning of C4.5. Dtree uses a kind of smoothing process that usually
improves the performance of tree-based classifiers. When classifying a new example,
the example traverses the tree from the root to a leaf. In Dtree, the example is
classified taking into account not only the class distribution at the leaf, but also all
class distributions of the nodes in the path. That is, all nodes in the path contribute
to the final classification. Instead of computing class distribution for all paths in
the tree at classification time, as it is done in Buntine (1990), Dtree computes a
class distribution for all nodes when growing the tree. This is done recursively
taking into account class distributions at the current node and at the predecessor
of the current node, using the recursive Bayesian update formula (Pearl, 1988):

P(C_i | e_{n+1}) = P(C_i | e_n) · P(e_{n+1} | e_n, C_i) / P(e_{n+1} | e_n)

where P(e_n) is the probability that an example falls at node n, which can be seen as
a shorthand for P(e ∈ E_n), where e represents the given example and E_n the set of
examples at node n. Similarly, P(e_{n+1} | e_n) is the probability that an example that
falls at node n goes to node n+1, and P(e_{n+1} | e_n, C_i) is the probability that an
example from class C_i goes from node n to node n+1. This recursive formulation
allows Dtree to compute efficiently the required class distributions. The smoothed
class distributions influence the pruning mechanism and the treatment of missing
values. It is the most relevant difference from C4.5.
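The smoothing step can be pictured with the following sketch, which walks a root-to-leaf path and applies the recursive update above. The node representation (a list of per-class counts along the path) and the Laplace correction m are our assumptions; without some such correction the chain of updates would simply reproduce the raw leaf frequencies.

    import numpy as np

    def smoothed_distribution(path_counts, m=1.0):
        """path_counts[k][i] = number of class-i training examples at the k-th node on the path."""
        counts = np.asarray(path_counts, dtype=float)
        n_classes = counts.shape[1]
        p = (counts[0] + m) / (counts[0].sum() + m * n_classes)          # P(Ci | root), smoothed
        for parent, child in zip(counts[:-1], counts[1:]):
            p_child_given_class = (child + m) / (parent + m * n_classes)  # P(e_{n+1} | e_n, Ci)
            p_child = (child.sum() + m) / (parent.sum() + m)              # P(e_{n+1} | e_n)
            p = p * p_child_given_class / p_child                         # Bayesian update
            p = p / p.sum()                                               # guard against rounding drift
        return p

    # Example: smoothed_distribution([[40, 60], [30, 10], [2, 9]]) is pulled toward the
    # ancestor distributions instead of returning the raw leaf frequencies (2/11, 9/11).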
A decision tree uses a subset of the available attributes to classify a query point.
Breiman (1996), among other researchers, notes that decision
trees are unstable classifiers: small variations in the training set can cause
large changes in the resulting predictors. They have high variance, but they can fit
any kind of data: the bias of a decision tree is low.
5.2. The experimental methodology
We have chosen 26 data sets from the UCI repository. All of them were previously
used in other comparative studies. To estimate the error rate of an algorithm on a
given dataset we use 10 fold stratified cross validation. To minimize the influence
of the variability of the training set, we repeat this process ten times, each time
using a different permutation of the dataset 1 . The final estimate is the mean of the
error rates obtained in each run of the cross validation. At each iteration of CV,
all algorithms were trained on the same training partition of the data. Classifiers
were also evaluated on the same test partition of the data. All algorithms were
used with their default settings.
Comparisons between algorithms were performed using paired t-tests with significance
level set at 99.9% for each dataset. We use the Wilcoxon matched-pairs
signed-ranks test to compare the results of the algorithms across datasets.
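For reference, this protocol (ten repetitions of stratified 10-fold cross-validation on identical partitions, paired t-tests per dataset, and a Wilcoxon signed-ranks test across datasets) can be sketched as follows; the function name is hypothetical and dataset loading and significance thresholds are left as placeholders:

    import numpy as np
    from sklearn.base import clone
    from sklearn.model_selection import RepeatedStratifiedKFold
    from scipy.stats import ttest_rel, wilcoxon

    def repeated_cv_errors(clf_a, clf_b, X, y, repeats=10, folds=10, seed=0):
        """Per-fold error rates of two classifiers trained and tested on identical partitions."""
        cv = RepeatedStratifiedKFold(n_splits=folds, n_repeats=repeats, random_state=seed)
        errs_a, errs_b = [], []
        for train_idx, test_idx in cv.split(X, y):
            for clf, errs in ((clf_a, errs_a), (clf_b, errs_b)):
                model = clone(clf).fit(X[train_idx], y[train_idx])
                errs.append(np.mean(model.predict(X[test_idx]) != y[test_idx]))
        return np.array(errs_a), np.array(errs_b)

    # Per-dataset comparison: paired t-test on the fold-level errors.
    #   t, p = ttest_rel(errs_a, errs_b)
    # Across datasets: Wilcoxon signed-ranks test on the mean error per dataset.
    #   stat, p = wilcoxon(mean_errors_a, mean_errors_b)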
Our goal in this empirical evaluation is to show that the Cascade Generalization
algorithms are plausible and compete quite well against other well-established
techniques. Stronger statements could only be made after a more extensive empirical
evaluation.
Table 1. Data characteristics and results of the base classifiers.

Dataset     #Classes  #Examples  Dtree      Bayes      Discrim    C4.5       C5.0
Australian  2         690        14.13±0.6  14.48±0.4  14.06±0.1  14.71±0.6  14.17±0.7
Balance     3         625        22.35±0.7
Banding     2         238        21.35±1.3  23.24±1.2  23.20±1.4  23.98±1.8  24.16±1.4
Diabetes    2         768        26.46±0.7
German      2         1000       27.93±0.7
Glass       6         213        30.14±2.4
Ionosphere
Iris        3         150        4.67±0.9   4.27±0.6
Letter      26        20000
Satimage    6         6435       13.47±0.2
Segment     7
Vehicle     4         846
Votes
Table 1 presents the error rate and the standard deviation of each base classifier.
Relative to each algorithm, a +(−) sign in the first column means that the error
rate of this algorithm is significantly better (worse) than Dtree. The error rate of
C5.0 is presented for reference. These results provide evidence, once more, that no
single algorithm is better overall.
5.3. Evaluation of cascade generalization
Tables 2 and 3 present the results of all pairwise combinations of the three base
classifiers and the most promising combinations of the three models. Each column
corresponds to a Cascade Generalization combination. For each combination we
have conducted paired t-tests. All composite models are compared against their components
using paired t-tests with the significance level set to 99.9%. The +(−) signs
indicate that the combination (e.g. C4.5∇Bayes) is significantly better than the component
algorithms (i.e. C4.5 and Bayes).
The results are summarized in Tables 4 and 5. The first line shows the arithmetic
mean across all datasets. It shows that the most promising combinations are
C4.5∇Discrim, C4.5∇naive Bayes, C4.5∇Discrim∇naive Bayes, and C4.5∇naive Bayes∇Discrim.
Table 2. Results of Cascade Generalization. Composite models are compared against their components.

Dataset      Bay∇Bay  Bay∇Dis  Bay∇C4.5  Dis∇Dis  Dis∇Bay  Dis∇C4.5
Australian   14.69±0.5 ++ 13.61±0.2
Balance      7.06±1.1
Banding      22.36±0.9  21.99±0.8 ++ 18.76±1.2  23.28±1.4  22.01±1.6
Breast
Credit       14.91±0.4 ++ 13.35±0.3  13.97±0.6  14.22±0.1 ++ 13.59±0.4  14.34±0.3
Diabetes
German
Glass
Heart
Ionosphere   9.76±0.7 ++ 9.14±0.3 ++ 8.57±0.8  13.38±0.8
Iris
Letter
Segment
Sonar        25.59±1.4 ++ 23.72±1.1 ++ 21.84±2.0  24.81±1.2
Vehicle
Votes        10.00±0.3
This is confirmed by the second line, which shows the geometric
mean. The third line shows the average rank of all base and cascading
algorithms, computed for each dataset by assigning rank 1 to the most accurate
algorithm, rank 2 to the second best, and so on. The remaining lines compare
a cascade algorithm against its top-level algorithm. The fourth line shows the
number of datasets in which the top-level algorithm was more accurate than the
corresponding cascade algorithm, versus the number in which it was less accurate. The fifth
line considers only those datasets where the error rate difference was significant at
the 1% level, using paired t-tests. The last line shows the p-values obtained by
applying the Wilcoxon matched-pairs signed-ranks test.
All statistics show that the most promising combinations use a decision tree as
high-level classifier and naive Bayes or Discrim as low-level classifiers. The new
attributes built by Discrim and naive Bayes express relations between attributes,
that are outside the scope of DNF algorithms like C4.5. These new attributes
systematically appear at the root of the composite models.
One of the main problems when combining classifiers is: Which algorithms should
we combine? The empirical evaluation suggests:
• Combine classifiers with different behavior from a bias-variance analysis.
Table 3. Results of Cascade Generalization. Composite models are compared against their components.

Dataset      C4.5∇C4.5  C4.5∇Dis  C4.5∇Bay  C4.5∇Disc∇Bay  C4.5∇Bay∇Disc  Stacked Gen.
Australian   14.74±0.5  13.99±0.9  15.41±0.8  14.24±0.5  15.34±0.9  13.99±0.4
Balance
Banding      23.77±1.7  21.73±2.5  22.75±1.8  21.48±2.0  22.18±1.5  21.45±1.2
Breast
Credit       14.21±0.6  13.85±0.4  15.07±0.7  14.84±0.4  13.75±0.6
Diabetes
German
Glass        32.02±2.4  36.09±1.8  33.60±1.6  34.68±1.8  35.11±2.5  31.28±1.9
Hepatitis
Ionosphere   10.21±1.3
Iris
Letter
Segment      3.21±0.2
Sonar        28.02±3.2  24.75±2.9  24.36±1.9  24.45±1.8  23.83±2.1  24.81±1.0
Votes
Table 4. Summary of results of Cascade Generalization.

Measure           Bayes   Bay∇Bay  Bay∇Dis  Bay∇C4   Disc    Disc∇Disc  Disc∇Bay  Disc∇C4
Arithmetic mean   17.62   17.29    16.42    15.29    17.80   17.72      16.39     17.94
Geometric mean    13.31   12.72    12.61    10.71    13.97   13.77      12.14     14.82
Average rank      9.67    9.46     6.63     7.52     9.06    8.77       7.23      10.29
Nr. of wins       −       14/12
Wilcoxon test
• At the low level, use algorithms with low variance.

• At the high level, use algorithms with low bias.

In the Cascade framework, lower level learners delay the final decision to the high level
learners. By selecting learners with low bias for the high level, we are able to fit more
complex decision surfaces, taking into account the "stable" surfaces drawn by the
low level learners.
Table 5. Summary of results of Cascade Generalization.

Measure           C4.5    C4.5∇C4.5  C4.5∇Bay  C4.5∇Dis  C4.5∇Dis∇Bay  C4.5∇Bay∇Dis
Arithmetic mean   15.98   15.98      13.44     14.19     13.09         13.27
Geometric mean    11.40   11.20      8.25      9.93      7.95          7.81
Average rank      9.83    9.04       7.85      6.17      6.46          6.69
Nr. of wins       −       7/15       4/19      11/15     8/18          8/18
Wilcoxon test
Table 6. Summary of the comparison against Stacked Generalization.

                   C4.5∇Disc∇Bay vs. Stacked G.   C4.5∇Bay∇Disc vs. Stacked G.
Number of wins     11
Significant wins   6 / 5                          6 / 4
Wilcoxon test
Given equal performance, we would prefer fewer component classifiers, since training
and application times will be lower for a smaller number of components. A larger
number of components also has adverse effects on comprehensibility. In our study
the versions with three components seemed to perform better than the versions with
two components. More research is needed to establish the limits of extending this
scenario.
5.4. Comparison with stacked generalization
We have compared various versions of Cascade Generalization to Stacked Gener-
alization, as defined in Ting (1997). In our re-implementation of Stacked Generalization
the level 0 classifiers were C4.5 and Bayes, and the level 1 classifier was
Discrim. The attributes for the level 1 data are the probability class distributions,
obtained from the level 0 classifiers using a 5-fold stratified cross-validation 2 . Table
3 shows, in the last column, the results of Stacked Generalization. Stacked Generalization
is compared, using paired t-tests, to C4.5∇Discrim∇naive Bayes and
C4.5∇naive Bayes∇Discrim, in this order. The +(−) sign indicates that for this
dataset the Cascade model performs significantly better (worse). Table 6 presents
a summary of the results. They provide evidence that the generalization ability of
Cascade Generalization models is competitive with Stacked Generalization, which
computes the level 1 attributes using internal cross-validation. The use of internal
cross-validation of course affects the learning times: both Cascade models are at
least three times faster than Stacked Generalization.
Cascade Generalization exhibits good generalization ability and is computationally
efficient. Both aspects lead to the hypothesis: can we improve Cascade Generalization
by applying it at each iteration of a divide-and-conquer algorithm? This
hypothesis is examined in the next section.
6. Local cascade generalization
Many classification algorithms use a divide-and-conquer strategy that resolves a
given complex problem by dividing it into simpler problems and then applying
recursively the same strategy to the subproblems. Solutions of subproblems are
combined to yield a solution of the original complex problem. This is the basic
idea behind the well known decision tree based algorithms: ID3 (Quinlan, 1984),
CART (Breiman et al., 1984), ASSISTANT (Kononenko et al., 1987), and C4.5 (Quinlan,
1993). The power of this approach derives from the ability to split the hyperspace
into subspaces and to fit each subspace with a different function. In this section
we explore Cascade Generalization on the problems and subproblems that a divide-and-conquer
algorithm generates. The intuition behind the proposed method is
the same as that behind any divide-and-conquer strategy: the relations that cannot be
captured at the global level may be discovered in the simpler subproblems.
In the following sections we present in detail how to apply Cascade Generalization
locally. We will only develop this strategy for decision trees, although it should be
possible to use it in conjunction with any divide and conquer method, like decision
lists (Rivest, 1987).
6.1. The local cascade generalization algorithm
Local Cascade Generalization is a composition of classification algorithms that is
elaborated when building the classifier for a given task. In each iteration of a
divide-and-conquer algorithm, Local Cascade Generalization extends the dataset
by the insertion of new attributes. These new attributes are propagated down to
the subtasks. In this paper we restrict the use of Local Cascade Generalization
to decision tree based algorithms. However, it should be possible to use it with
any divide-and-conquer algorithm. Figure 2 presents the general algorithm of Local
Cascade Generalization, restricted to a decision tree. The method will be referred
to as CGTree.
When growing the tree, new attributes are computed at each decision node by
applying the Φ operator. The new attributes are propagated down the tree. The
number of new attributes is equal to the number of classes appearing in the examples
at this node. This number can vary at different levels of the tree. In general deeper
nodes may contain a larger number of attributes than the parent nodes. This could
be a disadvantage. However, the number of new attributes that can be generated
decreases rapidly. As the tree grows and the classes are discriminated, deeper nodes
also contain examples with a decreasing number of classes. This means that as the
tree grows the number of new attributes decreases.
In order to be applied as a predictor, any CGTree must store, in each node, the
model generated by the base classifier using the examples that fall at that node. When
classifying a new example, the example traverses the tree in the usual way, but at
each decision node it is extended by the insertion of the probability class distribution
provided by the base classifier's model stored at that node.

Input: A data set D, a classifier ℑ
Output: A decision tree
Function CGtree(D, ℑ)
  If the stopping criterion is satisfied
    return a leaf with the class probability distribution of D
  Else
    D′ = Φ(D, A(ℑ(D), D))
    Choose the attribute A_i that maximizes the splitting criterion on D′
    For each partition D′_i of examples based on the values of attribute A_i
      generate a subtree: Tree_i = CGtree(D′_i, ℑ)
    return a Tree containing a decision node based on attribute A_i,
      storing ℑ(D) and the descendant subtrees Tree_i
  EndIf
End

Figure 2. Local Cascade algorithm based on a decision tree.
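As a rough Python illustration of this recursive scheme (not the authors' implementation: scikit-learn's GaussianNB stands in for the discretized naive Bayes, an information-gain split stands in for the gain ratio, and simple depth/size tests stand in for the stopping and pruning rules discussed below):

    import numpy as np
    from sklearn.base import clone
    from sklearn.naive_bayes import GaussianNB

    def entropy(y):
        _, counts = np.unique(y, return_counts=True)
        p = counts / counts.sum()
        return -np.sum(p * np.log2(p))

    def best_binary_split(X, y):
        """Exhaustive search for the (attribute, threshold) pair with the highest information gain."""
        best_attr, best_thr, best_gain = None, None, 0.0
        base_entropy = entropy(y)
        for j in range(X.shape[1]):
            for t in np.unique(X[:, j])[:-1]:
                left = X[:, j] <= t
                gain = base_entropy - (left.mean() * entropy(y[left]) +
                                       (~left).mean() * entropy(y[~left]))
                if gain > best_gain:
                    best_attr, best_thr, best_gain = j, t, gain
        return None if best_attr is None else (best_attr, best_thr)

    class CGNode:
        def __init__(self, model=None, attr=None, thr=None, children=None, dist=None):
            self.model, self.attr, self.thr, self.children, self.dist = model, attr, thr, children, dist

    def cg_tree(X, y, base, depth=0, max_depth=5, min_size=10):
        """Grow a local-cascade tree: each internal node stores a base model and splits Phi(D, A(model, D))."""
        classes, counts = np.unique(y, return_counts=True)
        if depth >= max_depth or len(classes) == 1 or len(y) < min_size:
            return CGNode(dist=(classes, counts / counts.sum()))
        model = clone(base).fit(X, y)
        X_ext = np.hstack([X, model.predict_proba(X)])    # extend with the class probability attributes
        split = best_binary_split(X_ext, y)
        if split is None:
            return CGNode(dist=(classes, counts / counts.sum()))
        attr, thr = split
        left = X_ext[:, attr] <= thr
        return CGNode(model=model, attr=attr, thr=thr,
                      children=(cg_tree(X_ext[left], y[left], base, depth + 1, max_depth, min_size),
                                cg_tree(X_ext[~left], y[~left], base, depth + 1, max_depth, min_size)))

    def cg_classify(node, x):
        """Traverse the tree, extending the example at every decision node; return (classes, distribution)."""
        while node.dist is None:
            x = np.concatenate([x, node.model.predict_proba(x.reshape(1, -1))[0]])
            node = node.children[0] if x[node.attr] <= node.thr else node.children[1]
        return node.dist

    # Usage (CGBtree-like): root = cg_tree(X_train, y_train, base=GaussianNB())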
In the framework of Local Cascade Generalization we have developed CGLtree,
which uses the Φ(D, A(Discrim(D), D)) operator in the constructive step. Each
internal node of a CGLtree constructs a discriminant function. This discriminant
function is used to build new attributes. For each example, the value of a new
attribute is computed using the linear discriminant function. At each decision
node, the number of new attributes built by CGLtree is always equal to the number
of classes present in the examples at this node. In order to restrict attention to
well populated classes, we use the following heuristic: we only consider a class i if
the number of examples at this node belonging to class i is greater than N times
the number of attributes 3. By default N is 3. This implies that different numbers of
classes will be considered at different nodes, leading to the addition of different numbers
of new attributes. Another restriction on the use of the constructive operator
A(ℑ(D), D) is that the error rate of the resulting classifier should be less than 0.5
on the training data.
In our empirical study we have used two other algorithms that apply Cascade
Generalization locally. The first one is CGBtree, which uses Φ(D, A(Bayes(D), D)) as its
constructive operator, and the second one is CGBLtree, whose constructive operator
combines both base classifiers, applying the linear discriminant to the ordered attributes
and naive Bayes to the categorical attributes (see Section 7).
In all other aspects these algorithms are similar to CGLtree.
There is one restriction on the application of the Φ(D′, A(ℑ(D), D′)) operator: the
induced classifier ℑ(D) must return the corresponding probability class distribution
for each ~x ∈ D′. Any classifier that satisfies this requirement can be applied. It
is possible to imagine a CGTree whose internal nodes are trees themselves. For
example, small modifications to C4.5 4 enable the construction of a CGTree whose
internal nodes are trees generated by C4.5.
6.2. An illustrative example
[Figure 3. Tree generated by a CGTree using Discrim∇Bayes as the constructive operator.]
Figure 3 represents the tree generated by a CGTree on the Monks-2 problem. The
constructive operator used is Φ(D, Discrim∇Bayes(~x, D)). At the root of the tree
the naive Bayes algorithm provides two new attributes, Bayes_7 and Bayes_8. The
linear discriminant uses continuous attributes only; there are only two continuous
attributes, those built by naive Bayes. In this case, the coefficients of the
linear discriminant shrink to zero through the process of variable elimination used by the
discriminant algorithm. The gain ratio criterion chooses the Bayes_7 attribute as a
test. The dataset is split into two partitions. One of them contains only examples
from class OK: a leaf is generated. In the other partition two new Bayes attributes
are built (Bayes_11, Bayes_12), and a linear discriminant is generated based on
these two Bayes attributes and on those built at the root of the tree. The attribute
based on the linear discriminant is chosen as the test attribute for this node. The
dataset is segmented and the process of tree construction proceeds.
This example illustrates two points:

• The interaction between classifiers: the linear discriminant contains terms
built by naive Bayes. Whenever a new attribute is built, it is treated as a
regular attribute. Any attribute combination built at deeper nodes can contain
terms based on the attributes built at upper nodes.

• The re-use of attributes with different thresholds: the attribute Bayes_7, built at
the root, is used twice in the tree with different thresholds.
6.3. Relation to other work on multivariate trees

With respect to the final model, there are clear similarities between CGLtree and
the multivariate trees of Brodley and Utgoff. Langley observes that any multivariate tree
is topologically equivalent to a three-layer inference network. The constructive
ability of our system is similar to the Cascade Correlation learning architecture
of Fahlman and Lebiere (1991). Also, the final model of CGBtree is related to
the recursive naive Bayes presented by Langley. This is an interesting feature of
Local Cascade Generalization: it unifies in a single framework several systems from
different research areas. In our previous work (Gama & Brazdil, 1999) we have
compared the system Ltree, similar to CGLtree, with OC1 of Murthy et al., LMDT
of Brodley et al., and CART of Breiman et al. The focus of this paper is on
methodologies for combining classifiers. As such, we only compare our algorithms
against other methods that generate and combine multiple models.
7. Evaluation of local cascade generalization
In this section we evaluate three instances of Local Cascade algorithms: CGBtree,
CGLtree, and CGBLtree. We compare the local versions against their corresponding
global models, and against two standard methods for combining classifiers: Boosting
and Stacked Generalization. All the implemented Local Cascade Generalization algorithms
are based on Dtree. They use exactly the same splitting criterion, stopping
criterion, and pruning mechanism. Moreover, they share many minor heuristics that
individually are too small to mention but collectively can make a difference. At each
decision node, CGLtree applies the linear discriminant described above, while CGBtree
applies the naive Bayes algorithm. CGBLtree applies the linear discriminant
to the ordered attributes and naive Bayes to the categorical attributes. In order
to prevent overfitting, the construction of new attributes is constrained to a depth
of 5. In addition, the level of pruning is greater than the level of pruning in Dtree.
Table 7a presents the results of Local Cascade Generalization. Each column corresponds
to a Local Cascade Generalization algorithm. Each algorithm is compared
against the corresponding Cascade model using paired t-tests. For example, CGLtree is
compared against C4.5∇Discrim. A +(−) sign means that the error rate of the
composite model is, at statistically significant levels, lower (higher) than that of the
corresponding model. Table 8 presents a comparative summary of the results for
Local Cascade Generalization and the corresponding global models. It illustrates
the benefits of applying Cascade Generalization locally.
Table 7. Results of (a) Local Cascade Generalization, (b) Boosting and Stacked Generalization, and (c) Boosting
a Cascade algorithm. The second row indicates the models used in each comparison.

Dataset      CGBtree    CGLtree    CGBLtree   C5.0Boost  Stacked    C5Boost∇Bayes
             (vs. corresponding Cascade models)          (vs. CGBLtree)  (vs. C5.0Boost)
Adult        13.46±0.4  13.56±0.3  13.52±0.4 −  14.33±0.4  13.96±0.6  14.41±0.5
Australian
Balance      5.32±1.1
Banding      20.98±1.2  23.60±1.2  20.69±1.2
Credit       15.35±0.5  14.41±0.8  14.52±0.8  13.41±0.8  13.43±0.6  13.57±0.9
Diabetes
German
Glass
Ionosphere   9.62±0.9   11.06±0.6  11.00±0.7
Iris         4.73±1.3   2.80±0.4
Letter
Mushroom
Sonar        26.23±1.7
Vehicle
Votes        3.29±0.4   4.30±0.5
System CGBLtree is compared to C5.0 Boosting, a variance reduction method 5,
and to Stacked Generalization, a bias reduction method. Table 7b presents the
results of C5.0 Boosting with its default parameter of 10, that is, aggregating over
10 trees, and of Stacked Generalization as defined in Ting (1997) and described
in an earlier section. Both Boosting and Stacked Generalization are compared against CGBLtree
using paired t-tests with the significance level set to 99.9%. A +(−) sign means
that Boosting or Stacked Generalization performs significantly better (worse) than CGBLtree. In
this study, CGBLtree performs significantly better than Stacked Generalization in 6 datasets and
worse in 2 datasets.
7.1. A step ahead
Compared with C5.0 Boosting, CGBLtree significantly improves in 10 datasets and
loses in 9 datasets. It is interesting to note that over the 26 datasets there are 19
significant differences. This is evidence that Boosting and Cascade have different
behaviors.
Table 8. Summary of results of Local Cascade Generalization.

                   CGBtree  CGLtree  CGBLtree  C5.0Boost  Stacked G.  C5Boost∇Bayes
Arithmetic mean    13.43    13.98    12.92     13.25      13.87       11.63
Geometric mean     8.70     9.46     8.20      8.81       10.13       6.08
Average rank       3.90     3.92     3.29      3.27       3.50        3.12

                   C4.5∇Bay   C4.5∇Dis   C4∇Bay∇Dis  CGBLtree    CGBLtree
                   vs         vs         vs          vs          vs
                   CGBtree    CGLtree    CGBLtree    C5.0Boost   Stacked G.
Number of wins     10-16      12-14      7-19        13-13       15-11
Significant wins   3-3        3-5        3-8         10-9        6-2
Wilcoxon test
The improvement observed with Boosting, when applied to a decision tree, is
mainly due to the reduction of the variance component of the error rate, while
with Cascade algorithms the improvement is mainly due to the reduction
of the bias component. Table 7c presents the results of Boosting a Cascade algorithm.
In this case we have used the global combination C5.0Boost∇naive Bayes.
It improves over C5.0 Boosting on 4 datasets and loses on 3. The summary of the
results presented in Table 8 provides evidence of a promising result, and we intend, in the near
future, to boost CGBLtree.
7.2. Number of leaves
Another dimension for comparison involves measuring the number of leaves. This
corresponds to the number of different regions into which the instance space is
partitioned by the algorithm. Consequently, it can be seen as an indicator of
model complexity. In almost all datasets 6, any Cascade tree splits the instance
space into half of the regions needed by Dtree or C5.0. This is a clear indication
that Cascade models better capture the underlying structure of the data.
7.3. Learning times
Learning time is the other dimension for comparing classifiers. Here comparisons
are less clear, as results may strongly depend on implementation details as well as
on the underlying hardware. However, at least the order of magnitude of the time
complexity is a useful indicator.
C5.0 and C5.0 Boosting were run on a Sparc 10 machine 7. All the other algorithms
were run on a Pentium 166 MHz, 32 MB machine under Linux. Table 9 presents the
average time needed by each algorithm to run on all datasets, taking the time of
naive Bayes as reference. Our results indicate that any CGTree is faster than
C5.0 Boosting. C5.0 Boosting is slower because it generates 10 trees of increased
complexity. Also, any CGTree is faster than Stacked Generalization; this is due to
the internal cross-validation used in Stacked Generalization.
Table 9. Relative learning times of base and composite models (naive Bayes = 1).

Bayes  Discrim  C4.5  Bay∇Dis  Dis∇Dis  C5.0  Dtree  Bay∇Bay  Dis∇Bay  Bay∇C4  Dis∇C4

C4∇Dis  C4∇C4  C4∇Bay  CGBtree  C4∇Dis∇Bay  CGLtree  CGBLtree  C5.0Boost  Stacked
4.1     4.55   4.81    6.70     6.85        7.72     11.08     15.16      15.29
8. Why does cascade generalization improve performance?
Both Cascade Generalization and Local Cascade Generalization transform the instance
space into a new, higher-dimensional space. In principle this could turn the
given learning problem into a more difficult one, a phenomenon known as
the curse of dimensionality. In this section we analyze the behavior of Cascade
Generalization along three dimensions: error correlation, bias-variance
analysis, and Mahalanobis distances.
8.1. Error correlation
Ali and Pazzani (1996) have shown that a desirable property of an ensemble of
classifiers is diversity. They use the concept of error correlation as a metric to
measure the degree of diversity in an ensemble. They define the error correlation
between two classifiers as the probability that both make the same error.
Because this definition does not satisfy the property that the correlation between
an object and itself should be 1, we prefer to define the error correlation between
two classifiers as the conditional probability that the two classifiers make the same
error given that at least one of them makes an error. This definition of error correlation
lies in the interval [0, 1], and the correlation between a classifier and itself is 1.
The formula that we use yields higher values than the one used by Ali and
Pazzani. As expected, the lowest degrees of correlation (Table 10) are between the decision
tree and Bayes and between the decision tree and Discrim: they use very different
representation languages. The error correlation between Bayes and Discrim is a
little higher; despite the similarity of the two algorithms, they use very different
search strategies.
These results provide evidence that the decision tree and either discriminant function
make uncorrelated errors, that is, each classifier makes errors in different regions of
the instance space. This is a desirable property for combining classifiers.
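Given the predictions of two classifiers on a common test set, this error-correlation measure can be computed as in the snippet below (our own phrasing of the definition above; the function name is hypothetical):

    import numpy as np

    def error_correlation(pred_a, pred_b, y_true):
        """P(both classifiers err on the same example | at least one of them errs)."""
        err_a = pred_a != y_true
        err_b = pred_b != y_true
        both_err = err_a & err_b
        either_errs = err_a | err_b
        return both_err.sum() / max(either_errs.sum(), 1)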
Table 10. Error correlation between the base classifiers.

           C4.5 vs. Bayes   C4.5 vs. Discrim   Bayes vs. Discrim
Average    0.32             0.32               0.40
8.2. Bias-variance decomposition
The bias-variance decomposition of the error is a tool from statistical theory for
analyzing the error of supervised learning algorithms.
The basic idea consists of decomposing the expected error into three components:

E(error) = Σ_x P(x) (σ²_x + bias²_x + variance_x)

where σ²_x is the intrinsic noise. To compute the bias and variance terms for zero-one
loss functions we use the decomposition proposed by Kohavi and Wolpert (1996). The
bias measures how closely the average guess of the learning algorithm matches the
target. It is computed as:

bias²_x = 1/2 Σ_{y∈Y} [ P(Y_F = y | x) − P(Y_H = y | x) ]²

where Y_F refers to the target and Y_H to the learner's prediction. The variance
measures how much the learning algorithm's guess "bounces around" for different
training sets of the given size. It is computed as:

variance_x = 1/2 [ 1 − Σ_{y∈Y} P(Y_H = y | x)² ]

To estimate the bias and variance, we first split the data into training and test
sets. From the training set we obtain ten bootstrap replications, used to build
ten classifiers. We ran the learning algorithm on each of the training sets and
estimated the bias 8 and variance terms using the generated classifiers for each point
x in the evaluation set E. All the terms were estimated using frequency counts.
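A sketch of this estimation procedure (bootstrap replicates of the training set, frequency-count estimates of the Kohavi-Wolpert terms on a held-out evaluation set) is given below; it is our reconstruction of the procedure as described, not the authors' code, and it folds the intrinsic noise into the bias term as in note 8:

    import numpy as np
    from sklearn.base import clone
    from sklearn.utils import resample

    def kohavi_wolpert_bias_variance(clf, X_train, y_train, X_eval, y_eval, n_boot=10, seed=0):
        classes = np.unique(np.concatenate([y_train, y_eval]))
        votes = np.zeros((len(X_eval), len(classes)))
        for b in range(n_boot):
            Xb, yb = resample(X_train, y_train, random_state=seed + b)   # bootstrap replicate
            pred = clone(clf).fit(Xb, yb).predict(X_eval)
            votes += (pred[:, None] == classes[None, :])
        p_hat = votes / n_boot                                           # P(Y_H = y | x) by frequency
        p_true = (y_eval[:, None] == classes[None, :]).astype(float)     # noise folded into bias
        bias2 = 0.5 * np.sum((p_true - p_hat) ** 2, axis=1)
        variance = 0.5 * (1.0 - np.sum(p_hat ** 2, axis=1))
        return bias2.mean(), variance.mean()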
The base algorithms used in the experimental evaluation have different behavior
under a Bias-Variance analysis. A decision tree is known to have low bias but high
variance, and naive Bayes and linear discriminant are known to have low variance
but high bias.
Our experimental evaluation has shown that the most promising combinations
use a decision tree as the high level classifier and naive Bayes or the linear discriminant
as low level classifiers. To illustrate these results, we measured the bias and the
variance of C4.5, naive Bayes, and C4.5∇naive Bayes on the datasets under study.
These results are shown in Figure 4. A summary of the results is presented in Table 11.

[Figure 4. Bias-variance decomposition of the error rate for C4.5, Bayes, and C4.5∇Bayes on the different datasets.]

Table 11. Bias-variance decomposition of the error rate (averages over the datasets).

                    C4.5    Bayes   C4.5∇Bayes
Average variance    4.8     1.59    4.72
Average bias        11.53   15.19   8.64

The benefits of the Cascade composition are well illustrated in datasets like
Balance-scale, Hepatitis, Monks-2, Waveform, and Satimage. A comparison between
Bayes and C4.5∇Bayes shows that the latter combination obtains a strong reduction
of the bias component at the cost of an increase in the variance component. C4.5∇Bayes
reduces both bias and variance when compared to C4.5. The reduction of the error
is mainly due to the reduction of bias.
8.3. Mahalanobis distance
Consider that each class defines a single cluster 9 in a Euclidean space. For each
class i, the centroid of the corresponding cluster is defined as the vector of attribute
means x̄_i, which is computed from the examples of that class. The shape of the
cluster is given by the covariance matrix S_i.
Using the Mahalanobis metric we can define two distances:

1. The within-class distance. It is defined as the Mahalanobis distance between an
example and the centroid of its cluster. It is computed as:

d_i(~x) = (~x − x̄_i)ᵀ S_i⁻¹ (~x − x̄_i)

where ~x represents the example attribute vector, x̄_i denotes the centroid of the
cluster corresponding to class i, and S_i is the covariance matrix for class i.

2. The between-classes distance. It is defined as the Mahalanobis distance between
two clusters. It is computed as:

d(i, j) = (x̄_i − x̄_j)ᵀ S_pooled⁻¹ (x̄_i − x̄_j)

where x̄_i denotes the centroid of the cluster corresponding to class i, and S_pooled
is the pooled covariance matrix computed from S_i and S_j.

The intuition behind the within-class distance is that smaller values lead to more
compact clusters. The intuition behind the between-classes distance is that larger
values lead us to believe that the groups are sufficiently spread out in terms of the
separation of their means.
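Both distances are straightforward to compute; a numpy sketch is given below (our own illustration, using pseudo-inverses so that the expressions remain defined for ill-conditioned covariance estimates):

    import numpy as np

    def within_class_distance(x, X_class):
        """Mahalanobis distance between an example and the centroid of its class cluster."""
        mu = X_class.mean(axis=0)
        S_inv = np.linalg.pinv(np.cov(X_class, rowvar=False))
        d = x - mu
        return float(d @ S_inv @ d)

    def between_class_distance(X_i, X_j):
        """Mahalanobis distance between two class centroids, using the pooled covariance."""
        mu_i, mu_j = X_i.mean(axis=0), X_j.mean(axis=0)
        n_i, n_j = len(X_i), len(X_j)
        S_pooled = ((n_i - 1) * np.cov(X_i, rowvar=False) +
                    (n_j - 1) * np.cov(X_j, rowvar=False)) / (n_i + n_j - 2)
        d = mu_i - mu_j
        return float(d @ np.linalg.pinv(S_pooled) @ d)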
We have measured the between-classes distance and the within-class distance for
the datasets with all-numeric attributes. Both distances have been measured on
the original dataset and on the dataset extended using a Cascade algorithm. We
observe that while the within-class distance remains almost constant, the between-classes
distance increases. For example, when using the constructive operator
Discrim∇Bayes the between-classes distance almost doubles. Figure 5 shows the
average increase of the between-classes distance, with respect to the original dataset,
after extending it using Discrim, Bayes, and Discrim∇Bayes, respectively.

[Figure 5. Average increase of the between-classes distance.]
9. Conclusions and future work
This paper provides a new and general method for combining learning models by
means of constructive induction. The basic idea of the method is to use the learning
algorithms in sequence. At each iteration a two step process occurs. First a model is
built using a base classifier. Second, the instance space is extended by the insertion
of new attributes. These are generated by the built model for each given example.
The constructive step generates terms in the representational language of the base
classifier. If the high level classifier chooses one of these terms, its representational
power has been extended. The bias restrictions of the high level classifier are relaxed
by incorporating terms of the representational language of the base classifiers. This
is the basic idea behind the Cascade Generalization architecture.
We have examined two different schemes of combining classifiers. The first one
provides a loose coupling of classifiers while the second one couples classifiers tightly:
1. Loose coupling: base classifier(s) pre-process data for another stage
This framework can be used to combine most of the existing classifiers without
changes, or with rather small changes. The method only requires that the
original data is extended by the insertion of the probability class distribution
that must be generated by the base classifier.
2. Tight coupling through local constructive induction
In this framework two or more classifiers are coupled locally. Although in this
work we have used only Local Cascade Generalization in conjunction with decision
trees the method could be easily extended to other divide-and-conquer
systems, such as decision lists.
Most of the existing methods that combine learned models, such as Bagging and Boosting,
use a voting strategy to determine the final outcome. Although this leads
to improvements in accuracy, it has a strong limitation: loss of interpretability. Our
models are easier to interpret, particularly if the classifiers are loosely coupled. The final
model uses the representational language of the high level classifier, possibly
enriched with expressions in the representational language of the low level classi-
fiers. When Cascade Generalization is applied locally, the models generated are
more difficult to interpret than those generated by loosely coupled classifiers. The
new attributes built at deeper nodes contain terms based on the previously built
attributes. This allows us to build very complex decision surfaces, but it somewhat
affects the interpretability of the final model. Using more powerful representations
does not necessarily lead to better results. Introducing more flexibility can
lead to increased instability (variance), which needs to be controlled. In Local Cascade
Generalization this is achieved by limiting the depth of applicability of the
operator should be less than 0.5. One interesting feature of local Cascade
Generalization is that it provides a single framework, for a collection of different
methods. Our method can be related to several paradigms of machine learning.
For example there are similarities with multivariate trees (Brodley & Utgoff, 1995),
neural networks (Fahlman & Lebiere, 1990), recursive Bayes (Langley, 1993), and
multiple models, namely Stacked Generalization (Wolpert, 1992). In our previous
work (Gama & Brazdil, 1999) we have presented system Ltree that combines a decision
tree with a discriminant function by means of constructive induction. Local
Cascade combinations extend this work. In Ltree the constructive operator was a
single discriminant function. In Local Cascade composition this restriction was
relaxed: we can use any classifier as the constructive operator. Moreover, a composition
of several classifiers, as in CGBLtree, can be used.
The unified framework is useful because it overcomes some superficial distinctions
and enables us to study more fundamental ones. From a practical perspective
the user's task is simplified, because the aim of achieving better accuracy can be
met with a single algorithm instead of several. This is done efficiently,
leading to reduced learning times.
We have shown that this methodology can improve the accuracy of the base
classifiers, competing well with other methods for combining classifiers, preserving
the ability to provide a single, albeit structured model for the data.
9.1. Limitations and future work
Some open issues, which could be explored in future, involve:
• From the perspective of bias-variance analysis, the main effect of the proposed
methodology is a reduction of the bias component. It should be possible to
combine the Cascade architecture with a variance reduction method, like Bagging
or Boosting.

• Will Cascade Generalization work with other classifiers? Could we use neural
networks or nearest neighbors? We think that the methodology presented will
work for these types of classifiers. We intend to verify this empirically in the future.
Other problems that involve basic research include:
• Why does Cascade Generalization improve performance?
Our experimental study suggests that we should combine algorithms with complementary
behavior from the point of view of a bias-variance analysis. Other
forms of complementarity can be considered, for example the search bias. One
interesting issue to be explored is: given a dataset, can we predict which
algorithms are complementary?
• When does Cascade Generalization improve performance?
In some datasets Cascade was not able to improve the performance of the base
classifiers. Can we characterize these datasets? That is, can we predict under
what circumstances Cascade Generalization will lead to an improvement in
performance?
• How many base classifiers should we use?
The general preference is for a smaller number of base classifiers. Under what
circumstances can we reduce the number of base classifiers without affecting
performance?
• The Cascade Generalization architecture provides a method for designing algorithms
that use multiple representations and multiple search strategies within
the induction algorithm. An interesting line of future research is to explore
flexible inductive strategies using several diverse representations. It should be
possible to extend Local Cascade Generalization to provide such dynamic control,
which would be a step in this direction.
Acknowledgments
Gratitude is expressed for the financial support given by FEDER and PRAXIS
XXI, project ECO, the Plurianual support attributed to LIACC, and the Esprit LTR
project. Thanks also to Pedro Domingos, the anonymous reviewers, and
my colleagues at LIACC for their valuable comments.
Notes
1. Except in the case of Adult and Letter datasets, where a single 10-fold cross-validation was
used.
2. We have also evaluated Stacked Generalization using C4.5 at top level. The version that we
have used is somewhat better. Using C4.5 at top level the average mean of the error rate is
15.14.
3. This heuristic was suggested by Breiman et al. (1984).
4. Two different methods are presented in Ting (1997) and Gama (1998).
5. We have preferred C5.0 Boosting (instead of Bagging) because it is available to us and allows
cross-checking of the results. There are some differences between our results and those previously
published by Quinlan; we think that this may be due to the different methods used to estimate
the error rate.
6. Except on Monks-2 dataset, where both Dtree and C5.0 produce a tree with only one leaf.
7. The running times of C5.0 and C5.0 Boosting were reduced by a factor of 2, as suggested by
www.spec.org.
8. The intrinsic noise in the training dataset will be included in the bias term.
9. This analysis assumes that there is a single dominant class for each cluster. Although this may
not always be satisfied, it can give insights about the behavior of Cascade composition.
References
reduction through learning multiple descriptions.
An empirical comparison of voting classification algorithms: Bagging
UCI repository of Machine Learning databases.
Arcing classifiers.
Classification and Regression Trees.
Wadsworth International Group.
Recursive automatic bias selection for classifier construction.
Multivariate decision trees.
Multivariate Analysis
On the optimality of the simple Bayesian classifier under zero-one loss
Supervised and unsupervised discretization of continuous features.
The recurrent cascade-correlation architecture
Experiments with a new boosting algorithm.
Combining classifiers with constructive induction.
Linear tree.
Combining classification procedures.
Induction of recursive bayesian classifiers.
Elements of Machine Learning.
Machine Learning
Machine Learning.
A system for induction of oblique decision trees.
Journal of Artificial Intelligence Research.
Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference.
Learning decision lists.
Selecting a classification method by cross-validation
Prototype Selection For Composite Nearest Neighbor Classifiers.
Stacked generalization: when does it work?
Error correlation and error reduction in ensemble classifiers. Connection Science.
Stacked generalization.
S. B. Kotsiantis , I. D. Zaharakis , P. E. Pintelas, Machine learning: a review of classification and combining techniques, Artificial Intelligence Review, v.26 n.3, p.159-190, November 2006 | combining classifiers;multiple models;merging classifiers;constructive induction |
369882 | Markov Processes on Curves. | We study the classification problem that arises when two variablesone continuous (x), one discrete (s)evolve jointly in time. We suppose that the vector x traces out a smooth multidimensional curve, to each point of which the variable s attaches a discrete label. The trace of s thus partitions the curve into different segments whose boundaries occur where s changes value. We consider how to learn the mapping between the trace of x and the trace of s from examples of segmented curves. Our approach is to model the conditional random process that generates segments of constant s along the curve of x. We suppose that the variable s evolves stochastically as a function of the arc length traversed by x. Since arc length does not depend on the rate at which a curve is traversed, this gives rise to a family of Markov processes whose predictions are invariant to nonlinear warpings (or reparameterizations) of time. We show how to estimate the parameters of these modelsknown as Markov processes on curves (MPCs)from labeled and unlabeled data. We then apply these models to two problems in automatic speech recognition, where x are acoustic feature trajectories and s are phonetic alignments. | Introduction
The automatic segmentation of continuous trajectories poses a challenging
problem in machine learning. The problem arises whenever a multidimensional
trajectory {x(t) | t ∈ [0, τ]} must be mapped into a sequence
of discrete labels s_1, s_2, ..., s_n. A segmentation performs this mapping by
specifying consecutive time intervals such that s(t) = s_k for t ∈ [t_{k−1}, t_k]
and attaching the labels s_k to contiguous arcs along the trajectory. The
learning problem is to discover such a mapping from labeled or unlabeled
examples.
In this paper, we study this problem, paying special attention to the
fact that curves have intrinsic geometric properties that do not depend on
the rate at which they are traversed (do Carmo, 1976). Such properties
include, for example, the total arc length and the maximum distance
between any two points on the curve. Given a multidimensional trajectory
x(t), these properties are invariant to reparameterizations x(t) → x(f(t)),
where f(t) is any smooth monotonic function of time that maps the interval [0, τ]
into itself. Put another way, the intrinsic geometric properties of the curve
are invariant to nonlinear warpings of time.
The study of curves requires some simple notions from differential
geometry. As a matter of terminology, we refer to particular parameterizations
of curves as trajectories. We regard two trajectories x1(t) and
x2 (t) as equivalent to the same curve if there exists a monotonically increasing
function f for which x1 be precise, we mean
the same oriented curve: the direction of traversal matters.) Here, as in
what follows, we adopt the convention of using x(t) to denote an entire
trajectory as opposed to constantly writing out fx(t)jt 2 [0; ]g. Where
necessary to refer to the value of x(t) at a particular moment in time, we
use a different index, such as x(t1 ).
Let us now return to the problem of automatic segmentation. Consider
two variables-one continuous (x), one discrete (s)-that evolve jointly in
time. We assume that the vector x traces out a smooth multidimensional
curve, to each point of which the variable s attaches a discrete label. Note
that the trace of s yields a partition of the curve into different components;
in particular, the boundaries of these components occur at the points
where s changes value. We refer to such partitions as segmentations and
to the regions of constant s as segments; see figure 1.
Our goal in this paper is to learn a probabilistic mapping between trajectories
x(t) and segmentations s(t) from labeled or unlabeled examples.
Consider the random process that generates segments of constant s along
the curve traced out by x. Given a trajectory x(t), let Pr[s(t) j x(t)] denote
the conditional probability distribution over possible segmentations.
Suppose that for any two equivalent trajectories x(t) and x(f(t)), we have
the identity:
Eq. (1) captures a fundamental invariance-namely, that the probability
that the curve is segmented in a particular way is independent of the
rate at which it is traversed. In this paper, we study Markov processes
with this property. We call them Markov processes on curves (MPCs)
because for these processes it is unambiguous to write Pr[s j x] without
providing explicit parameterizations for the trajectories, x(t) or s(t). The
distinguishing feature of MPCs is that the variable s evolves as a function
of the arc length traversed along x, a quantity that is manifestly invariant
to nonlinear warpings of time.
Invariances and symmetries play an important role in statistical pattern
recognition because they encode prior knowledge about the problem
domain (Duda & Hart, 1973). Most work on invariances has focused on
spatial symmetries in vision. In optical character recognition, for example,
researchers have improved the accuracy of automatically trained classifiers
by incorporating invariances to translations, rotations, and changes of
scale (Simard, LeCun, & Denker, 1993). This paper focuses on an invariance
associated with pattern recognition in dynamical systems. Invariance
to nonlinear warpings of time arises naturally in problems involving the
segmentation of continuous trajectories. For example, in pen-based hand-writing
recognition, this invariance captures the notion that the shape of
start
t=t
Figure
1: Two variables-one continuous (x), one discrete (s)-evolve jointly
in time. The trace of s partitions the curve of x into different segments whose
boundaries occur where s changes value. Markov processes on curves model the
conditional distribution, Pr[sjx].
a letter does not depend on the rate at which it is penned. Likewise, in
automatic speech recognition (Rabiner & Juang, 1993), this invariance
can be used to model the effects of speaking rate. Thus, in addition to
being mathematically interesting in its own right, the principled handling
of this invariance has important consequences for real-world applications
of machine learning.
The main contributions of this paper are: (i) to postulate eq. (1) as
a fundamental invariance of random processes; (ii) to introduce MPCs
as a family of probabilistic models that capture this invariance; (iii) to
derive learning algorithms for MPCs based on the principle of maximum
likelihood estimation; and (iv) to compare the performance of MPCs for
automatic speech recognition versus that of hidden Markov models (Ra-
biner & Juang, 1993). In terms of previous work, our motivation most
closely resembles that of Tishby (1990), who several years ago proposed
a dynamical system approach to speech processing.
The organization of this paper is as follows. In section 2, we begin
by reviewing some basic concepts from differential geometry. We then
introduce MPCs as a family of continuous-time Markov processes that
parameterize the conditional probability distribution, Pr[s j x]. The processes
are derived from a set of differential equations that describe the
pointwise evolution of s along the curve traced out by x.
In section 3, we consider how to learn the parameters of MPCs in
both supervised and unsupervised settings. These settings correspond to
whether the learner has access to labeled or unlabeled examples. Labeled
examples consist of trajectories x(t), along with their corresponding segmentations
The ordered pairs in eq. (2) indicate that s(t) takes the value sk between
times the start and end states are used to mark endpoints.
Unlabeled examples consist only of the trajectories x(t) and the boundary
values:
f(start;
Eq. (3) specifies only that the Markov process starts at time
terminates at some later time . In this case, the learner must infer its own
target values for s(t) in order to update its parameter estimates. We view
both types of learning as instances of maximum likelihood estimation and
describe an EM algorithm for the more general case of unlabeled examples.
In section 4, we describe some simple extensions of MPCs that significantly
increase their modeling power. We also compare MPCs to other
probabilistic models of trajectory segmentation, such as hidden Markov
models. We argue that MPCs are distinguished by two special proper-
ties: the natural handling of invariance to nonlinear warpings of time,
and the emphasis on learning a segmentation model Pr[sjx], as opposed
to a synthesis model Pr[xjs].
Finally, in section 5, we apply MPCs to the problem of automatic
speech recognition. In this setting, we identify the curves x with acoustic
feature trajectories and the segmentations s with phonetic alignments.
We present experimental results on two tasks-recognizing New Jersey
town names and connected alpha-digits. On these tasks, we find that
MPCs generally match or exceed the performance of comparably trained
hidden Markov models. We conclude in section 6 by posing several open
questions for future research.
Markov processes on curves
Markov processes on curves are based fundamentally on the notion of
arc length. After reviewing how to compute arc lengths along curves, we
show how they can be used to define random processes that capture the
invariance of eq. (1).
2.1 Arc length
Let g(x) define a D \Theta D matrix for each point x 2 R D ; in other words,
to each point x, we associate a particular D \Theta D matrix g(x). If g(x)
is non-negative definite for all x, then we can use it as a metric to compute
distances along curves. In particular, consider two nearby points
separated by the infinitesimal vector dx. We define the squared distance
between these two points as:
Arc length along a curve is the non-decreasing function computed by
integrating these local distances. Thus, for the trajectory x(t), the arc
length between the points x(t1) and x(t2) is given by:
dt
\Theta
where
dt [x(t)] denotes the time derivative of x. Note that the arc
length between two points is invariant under reparameterizations of the
smooth monotonic function
of time that maps the interval
In the special case where g(x) is the identity matrix for all x, eq. (5)
reduces to the standard definition of arc length in Euclidean space. More
generally, however, eq. (4) defines a non-Euclidean metric for computing
arc lengths. Thus, for example, if the metric g(x) varies as a function of
x, then eq. (5) can assign different arc lengths to the trajectories x(t) and
is a constant displacement.
2.2 States and lifelengths
The problem of segmentation is to map a trajectory x(t) into a sequence
of discrete labels s1 these labels are attached to contiguous arcs
along the curve of x, then we can describe this sequence by a piecewise
constant function of time, s(t), as in figure 1. We refer to the possible
values of s as states. In what follows, we introduce a family of random
processes that evolve s as a function of the arc length traversed along the
curve traced out by x. These random processes are based on a simple
premise-namely, that the probability of remaining in a particular state
decays exponentially with the cumulative arc length traversed in that state.
The signature of a state is the particular way in which it computes arc
length.
To formalize this idea, we associate with each state i the following
quantities: (i) a metric g i (x) that can be used to compute arc lengths,
as in eq. (5); (ii) a decay parameter i that measures the probability
per unit arc length that s makes a transition from state i to some other
state; and (iii) a set of transition probabilities a ij , where a ij represents the
probability that-having decayed out of state i-the variable s makes a
transition to state j. Thus, a ij defines a stochastic transition matrix with
zero elements along the diagonal and rows that sum to one: a
1. Note that all these quantities-the metric g i (x), the decay
parameter i , and the transition probabilities a ij -depend explicitly on
the state i with which they are associated.
Together, these quantities can be used define a Markov process along
the curve traced out by x. In particular, let p i (t) denote the probability
that s is in state i at time t, based on its history up to that point in time.
A Markov process is defined by the set of differential equations:
dt
\Theta
\Theta
The right hand side of eq. (6) consists of two competing terms. The first
term computes the probability that s decays out of state i; the second
computes the probability that s decays into state i. Both probabilities
are proportional to measures of arc length, and combining them gives the
overall change in probability that occurs in the time interval [t; t
The process is Markovian because the evolution of p i depends only on
quantities available at time t; thus the future is independent of the past
given the present.
Eq. (6) has certain properties of interest. First, note that summing
both sides over i gives the identity
This shows that p i
remains a normalized probability distribution: i.e.,
times.
Second, suppose that we start in state i and do not allow return visits: i.e.,
j. In this case, the second term of eq. (6) van-
ishes, and we obtain a simple, one-dimensional linear differential equation
for It follows that the probability of remaining in state i decays exponentially
with the amount of arc length traversed by x, where arc length
is computed using the matrix g i (x). The decay parameter, i , controls
the typical amount of arc length traversed in state i; it may be viewed
as an inverse lifetime or-to be more precise-an inverse lifelength. Fi-
nally, noting that arc length is a reparameterization-invariant quantity, we
therefore observe that these dynamics capture the fundamental invariance
of eq. (1).
2.3 Inference
Let a0i denote the probability that the variable s makes an immediate
transition from the start state-denoted by the zero index-to state i;
put another way, this is the probability that the first segment belongs to
state i. Given a trajectory x(t), the Markov process in eq. (6) gives rise
to a conditional probability distribution over possible segmentations, s(t).
Consider the segmentation in which s(t) takes the value sk between times
and tk , and let
dt
\Theta
denote the arc length traversed in state sk . From eq. (6), we know that the
probability of remaining in a particular state decays exponentially with
this arc length. Thus, the conditional probability of this segmentation is
given by:
Y
Y
as k s k+1
where we have used s0 and sn+1 to denote the start and end states of the
Markov process. The first product in eq. (8) multiplies the probabilities
that each segment traverses exactly its observed arc length. The second
product multiplies the probabilities for transitions between states sk and
sk+1 . The leading factors of s k
are included to normalize each state's
duration model.
There are many important quantities that can be computed from the
distribution, Pr[sjx]. Of particular interest is the most probable segmentation
Given a particular trajectory x(t), eq. (9) calls for a maximization over all
piecewise constant functions of time, s(t). In practice, this maximization
can be performed by discretizing the time axis and applying a dynamic
programming (or forward-backward) procedure. The resulting segmentations
will be optimal at some finite temporal resolution, \Deltat. For example,
let ff i (t) denote the log-likelihood of the most probable segmentation, ending
in state i, of the subtrajectory up to time t. Starting from the initial
condition
\Theta
is the discrete delta function. Also, at each time step, let \Psi j (t+
\Deltat) record the value of i that maximizes the right hand side of eq. (10).
Suppose that the Markov process terminates at time . Enforcing the
endpoint condition s we find the most likely segmentation by
back-tracking:
These recursions yield a segmentation that is optimal at some finite temporal
resolution \Deltat. Generally speaking, by choosing \Deltat to be sufficiently
small, one can minimize the errors introduced by discretization. In prac-
tice, one would choose \Deltat to reflect the time scale beyond which it is
not necessary to consider changes of state. For example, in pen-based
handwriting recognition, \Deltat might be determined by the maximum pen
velocity; in automatic speech recognition, by the sampling rate and frame
rate.
Other types of inferences can be made from the distribution, eq. (8).
For example, one can compute the marginal probability, Pr
that the Markov process terminates at precisely the observed time. Simi-
larly, one can compute the posterior probability,
end], that at an earlier moment in time, t1 , the variable s was in state i.
These inferences are made by summing the probabilities in eq. (8) over all
segmentations that terminate precisely at time . This sum is performed
by discretizing the time axis and applying a forward-backward procedure
similar to eqs. (10-11). These algorithms have essentially the same form
as their counterparts in hidden Markov models (Rabiner & Juang, 1993).
3 Learning from examples
The learning problem in MPCs is to estimate the parameters f i
in eq. (6) from examples of segmented (or non-segmented) curves. Our
first step is to assume a convenient parameterization for the metrics, g i (x),
that compute arc lengths. We then show how to fit these metrics, along
with the parameters i and a ij , by maximum likelihood estimation.
3.1 Parameterizing the metric
A variety of parameterizations can be considered for the metrics, g i (x).
The simplest possible form is a Euclidean metric, where g i (x) does not
have any dependence on the point x. Such a metric has the virtue of
simplicity, but it is not very powerful in terms of what it can model. In
this paper, we consider the more general form:
where \Phi i (x) is a positive scalar-valued function of x, and oe i is a positive-definite
matrix with joe 1. Eq. (12) is a conformal transformation
(Wald, 1984) of a Euclidean metric-that is, a non-Euclidean metric in
which all the dependence on x is captured by a scalar prefactor.
conformal transformation is one that locally preserves angles, but not
distances.) Eq. (12) strikes one possible balance between the confines
of Euclidean geometry and the full generality of Riemannian manifolds.
The determinant constraint joe imposed to avoid the degenerate
solution 0, in which every trajectory is assigned zero arc length.
Note that we have defined the metric g i (x) in terms of the inverse of oe
this turns out to simplify the parameter reestimation formula for oe i , given
later in the section.
The form of the metric determines the nature of the learning problem
in MPCs. For the choice in eq. (12), one must estimate the functions
the matrices oe i , the decay parameters i and the transition
probabilities a ij . In this section, we will consider the functions \Phi i (x) as
fixed or pre-determined, leaving only the parameters oe i , i , and a ij to be
estimated from training data. Later, in section 4.2, we will suggest a particular
choice for the functions \Phi i (x) based on the relationship between
MPCs and hidden Markov models.
3.2 Labeled examples
Suppose we are given examples of segmented trajectories, fx ff (t); s ff (t)g,
where the index ff runs over the examples in the training set. As short-
hand, let I iff (t) denote the indicator function that selects out segments
associated with state i:
I iff
Also, let ' iff denote the total arc length traversed by state i in the ffth
example:
Z
dt I iff (t)
\Theta
In this paper we view learning as a problem in maximum likelihood esti-
mation. Thus we seek the parameters that maximize the log-likelihood:
ff
iff
is the overall number of observed transitions from state i to
state j. Eq. (15) follows directly from the distribution over segmentations
in eq. (8). Note that the first two terms measure the log-likelihood
of observed segments in isolation, while the last term measures the log-likelihood
of observed transitions.
Eq. (15) has a convenient form for maximum likelihood estimation.
In particular, for fixed oe i , there are closed-form solutions for the optimal
values of i and a ij ; these are given by:
ff
These formulae are easy to interpret. The transition
probabilities a ij are determined by observed counts of transitions, while
the decay parameters i are determined by the mean arc lengths traversed
in each state.
In general, we cannot find closed-form solutions for the maximum likelihood
estimates of oe i . However, we can update these matrices in an iterative
fashion that is guaranteed to increase the log-likelihood at each step.
Denoting the updated matrices by ~ oe i , we consider the iterative scheme
(derived in the appendix):
ff
Z
dt I iff (t)
x ff
ff
\Theta
x ff2
where the constant c i is determined by the determinant constraint j~oe
1. The reestimation formula for oe i involves a sum and integral over all
segments assigned to the ith state of the MPC. In practice, the integral is
evaluated numerically by discretizing the time axis. By taking gradients
of eq. (15), one can show that the fixed points of this iterative procedure
correspond to stationary points of the log-likelihood. A proof of monotonic
convergence is given in the appendix.
In the case of labeled examples, the above procedures for maximum
likelihood estimation can be invoked independently for each state i. One
first iterates eq. (18) to estimate the matrix elements of oe i . These parameters
are then used to compute the arc lengths, ' iff , that appear in eq. (14).
Given these arc lengths, the decay parameters and transition probabilities
follow directly from eqs. (16-17). Thus the problem of learning given
labeled examples is relatively straightforward.
3.3 Unlabeled examples
In an unsupervised setting, the learner does not have access to labeled ex-
amples; the only available information consists of the trajectories x ff (t),
as well as the fact that each process terminates at some time ff . The goal
of unsupervised learning is to maximize the log-likelihood that for each
trajectory x ff (t), some probable segmentation can be found that terminates
at precisely the observed time. The appropriate marginal probability
is computed by summing Pr[s(t)jx(t)] over allowed segmentations, as
described at the end of section 2.3.
The maximization of this log-likelihood defines a problem in hidden
variable density estimation. The hidden variables are the states of the
Markov process. If these variables were known, the problem would reduce
to the one considered in the previous section. To fill in these missing val-
ues, one can use the Expectation-Maximization (EM) algorithm (Baum,
1972; Dempster, Laird, & Rubin, 1976). Roughly speaking, the EM algorithm
works by converting the maximization of the hidden variable
problem into a weighted version of the problem where the segmentations,
s ff (t), are known. The weights are determined by the posterior probabil-
ities, Pr[s ff (t)jx ff (t); s ff ( ff derived from the current parameter
estimates.
We note that eqs. (10-11) suffice to implement an extremely useful
approximation to the EM algorithm in MPCs. This approximation is to
compute, based on the current parameter estimates, the optimal segmen-
tation, s
ff (t), for each trajectory in the training set; one then re-estimates
the parameters of the Markov process by treating the inferred segmen-
tations, s
ff (t), as targets. This approximation reduces the problem of
parameter estimation to the one considered in the previous section. It
can be viewed as a winner-take-all approximation to the full EM algo-
rithm, analogous to the Viterbi approximation for hidden Markov models
(Rabiner & Juang, 1993).
Essentially the same algorithm can also be applied to the intermediate
case of partially labeled examples. In this setting, the state sequences are
specified, but not the segment boundaries; in other words, examples are
provided in the form:
The ability to handle such examples is important for two reasons: first,
because they provide more information than unlabeled examples, and sec-
ond, because complete segmentations in the form of eq. (2) may not be
available. For example, in the problem of automatic speech recognition,
phonetic transcriptions are much easier to obtain than phonetic align-
ments. As before, we can view the learning problem for such examples
as one in hidden variable density estimation. Knowledge of the state sequence
is incorporated into the EM algorithm by restricting the forward-backward
procedures to consider only those paths that pass through the
desired sequence.
4 Observations
In this section, we present some extensions to MPCs and discuss how they
relate to other probabilistic models for trajectory segmentation.
4.1 Extensions to MPCs
MPCs can accomodate more general measures of distance than the one
presented in eq. (4). For example, let
xj denote the unit tangent
vector along the curve of x, where j
simple extension of
eq. (4) is to consider d'
x, where g(x; u) depends not only
on the point x, but also on the tangent vector u. This extension enables
one to assign different distances to time-reversed trajectories, as opposed
to the measure in eq. (5), which does not depend on whether the curve
is traversed forwards or backwards. More generally, one may incorporate
any of the vector invariants
dt
into the distance measure. These vectors characterize the local geometry
at each point along the curve; in particular, eq. (20) gives the point x for
the unit tangent vector u for the local curvature for
etc. Incorporating higher-order derivatives in this way enables one to use
fairly general distance measures in MPCs.
The invariance to nonlinear warpings of time can also be relaxed in
MPCs. This is done by including time as a coordinate in its own right-
i.e., by operating on the spacetime trajectories computing
generalized arc lengths, d'
z T G(x)
z, where
G(x) is a spacetime metric-a (D+1)-dimensional square matrix for each
point x. The effect of replacing
x by
z is to allow stationary portions of
the trajectory to contribute to the integral
R
d'. The admixture of
space and time coordinates in this way is an old idea from physics, originating
in the theory of relativity, though in that context the metric is
negative-definite (Wald, 1984). Note that this extension of MPCs can
also be combined with the previous one-for instance, by incorporating
both tangent vectors and timing information into the distance measure.
4.2 Relation to hidden Markov models and previous
work
Hidden Markov models (HMMs), currently the most popular approach to
trajectory segmentation, are also based on probabilistic methods. These
models parameterize joint distributions of the form:
Y
There are several important differences between HMMs and MPCs (be-
sides the trivial one that HMMs are formulated for discrete-time pro-
cesses). First, the predictions of HMMs are not invariant to nonlinear
warpings of time. For example, consider the pair of trajectories x t and y t ,
where y t is created by the doubling operation:
ae
y
Both trajectories trace out the same curve, but y t does so at half the rate
as x t . In general, HMMs will not assign these trajectories the same like-
lihood, nor are they guaranteed to infer equivalent segmentations. This
is true even for HMMs with more sophisticated durational models (Ra-
biner & Juang, 1993). By contrast, these trajectories will be processed
identically by MPCs based on eqs. (5-6).
The states in HMMs and MPCs are also weighted differently by their
inference procedures. On one hand, in HMMs, the contribution of each
state to the log-likelihood grows in proportion to its duration in time
(i.e., to the number of observations attributed to that state). On the
other hand, in MPCs, the contribution of each state grows in proportion
to its arc length. Naturally, the weighting by arc length attaches a more
important role to short-lived states with non-stationary trajectories. The
consequences of this for automatic speech recognition are discussed in
section 5.
HMMs and MPCs also differ in what they try to model. HMMs parameterize
joint distributions of the form given by eq. (21). Thus, in HMMs,
parameter estimation is directed at learning a synthesis model, Pr[xjs],
while in MPCs, it is directed at learning a segmentation model, Pr[sjx].
The direction of conditioning on x is a crucial difference. In HMMs,
one can generate artificial trajectories by sampling from the joint distribution
Pr[s; x]; MPCs, on the other hand, do not provide a generative
model of trajectories. The Markov assumption is also slightly different
in HMMs and MPCs. HMMs observe the conditional independence
such that the state, s t+1 , is independent of
the observation, x t , given the previous state, s t . By contrast, in MPCs the
evolution of p i (t), as given by eq. (6), depends explicitly on the trajectory
at time t-namely, through the arc length [
x] 1=2 .
While MPCs do not provide a generative model of trajectories, we
emphasize that they do provide a generative model of segmentations. In
particular, one can generate a state sequence s0s1s2
the start state and sn+1 is the end state, by sampling from the transition
probabilities a ij . (Here, the sequence length n is not fixed in advance,
but determined by the sampling procedure.) Moreover, for each state
sk , one can generate an arc length 'k by sampling from the exponential
distribution,
Together, these sampled values of sk
and 'k define a segmentation that can be grafted onto any (sufficiently
long) trajectory x(t). Importantly, this interpretation of MPCs allows
them to be combined hierarchically with other generative models (such as
language models in automatic speech recognition).
Finally, we note that one can essentially realize HMMs as a special
case of MPCs. This is done by computing arc lengths along spacetime
trajectories as described in section 4.1. In this setting,
one can mimic the predictions of HMMs by setting the oe i matrices to have
only one non-zero element (namely, the diagonal element for delta-time
contributions to the arc length) and by defining the functions \Phi i (x) in
terms of the HMM emission probabilities Pr(xji) as:
This equation sets up a correspondence between the emission log-probabilities
in HMMs and the arc lengths in MPCs. Ignoring the effects of transition
probabilities (which are often negligible), an MPC initialized by
eq. (23) and this singular choice of oe i will reproduce the segmentations
of its "parent" HMM. This correspondence is important because it allows
one to bootstrap an MPC from a previously trained HMM. (Also, despite
many efforts, we have not found a more effective way to estimate the
functions \Phi i (x).)
In terms of previous work, our motivation for MPCs resembles that
of Tishby (1990), who several years ago proposed a dynamical systems
approach to speech processing. Because MPCs exploit the notion that
trajectories are continuous, they also bear some resemblance to so-called
segmental HMMs (Ostendorf, Digalakis, & Kimball, 1996). MPCs nevertheless
differ from segmental HMMs in two important respects: (i) the
treatment of arc length-particularly, the estimation of a metric g i (x) for
each hidden state of the Markov process, and (ii) the emphasis on learning
a segmentation model Pr[sjx], as opposed to a synthesis model, Pr[xjs],
that is even more complicated than the one in ordinary HMMs.
5 Automatic speech recognition
The Markov processes in this paper were conceived as models for automatic
speech recognition (Rabiner & Juang, 1993). Speech recognizers
take as input a sequence of feature vectors, each of which summarizes the
acoustic properties of a short window of speech. Acoustic feature vectors
typically have ten or more components, so that a particular sequence of
feature vectors can be viewed as tracing out a multidimensional curve.
The goal of a speech recognizer is to translate this curve into a sequence
of words, or more generally, a sequence of sub-syllabic units known as
phonemes. Denoting the feature vectors by x t and the phonemes by s t ,
we can view this problem as the discrete-time equivalent of the segmentation
problem in MPCs.
5.1 Invariances of speech
Though HMMs have led to significant advances in automatic speech recog-
nition, they are handicapped by certain weaknesses. One of these is the
poor manner in which they model variations in speaking rate (Siegler &
Stern, 1995). Typically, HMMs make more errors on fast speech than
slow speech. A related effect, occurring at the phoneme level, is that
consonants are confused more often than vowels. Generally speaking,
consonants have short-lived, non-stationary acoustic signatures; vowels,
just the opposite. Thus, at the phoneme level, we can view consonantal
confusions as a consequence of locally fast speech.
It is tempting to imagine that HMMs make these mistakes because
they do not incorporate an invariance to nonlinear warpings of time.
While this oversimplifies the problem, it is clear that HMMs have systemic
biases. In HMMs, the contribution of each state to the log-likelihood
grows in proportion to its duration in time. Thus decoding procedures
in HMMs are inherently biased to pay more attention to long-lived states
than short-lived ones. In our view, this suggests one plausible explanation
for the tendency of HMMs to confuse consonants more often than vowels.
MPCs are quite different from HMMs in how they weight the speech
signal. In MPCs, the contribution of each state is determined by its arc
length. The weighting by arc length attaches a more important role to
short-lived but non-stationary phonemes, such as consonants. Of course,
one can imagine heuristics in HMMs that achieve the same effect, such
as dividing each state's contribution to the log-likelihood by its observed
(or inferred) duration. Unlike such heuristics, however, the metrics g i (x)
in MPCs are estimated from each state's training data; in other words,
they are designed to reweight the speech signal in a way that reflects the
statistics of acoustic trajectories.
Admittedly, it is oversimplistic to model the effects of speaking rate
by an invariance to nonlinear warpings of time. The acoustic realization
(i.e., spectral profile) of any phoneme does depend to some extent on
the speaking rate, and certain phonemes are more likely to be stretched
or shortened than others. An invariance to nonlinear warpings of time
also presupposes a certain separation of time scales: on one hand, there
is the time scale at which acoustic features (such as spectral energies,
formant bandwidths, or pitch) are extracted from the speech signal; on
the other, there is the time scale at which these features tend to vary.
These time scales need to be well separated for MPCs to have a meaningful
interpretation. Whether this is true obviously depends on the choice of
acoustic features.
Despite these caveats, we feel that MPCs provide a compelling alternative
to traditional methods. While we have motivated MPCs by
appealing to the intrinsic geometric properties of curves, we emphasize
that for automatic speech recognition, it is critically important to relax
the invariance to nonlinear warpings of time. This is done by computing
arc lengths along spacetime trajectories, as described in section 4.1. This
extension allows MPCs to incorporate both movement in acoustic feature
space and duration in time as measures of phonemic evolution. Both of
these measures are important for speech recognition.
5.2 Experiments
Both HMMs and MPCs were used to build connected speech recogniz-
ers. Training and test data came from speaker-independent databases of
telephone speech. All data was digitized at the caller's local switch and
transmitted in this form to the receiver. For feature extraction, input
telephone signals (sampled at 8 kHz and band-limited between 100-3800
were pre-emphasized and blocked into 30ms frames with a frame shift
of 10ms. Each frame was Hamming windowed, autocorrelated, and processed
by a linear predictive coding (LPC) cepstral analysis to produce a
vector of 12 liftered cepstral coefficients (Rabiner & Juang, 1993). The
feature vector was then augmented by its normalized log energy value,
as well as temporal derivatives of first and second order. Overall, each
frame of speech was described by 39 features. These features were used
differently by HMMs and MPCs, as described below.
Recognizers were evaluated on two tasks. The first task was recognizing
New Jersey town names (e.g., Hoboken). The training data for this
task (Sachs et al, 1994) consisted of 12100 short phrases, spoken in the
seven major dialects of American English. These phrases, ranging from
two to four words in length, were selected to provide maximum phonetic
coverage. The test data consisted of 2426 isolated utterances of 1219 New
Jersey town names and was collected from nearly 100 speakers. Note that
the training and test data for this task have non-overlapping vocabularies.
Baseline recognizers were built using 43 left-to-right continuous-density
HMMs, each corresponding to a context-independent English phone. Phones
were modeled by three-state HMMs, with the exception of background
mixture components HMM error rate (%) MPC error rate (%)
Table
1: Error rates on the task of recognizing New Jersey town names versus
the number of mixture components per hidden state.
noise, which was modeled by a single state. State emission probabilities
were computed by Gaussian mixture models with diagonal covariance ma-
trices. Different sized models were trained using
mixture components per hidden state; for a particular model, the number
of mixture components was the same across all states. Mixture model
parameters were estimated by a Viterbi implementation of the Baum-Welch
algorithm. Transition probabilities were assigned default values; in
particular, all transitions allowed by the task grammar were assumed to
be equally probable. (This assumption simplifies the forward-backward
procedure in large state spaces.)
MPC recognizers were built using the same overall grammar. Each
hidden state in the MPCs was assigned a metric g i
(x). The
functions \Phi i (x) were initialized (and fixed) by the state emission probabilities
of the HMMs, as given by eq. (23). The matrices oe i were estimated
by iterating eq. (18). We computed arc lengths along the 14 dimensional
spacetime trajectories through cepstra, log-energy, and time. Thus each
oe i was a 14 \Theta 14 symmetric matrix applied to tangent vectors consisting
of delta-cepstra, delta-log-energy, and delta-time. Note that these MPCs
made use of both extensions discussed in section 4.1. Curiously, our best
results for MPCs were obtained by setting as opposed to estimating
the values of these decay parameters from training data. We suspect
this was due to the highly irregular (i.e., non-exponential) distribution of
arc lengths in the state representing silence and background noise. As in
the HMMs, transition probabilities were assigned default values.
Table
1 shows the results of these experiments comparing MPCs to
HMMs. The error rates in these experiments measure the percentage of
town names that were incorrectly recognized. For various model sizes (as
measured by the number of mixture components), we found the MPCs
to yield consistently lower error rates than the HMMs. The graph in
figure 2 plots these error rates versus the number of modeling parameters
per hidden state. This graph shows that the MPCs are not outperforming
the HMMs merely because they have extra modeling parameters
(i.e., the oe i matrices). The beam widths for the decoding procedures in
these experiments were chosen so that corresponding recognizers activated
roughly equal numbers of arcs.
The second task in our experiments involved the recognition of connected
alpha-digits (e.g., N Z 3 V J 2). The training and test
data consisted of 14622 and 7255 utterances, respectively. Recognizers
parameters per state
error
rate
NJ town names
Figure
2: Error rates for HMMs (dashed) and MPCs (solid) on New Jersey town
names versus the number of parameters per hidden state.
mixture components HMM error rate (%) MPC error rate (%)
Table
2: Word error rates on the task of recognizing connected alpha-digits
versus the number of mixture components per hidden state.
were built from 285 sub-word HMMs/MPCs, each corresponding to a
context-dependent English phone. The recognizers were trained and evaluated
in the same way as the previous task, except that we measured word
error rates instead of phrase error rates. The results, shown in table 2 and
figure 3, follow a similar pattern as before, with the MPCs outperforming
the HMMs.
6 Discussion
The experimental results in the previous section demonstrate the viability
of MPCs for automatic speech recognition. Nevertheless, several issues
require further attention. One important issue is the problem of feature
selection-namely, how to extract meaningful trajectories from the speech
signal. In this work, we used the same cepstral features for both MPCs
and HMMs; this was done to facilitate a side-by-side comparison. It is
doubtful, however, that cepstral trajectories (which are not particularly
smooth) provide the most meaningful type of input to MPCs. Intuitively,
one suspects that pitch contours or formant trajectories would provide
smoother, more informative trajectories than cepstra; unfortunately, these
types of features are difficult to track in noisy or unvoiced speech. Further
work in this area is needed.
Another important issue for MPCs is learning-namely, how to param-
parameters per state
error
rate
alpha-digits
Figure
3: Word error rates for HMMs (dashed) and MPCs (solid) on connected
alpha-digits versus the number of parameters per hidden state.
eterize and estimate the metrics g i (x) from trajectories x(t). We stress
that the learning problem in MPCs has many more degrees of freedom
than the corresponding one in HMMs. In particular, whereas in HMMs
one must learn a distribution Pr(xji) for each hidden state, in MPCs one
must learn a metric g i (x). The former is a scalar-valued function over
the acoustic feature space; the latter, a matrix-valued function. It is fair
to say that we do not understand how to parameterize metrics nearly as
well as probability distributions. Certainly, MPCs have the potential to
exploit more sophisticated metrics than the one studied in this paper.
Moreover, it is somewhat unsatisfactory that the metric in eq. (12) relies
on a trained HMM for its initialization.
On a final note, we emphasize that the issues of feature selection and
parameter estimation in MPCs are not independent. The cepstral front
end in today's speech recognizers is extremely well matched to the HMM
back end; indeed, one might argue that over the last decade of research,
each has been systematically honed to compensate for the other's failings.
It seems likely that future progress in automatic speech recognition will
require concerted efforts at both ends. Thus we hope that besides providing
an alternative to HMMs, MPCs also encourage a fresh look at the
signal processing performed by the front end.
Reestimation formula
In this appendix we derive the reestimation formula, eq. (18) and show
that it leads to monotonic increases in the log-likelihood, eq. (15). Recall
that in MPCs, the probability of remaining in a state decays exponentially
as a function of the arc length. It follows that maximizing the
log-likelihood in each state is equivalent to minimizing its arc length. For
the choice of metric in eqs. (12) and (23), the learning problem reduces
to optimizing the matrices oe i . For simplicity, consider the arc length of a
z
Figure
4: The square root function is concave and upper bounded by p
The bounding tangents are shown for
single trajectory under this metric:
Z dt
\Theta
Here we have written the arc length '(oe) explicitly as a function of the
matrix oe, and we have suppressed the state index for notational convenience
Our goal is to minimize '(oe), subject to the determinant constraint
1. Note that the matrix elements of oe \Gamma1 appear nonlinearly in
the right hand side of eq. (24); thus it is not possible to compute their
optimal values in closed form. As an alternative, we consider the auxiliary
Z dt
ae
x
\Theta
oe
where ae is a D \Theta D positive-definite matrix like oe. It follows directly from
the definition in eq. (25) that trivially, we
observe that Q(ae; ae) Q(ae; oe) for all positive definite matrices ae and oe.
This inequality follows from the concavity of the square root function, as
illustrated in figure 4.
Consider the value of ae which minimizes Q(ae; oe), subject to the determinant
constraint We denote this value by ~
Because the matrix elements of ae \Gamma1 appear linearly in Q(ae; oe), this minimization
reduces to computing the covariance matrix of the tangent vector
x, as distributed along the trajectory x(t). In particular, we have:
~ oe /
Z dt
x
where the constant of proportionality is determined by the constraint
To minimize '(oe) with respect to oe, we now consider the iterative
procedure where at each step we replace oe by ~
oe. We observe that:
Q(~oe; oe) due to concavity
Q(oe; oe) since ~
with equality generally holding only when ~ In other words, this
iterative procedure converges monotonically to a local minimum of the
arc length, '(oe). Extending this procedure to combined arc lengths over
multiple trajectories, we obtain eq. (18).
Acknowledgements
The authors thank F. Pereira for many helpful comments about the presentation
of these ideas.
--R
An inequality and associated maximization technique in statistical estimation for probabilistic functions of a markov process.
Maximum likelihood from incomplete data via the em algorithm.
Differential Geometry of Curves and Sur- faces
Pattern Classification and Scene Analysis.
From HMMs to segment models: a unified view of stochastic modeling for speech recognition.
Fundamentals of Speech Recognition.
United States English subword speech data.
On the effects of speech rate in large vocabulary speech recognition systems.
Efficient pattern recognition using a new transformation distance.
A dynamical system approach to speech process- ing
General Relativity.
--TR
Fundamentals of speech recognition
Efficient Pattern Recognition Using a New Transformation Distance
--CTR
Yon Visell, Spontaneous organisation, pattern models, and music, Organised Sound, v.9 n.2, p.151-165, August 2004 | automatic speech recognition;markov processes on curves;hidden Markov models |
370225 | Simplicial Properties of the Set of Planar Binary Trees. | Planar binary trees appear as the the main ingredient of a new homology theory related to dialgebras, cf.(J.-L. Loday, i>C.R. Acad. Sci. Paris 321 (1995), 141146.) Here I investigate the simplicial properties of the set of these trees, which are independent of the dialgebra context though they are reflected in the dialgebra homology.The set of planar binary trees is endowed with a natural (almost) simplicial structure which gives rise to a chain complex. The main new idea consists in decomposing the set of trees into classes, by exploiting the orientation of their leaves. (This trick has subsequently found an application in quantum electrodynamics, c.f. (C. Brouder, On the Trees of Quantum Fields, Eur. Phys. J. C12, 535549 (2000).) This decomposition yields a chain bicomplex whose total chain complex is that of binary trees. The main theorem of the paper concerns a further decomposition of this bicomplex. Each vertical complex is the direct sum of subcomplexes which are in bijection with the planar binary trees. This decomposition is used in the computation of dialgebra homology as a derived functor, cf. (A. Frabetti, Dialgebra (co) Homology with Coefficients, Springer L.N.M., to appear). | Introduction
The planar binary trees have been widely studied for their combinatorial properties, which relate
them to permutations, partition of closed strings and other finite sets. In fact, the cardinality
of the set Y n of planar binary trees with leaves and one root is the Catalan number
n!(n+1)! , which is well known to have many combinatorial interpretations [G].
In 1994, in the paper [L] written by J.-L. Loday, these trees appear as the main ingredient in
the homology of a new kind of algebras, called dialgebras , equipped with two binary associative
operations. Instead of the single copy
A\Omega n , which forms the module of Hochschild n-chains of
an associative algebra A, Loday finds out that the module of n-chains of a dialgebra D contains
c n copies of
D\Omega n . The crucial observation is that labelling each copy of
D\Omega n by an n-tree leads
to a very natural and simple definition of the face maps: the i-th face of an n-tree is obtained
by deleting its i-th leaf. Hence the set of rooted planar binary trees acquires an important
role in the simplicial context of dialgebra homology. The study of this homology leads to the
investigation of the simplicial structure of the set of trees, which is completely independent of
the dialgebra context and constitutes the content of this paper.
The set of trees can be equipped with degeneracy operators s j which satisfy all the simplicial
relations except that s i s . For such a set, which is called almost-simplicial , some of the
properties of simplicial sets still hold, for instance the Eilenberg-Zilber Theorem, c.f. [I].
Our main idea consists in decomposing the set of trees into classes, by exploiting the orientation
of their leaves. This trick is purely combinatorial (set-theoretical), and it is explained in
section 1. In section 2 we show that this decomposition is compatible with the almost-simplicial
strucure and yields a chain bicomplex whose total chain complex is that of binary trees. Conse-
quently, in the application, we obtain a canonical spectral sequence converging to the dialgebra
homology.
Our main theorem concerns a further decomposition of this bicomplex. We show that each
vertical complex is in fact the direct sum of subcomplexes, that we call towers . It turns out that
these towers are in bijection with the planar binary trees. The vertical complex in degree p is
the some of the towers indexed by the planar binary trees of order p. Again, this decomposition
is purely combinatorial. An illustrative picture of the situation is placed at the beginning of
section 3, where we define the vertical towers, by means of a new kind of degeneracy operators,
and prove the decomposition theorem (3.11).
The main application of this decomposition is the interpretation of dialgebra homology as a
Tor functor given in [F2].
1 Classes of planar binary trees
In this section we introduce some classes of planar binary trees and compute their cardinality.
1.1 - Planar binary trees. By a planar binary tree we mean an open planar graph with 3-fold
internal vertices. Among the external vertices we fix a preferred one and call it root . Usually
we draw the root at the bottom of the tree. The remaining external vertices, called leaves , are
drawn at the top of the tree:
leaves
root
For any natural number n, let Y n be the set of planar binary trees with
we label as 0; 1; :::; n from left to right. Given a tree y with leaves, we call order of y
the natural number jyj := n. Notice that the order of a tree is the number of internal vertices.
Therefore, for we consider the unique tree with one leaf, the root and no internal vertices.
Here is a picture of the sets Y n for 3:
The cardinality of the set Y n is given by the Catalan number (see [K], [A], [B] and [G])
Hence the sets Y 0 , Y 1 , Y 2 , . have cardinality 1, 1, 2, 5, 14, 42, 132 and so on.
In the sequel we abbreviate "planar binary tree" into "tree".
1.2 - Classes of trees. For any couple of natural numbers p; q, let Y p;q be the set of (p+q+1)-
trees with p leaves oriented like n (excluded the 0-th leaf), and q leaves oriented like = (excluded
the last one). The class of an n-tree is specified by the component Y p;q ae Y n , with
to which the tree belongs. For example:
For any p; q - 0, the set Y p;0 (resp. Y 0;q ) contains only one tree (resp. ), called comb.
The sets Y p;q are obviously disjoint, for different couples (p; q), and their disjoint union covers
the set Y p+q+1 . Hence we have
G
p+q+1=n
For example, Notice that the number of
classes in the set Y n is precisely ng.
1.3 - Proposition. Let c p;q be the cardinality of the set Y p;q . Then
p! q!
c p;q02q
Figure
1: The cardinality of the classes of rooted planar binary trees.
Lemma. The cardinality c p;q of the set Y p;q is c
finally, for any p; q - 1, it satisfies the relation
Proof. When there exists only one (0; 0)-tree, namely . Thus c 0;0 = 1. Similarly,
there exists only one (p; 0)-tree, namely the comb tree . The same for
When q)-tree y can have one of the following three shapes:
Thus, for any p; q - 1, c p;q is the sum of the cardinality of these three disjoint sets. 2
Proof of (1.3). We have to count the number c p;q of (p; q)-trees, for Consider the values
c p;q as coefficients of Taylor's expansion of a function of two variables x and y, around the point
(0; 0), and put
It is straightforward to show that the relations of lemma (1.4) lead us to the quadratic equation
in the indeterminate f(x; y). The solution of this equation
is the function f(x;
. By direct computations, choosing
the sign "\Gamma" before the root, we obtain the values
In fact
where g n;0 (x; y) is a polynomial with g n;0 (0;
2 is such
that \Delta(0;
@y m . Therefore the function f(x; y) has itself Taylor's
expansion
@ n+m f(0;
and the coefficients c p;q satisfy
@ n+m f(0;
Again by direct computation we obtain
@ n+m f(x; y)
where g n;m (0; Hence we get the final formula
@x p+1 @y q+1
Remark. The Catalan number c n can be given in terms of binomial coefficients, c n =n+1
. Hence the discrete convolution formula for binomial coefficients, namely
!/
evaluated at exactly the identity
!/
p! q!
Simplicial structure on the set of binary trees
In this section we recall the existence of an almost-simplicial structure on the family of planar
binary trees which was previously introduced in [L] and [F1], and show that it gives rise to an
acyclic complex.
The faces and degeneracies are compatible with the decomposition of the set Y n into the
classes Y p;q . As a result, there exists a chain bicomplex whose total chain complex is that of
planar binary trees. An important application, given in [F2], concerns the dialgebra homology
defined in [L]: the bicomplex of trees induces a non trivial spectral sequence converging to the
dialgebra homology.
Faces and degeneracies
2.1 - Pseudo and almost-simplicial sets. We recall that a pre-simplicial set E is a collection
of sets E n , one for each n - 0, equipped with face maps d
satisfying the relations
(d) d i d
Given a field k, we consider the k-linear span k[E n ] of the elements of the set E n . The faces give
rise to the boundary operator
Therefore any pre-simplicial set fE n ; d i g always gives rise to a chain complex (k[E ]; d).
We also recall that a simplicial set is equipped with degeneracy maps
any the relations
By definition, a pseudo-simplicial set is a family of sets endowed with faces and degeneracies
satisfying relations (d) and (ds) but not necessarily relations (s) (see [T-V] and [I]).
We call almost-simplicial a pseudo-simplicial set whose degeneracies satisfy relations
except for which means that s i s in general (but not necessarily)
all simplicial or almost-simplicial sets are pseudo-simplicial,
fsimplicial setsg ae falmost-simplicial setsg ae fpseudo-simplicial setsg ae fpre-simplicial setsg:
Let us consider now the set of binary trees described in section 1. Trees can be obtained one
from another by repeating two basic operations: deleting and adding leaves. The operation of
deleting leaves allows us to define face maps Y n \Gamma! Y n\Gamma1 and thus to consider the associated
chain complex k[Y ] for any given field k. The operation of adding leaves allows us to define
degeneracy maps Y n \Gamma! Y n+1 .
2.2 - Face maps on trees. For any n - 0, and any th face the map
which associates to an n-tree y the (n \Gamma 1)-tree d i (y) obtained by deleting the i th leaf from y.
For example:
2.3 - Lemma. The face maps d i satisfy the above relations (d). Hence, given a field k, the
sequence
is a chain complex, with boundary operator
Proof. In fact, since the leaf number j of the tree y is the leaf number of the tree d i (y), the
maps d i d j and d j \Gamma1 d i produce the same modification: they delete the leaves number i and j. 2
2.4 - Degeneracies on trees. For any n - 0, and any th degeneracy the
which bifurcates the j th leaf of an n-tree, i.e. which replaces the j th leaf by the branch . For
example:
2.5 - Lemma. The degeneracy maps satisfy the above relations (ds). They also satisfy (s) for
hence the module of binary trees fk[Y n ]; d is almost-simplicial.
Proof. (ds) The operations d i s j on a tree y first adds a leaf replacing the leaf number j by the
branch , and then deletes the leaf number i. So, when i ! j, it is clear that we obtain the
same tree if we first delete the i th leaf and then bifurcate the original j th leaf, which is now
labeled by j \Gamma 1.
1, the operator d i evidently brings the tree s j (y) (with branch labeled by
back to the original tree.
Finally, when i 1, we can invert the operations after having observed that the leaf number
i of the tree s j (y) is the leaf number in the tree y.
The operation s i s j on a tree y first bifurcates the leaf number j and then bifurcates
the leaf number i. So it is clear that if i ! j the same tree can be obtained performing the
two bifurcations in incerted order, observing that the j th leaf of y is the leaf number
(Notice that for replace the i th leaf with the branch , while the
operator produces the branch , hence they do not coincide.) 2
2.6 - Theorem. For any field k, the chain complex of binary trees is acyclic, that is
ae k; for
Proof. It is straightforward to check that the map
is an extra-degeneracy (i.e.
satisfies relations (s)) for the almost-simplicial set of binary trees. It follows that dh
hence the induced map h : k[Y n] \Gamma! k[Y n\Gamma1 ] is a homotopy between the maps id and 0. 2
Bicomplex of trees
The orientation of the leaves of an n-tree, given by the numbers p and q of n- and =-leaves,
permits us to define a double complex of binary trees, by considering maps which do not change
one of the two numbers p, q.
However, in general, a map defined on a set of trees can not be specified to preserve globally
one orientation, since it usually changes both values p and q acting on different trees. This
happens, in particular, to the face d when restricted to
each component Y p;q ae Y n , it takes value in one of the two components Y p\Gamma1;q , Y p;q\Gamma1 of Y
depending on the tree y of Y p;q . Consider, for example, the face d 0 restricted to the component
takes value in Y 0;1 on , and in Y 1;0
on and .
Y 1;0
Y 0;1
This motivates the following definition.
2.7 - Oriented maps. be a linear map, and consider its restriction to
each component Y p;q ae Y n . We call oriented maps of f the following
defined as
ae f(y); if f(y) 2 k[Y p\Gamma(n\Gammam);q ],
0; otherwise,
vertical defined as
ae f(y); if f(y) 2 k[Y p;q\Gamma(n\Gammam) ],
0; otherwise.
In particular, we can consider the oriented maps defined by the faces d i and the degeneracies s j .
2.8 - Bicomplex of trees. For any natural numbers p; q, take k[Y p;q ] as the module of (p; q)-
chains, and define horizontal and vertical boundary operators d
respectively as
d h :=
i and d v :=
2.9 - Lemma. The oriented boundaries defined above satisfy d h d
are chain complexes for any p; q - 0.
Proof. It suffices to show that the oriented faces d h
i and d v
still satisfy the simplicial relations
(d) of (2.1). Let us show, for instance, that d v
i for any i ! j. It suffices to prove that
is a vertical map (i.e. d i only if d j \Gamma1 d i is vertical. A face d i
deletes a =-leaf if the i th -leaf itself is oriented like =, i.e. , or if it is a n-leaf such that
. Then
it is easy to see that both d i d j and d j \Gamma1 d i delete two =-leaves only on the four combinations of
these two possibilities for the leaves i and j. 2
2.10 - Remark. By assumption, in a pre-simplicial module the faces are all non-zero maps.
Therefore, even if the horizontal (resp. vertical) faces satisfy relations (d), the horizontal families
vertical families k[Y p; ]) are not considered to be pre-simplicial modules.
2.11 - Proposition. The triple (k[Y ; ]; d v ; d h ) forms a chain bicomplex, whose total complex
is the shifted complex of binary trees (k[Y +1 ]; d).
\Gamma\Psi
oe
oe
oe
oe
oe
oe
oe
oe
oe
Figure
2: Bicomplex of rooted planar binary trees.
Proof. On any tree y, the map d i acts either as d h
(because d v
or as d v
(when
an obvious identity d
Consequently,
the boundary operator is the sum . Then we have
From (2.9) it follows that d h d v This show at
the same time that (k[Y ; ]; d h ; d v ) is a bicomplex, and that k[Y +1
2.12 - Remark. The bicomplex of trees gives rise to a spectral sequence
which is zero everywhere, since the complex of trees is acyclic and the E 1 -plane, in a similar way,
can be shown to be zero. However the peculiar structure of trees becomes interesting when k[Y n ]
appear as tensor components of some chains module, as for the chain complex of dialgebras (see
[L], [F1] and [F2]). In this case, the bicomplex of trees permits us to find a spectral sequence
which converges to the homology of the given complex.
3 Decomposition of the bicomplex of trees into towers
In this section we show a technical result which helps drastically in the computation of dialgebra
homology as a derived functor (see [F2]). The main theorem says that any vertical complex
is a direct sum of subcomplexes whose homology can be computed for some dialgebras.
At the same time, being related to intrinsical properties of the trees, this result clarifies the
simplicial structure of the bicomplex. Each subcomplex, called vertical tower and denoted by
is constructed on a single tree, provided that it has all zero vertical faces and called base
tree, by applying all possible vertical increasing maps of degree 1, i.e. by adding =-leaves in all
possible distinct ways. It turns out, due to the particular shape of planar binary trees, that
such towers are all disjoint one from each other and that they cover the whole bicomplex. This
structure yields a decomposition of the bicomplex of trees which has many regularities:
ffl The base trees arising in the vertical chain complex k[Y p; ], for fixed p - 0, are in bijection
with p-trees (see lemma (3.10)), i.e. they are counted exactly by c
ffl The vertical tower T (y), associated to a p-tree y, is a multi-complex with dimension
proposition (3.12)).
ffl The vertical tower T (y), associated to a p-tree y, is a subcomplex of k[Y shifted by
the number of =-leaves of y (excluded its last leaf). This means that if y belongs to the
class Y p 0 ;q 0 of Y p , then Tm (y) ae k[Y p;q 0 +m ] for any m - 0 (see again lemma (3.10)). A
geometrical meaning of the number q 0 is given in the appendix.
We draw in fig. 3 a summarizing picture of the vertical towers at small dimension. The
details of the definitions and proofs are given in the remaining part of this section.
-base tree
r
r
r
r
r
r
r
r
r
r
r
r
Figure
3: Decomposition of the bicomplex of trees into vertical towers.
New kind of degeneracies: grafting operators.
In order to construct a vertical complex on a given tree, we need to introduce a second kind of
increasing maps Y n \Gamma! Y n+1 , besides the usual degeneracies s j .
The operation of adding a leaf to a tree consists, more precisely, in grafting a new leaf into a
given edge of the tree. The degeneracy operators defined in (2.4), in fact, graft a new leaf into
the edge which starts from any existing leaf. Thus, to define the remaining increasing operators,
we need a rule to label the internal edges of a tree.
3.1 - Labels of internal vertices and internal edges. Any binary tree with
and one root has precisely n internal vertices. Let us choose the following rule to label them.
An internal vertex is labeled by i if it closes a descending path which starts between the leaves
number
An internal edge of the tree is the branch delimited by two adjacent vertices, included the
root. We label by i the edge whose 'upper' extreme is a vertex labeled by i. (If we extend this
rule to the external edges, each leaf is labelled as the edge which starts from it.) For instance:
@
@
@
@
@ @
r
r
r
and
@
@
@
@
@ @
@
@
\Gamma24
Labels of internal vertices. Labels of internal edges.
In conclusion, any n-tree has n+1 external edges (the leaves), labeled from 0 to n, and n internal
edges (included the one which ends with the root), labeled from 1 to n.
3.2 - Grafting operators. For any n - 0, and for any th left and right
grafting operator the maps
which graft a new leaf into the i th internal edge of a tree, respectively from the left and from
the right. For example
l
Notice that the operation of grafting a new leaf into an external edge produces the same
result whether it is performed from the left or from the right: it consists in bifurcating the leaf.
Thus, as we said, the grafting operators on external edges coincide with the degeneracies.
We wish to determine whether increasing maps are horizontal or vertical. We show in the
next lemma that the orientation of grafting operators does not depend on the index i nor on
the tree on which the map is acting. Instead, the orientation of the degeneracy s i changes with
the index depending on the particular tree on which it is acting.
3.3 - Lemma. Let p; q be natural numbers, and
1. The left grafters l i are horizontal maps, i.e. l
Similarly, the right grafters r i are vertical maps, i.e. r
2. For any (p; q)-tree y, and for any index i 2 f0; :::; ng, the degeneracy s i is horizontal on
y, i.e. s v
only if the i th leaf of y is oriented like =. Similarly, s i is vertical on y,
i.e. s h
only if the i th leaf of y is oriented like n.
Proof. 1. Any left grafter l i acts by adding a n-leaf, which will be labeled, in the (n
by an integer j 2 f0; ng. The terminal vertex of the new leaf will be consequently labeled
1. Similarly, any left grafter r i acts by adding a =-leaf, which will be labeled, in the
1)-tree, by an integer j 2 f1; 2; 1g. The terminal vertex of the new leaf will be
consequently labeled by j.
2. The map s i acts on the leaf as
, thus s i adds a n-leaf (it is horizontal). Similarly, s i
acts on the leaf as
, thus s i adds a =-leaf (it is vertical). 2
Since we wish to deal with vertical complexes k[Y p; ], throughout the remaining part of this
section we fix a p - 0, and observe (p; q)-trees for different values of q - 0.
The next lemma says weather an increasing map is distinct from any other or produces the
same tree as some other map.
3.4 - Labels of oriented leaves. Let y be a (p; q)-tree, and 1. We define a map
by assigning to the integer i the label of the i th n-leaf
of y, counting leaves from left to right and excluding the 0 th leaf.
Any n-leaf (except the first one) is grafted into a =-leaf (included the last one). Thus there
is a map b y : f1; :::; pg \Gamma! f1; :::; ng, b y
i , which assigns to the integer i the label of the
=-leaf into which the i th n-leaf is grafted, i.e.
a b
Call A(y) := fa y
ng the image of a. Since the p n-leaves of y are distinct by
assumption, the map a y is a bijection between the set f1; :::; pg and the set A(y). Thus we can
also define a ng by b(a y
. Call B(y) := fb y
ng the
image of b.
Some properties of the maps a and b are given in section A.
y be a (p; q)-tree, and
1. The degeneracy maps are all distinct from each other, i.e. for any
then s i (y) 6= s j (y). (In particular this holds for any index in the set A(y).)
2. Any right grafting map into an internal edge labeled as a =-leaf produces the same tree as
some degeneracy map or a right grafting map into an edge labeled as a n-leaf. In other words, for
any index i 2 f1; :::; ng n A(y), there exists an a 2 A(y) such that r i
3. All right grafting maps into internal edges labeled as a n-leaf are distinct from each other
and from any degeneracy map. That is, for any a 2 A(y), r a (y) 6= s a 0
(y) and r a (y) 6= r a 0
(y) for
any a 0 6= a 2 A(y).
Thus, for any (p; q)-tree y, there are precisely vertical non-zero degeneracies
acting on y, namely s
distinct vertical grafting maps, namely r a 1
Proof. The assertion 1. is obvious.
2. Suppose that an internal edge is labeled as a =-leaf, by i. Then there are two possible
shapes of the branch around the i th leaf:
In the first case, we have there is no a 0 between a and b,
we have r b Otherwise, if there are a 0 ; a 00 ; ::: 2 A(y) such that a ! a
we have r b
(y).
In the second case, we have i 2 f1; :::; ng n [A(y) [ B(y)], and the i th -leaf is grafted into
a n-leaf labeled, say, by a, so a there are no a 0 between a and i, we have
Otherwise, if there are a 0 ; a 00 ; ::: 2 A(y) such that a ! a
(y).
3. The position of an internal edge which is labeled as n-leaf is very peculiar. Suppose that
it is labeled by a. Then there must be an index a 0 ! a such that the internal edge a starts at
the intersection between the n-leaf a 0 and the =-leaf b(a). Thus there are only two possible
shapes of the branch around the a th leaf, and r a acts as follows:
a' a b=b'
a 7! r a
a' a b b'
and for
a' a b b'
a 7! r a
a' a b b'
It is then clear that r a (y) can never be obtained by bifurcating a leaf: the branch
a b
obstructs
it. So r a (y) 6= s a 00 (y) for any a 00 6= a.
Now consider the right grafter into another n-leaf. Again the internal edge is placed in a
peculiar position, such as the one labeled by a. Let us consider the 8 mutual positions of two
internal edges labeled by a and a 0 . Suppose a
a' a b=b'
a'
a
a' a b=b'
a'
a
a' a b b'
a'
a
a' a b b'
a'
a
a' b' a b
a' a
a' b' a b
a'
a
a' b' a b
a' a
a' b' a b
a' a
One can check that on these 8 trees we always have r a 6= r a 0 , so finally r a is always different
from r a 0
. 2
Since any map r i coincide with some degeneracy, for ng n A(y), we give the
commutation relations between r a and the faces d i only for a 2 A(y).
Lemma. The right grafting operators satisfy the relations
r
r a d i ; for a - i - b(a),
r a d
(r) r a r a 0
r a 0 +1 r a ; for a ! a 0 and b ! b 0 ,
r a 0 r a ; for a ! a 0 and b - b 0 ,
(Notice that the last relation involves an edge labeled as a =-leaf.)
Proof. (dr) For any a 2 A(y), the operator r a can act on the two basic trees (1) and (2) of
lemma (3.5). Relations (dr) can be checked on (1) with the help of the following observations.
a, the leaf number a of (1) is labeled by a \Gamma 1 in d i (1), hence also the internal edge
labeled by a in (1) becomes a \Gamma 1 in d i (1).
ffl If a - i - b, the edge labeled by a in (1) remains labeled by a in d i (1).
1, the face d b+1 deletes precisely the leaf which has just been added by r a .
1, the edge labeled by a in (1) remains labeled by a in d i (1), but the leaf number
deleted in r a (1) by the face d i was labeled by
The same observations hold for the tree (2).
Check directly on the 8 trees (ij) and [ij], for of lemma (3.5). 2
Decomposition of the vertical complexes into towers.
towers. Let y be a (p; q)-tree. We call vertical tower over y the family T [y],
For example, the tree
Thus
ae
oe
and so on. To simplify the notation, we use the same symbol Tm [y] to denote the subset of trees
and the k-module spanned by these trees. In general, a vertical tower is not a vertical complex.
Lemma. For any (p; q)-tree y, the vertical tower T [y] is closed for the vertical faces d v
if and only if d v
Proof. (i) Assume that d v
1. We show that if y 0 belongs to Tm [y]
such that d v
the tree d v
belongs to Tm\Gamma1 [y]. We proceed by induction on m.
ffl First assume that y by definition of vertical tower we know that y
or there exists an index i 2 f1; :::; pg such that y 0 is equal either to s a i (y) or to r a i (y). Now
consider a 2g such that d v
s a i d v
for a i possibly equal also to 0, or
k r a i
r a i
r a i d v
r a i d v
In conclusion we have that d v
ffl Assume now that for any tree y 00 2 Tm\Gamma1 [y], we have d v
We show that the same holds for any tree y
In fact y 0 must be equal either to s 0 (-y), s a i
(-y) or to r a i
(-y), for an index i 2 f1; :::; pg, with
Thus, in the first case (for a i also equal to 0)
k s a i
s a i d v
belongs to Tm\Gamma1 [y], because for inductive hypothesis d v
and in the second case
k r a i
r a i
r a i d v
belongs to Tm\Gamma1 [y] for the same reason.
(ii) Assume now that there exists an index k 2 f0; :::; p+q+1g such that d v
We show that the tower over y is not closed for the vertical faces. In fact:
0 (y) is a tree
in Y p;q which is different from y, i.e. it does not belong to T 0 [y].
a i
a i +1 (y) is a
tree in Y p;q different from y.
ffl If a
k (y) is a
tree in Y p;q different from y. 2
Base trees. For any p - 0, we call (p; )-base tree any (p; q)-tree y such that d v
for all By (3.8), the vertical tower constructed on a base tree is a vertical complex.
3.10 - Lemma-Notation. There is a bijective correspondence between the set Y p and the
set of (p; )-base trees. Therefore we denote by T (y) the tower T [~y] on the (p; )-base tree ~y
corresponding to the p-tree y. Moreover, the number of =-leaves of a p-tree y is equal to the
number of =-leaves of its associated base tree ~y.
Proof. Let
G
be the map which sends a tree y into the tree '(y) obtained by bifurcating all the =-leaves. The
map ' is clearly injective, let us show that it is well defined.
ffl If y is a p-tree, then the tree '(y) has exactely p leaves oriented like n. In fact, suppose
that the p-tree y lies in the component Y r;s of Y p , i.e. y has r internal n-leaves and s internal
=-leaves, with r p. Then the tree '(y) has the original r n-leaves, and the new s
n-leaves appearing after the bifurcation of the s =-leaves.
ffl If y is a p-tree, then the tree '(y) can have at most =-leaves. In fact, the
=-leaves of '(y) are exactely the s original =-leaves of the p-tree y belonging to the component
Y r;s , and clearly Hence '(y) belongs to the union
F
ffl Let us show that if '(y) belongs to the set Y p;s , then d v
If the index i labels a =-leaf of '(y), it comes by construction from a bifurcated =-leaf of y, thus
produces a tree with the same number of =-leaves, and a n-leaf less. When the index i labels
a n-leaf of '(y), the face d i clearly deletes a n-leaf unless the th leaf is a = leaf which is
grafted into the i th leaf, and this is impossible in the tree '(y), because by construction any
=-leaf is preceded by a n-leaf which is grafted into the =-leaf, and not the opposite.
Finally, to prove that the map ' is a bijection, we show that the surjective map
G
Y p;q \Gamma! Y p
which deletes all the =-leaves, included the last one, is inverse to ' when restricted to the subset
of trees with d v
1. The composition / ffi ' is clearly the identity
map on Y p . On the other side, let y be a (p; q)-tree, for some q - 0. By construction, the tree
'/(y) is obtained by deleting all the =-leaves from y, and then replacing all the new =-leaves
with bifurcations. Thus y and '/(y) can only differ for some =-leaf, say labeled by k, such
that the leaf labeled by k \Gamma 1 is not a n-leaf grafting into it. Any such leaf produces a vertical
non-zero face d v
k . Since the domain of / is restricted to the trees with only zero vertical faces,
the trees y and '/(y) must coincide. 2
3.11 - Theorem. For any p - 0, the vertical complex (k[Y p; ]; d v ) is the direct sum of towers,
shifted by the number q y of =-leaves of y.
Proof. By construction, the sum of the towers T (y) is direct. We show that it covers the
whole vertical complex k[Y p; ], i.e. that for any y 2 Y p;q , there exists a tree y
such that
be its (p; )-base tree. Then
y differs from ~
y 0 for some =-leaves which are not labelled by any b i , with In fact,
by definition, any tree in degree m is obtained by adding a =-leaf to a tree in degree
means of the maps s a or r a . Thus y belongs to Tm [ ~
. 2
3.12 - Proposition. The tower T (y), associated to a p-tree y, is the total complex of a multi
complex with dimension 1. Hence the number of its direct summands, at any degree
- 0, is given by the binomial coefficient
Proof. Apply definition (3.7) and remark, after (3.6) and (3.5), that 2p + 1 is precisely the
number of distinct maps which can act on a tree with p n-leaves by adding a =-leaf. 2
Drawings of vertical towers.
3.13 - Vertical tower for The vertical complex k[Y 0; ] coincides with the tower T ( )
with base , and it is pre-simplicial since all the faces are non-zero (fig. 4).
Figure
4: Vertical tower T ( ) with base .
3.14 - Vertical tower for 1. The vertical complex k[Y coincide with the tower T ( )
with base . This complex is the total of a multi-complex with dimension (fig. 5).
6 I
\Phi-
III
\Gammad 3
Figure
5: Vertical tower T ( ) with base .
3.15 - Vertical towers for 2. The set Y 2 contains two trees, and , associated
respectively to the base trees and . Hence k[Y 2; the two
towers are multi-complexes with dimension (fig. 6 and fig. 7).
3. The set Y 3 contains five trees, , , , and ,
which correspond, respectively, to the five following base trees:
Hence k[Y 3; ] is the direct sum of five vertical towers, based on these five trees, which are
multi-complexes with dimension 7.
In a similar way one can proceed for p ? 3. Each vertical complex k[Y is the direct sum of
vertical towers (where c p is the number of p-trees), and each vertical tower is a multi-complex
with dimension
Qk
I
6 II
III
@
@
@
@
@
@
@
@
@
@
@
@ @R
IV
\Gamma\Psi
@
@
@
@
@
@
@
@
@
@
@
@
@
@
Figure
Vertical tower T ( ) with base .
Qk
I
6 II
III
@
@
@
@
@
@
@
@
@
@
@
@ @R
IV
\Gamma\Psi
@
@
@
@
@
@
@
@
@
@
@
@
@
@
\Gammad 3
\Gammad 5
Figure
7: Vertical tower T ( ) with base .
A
Appendix
. An invariant of the towers
In this appendix we show that the classes of trees are in bijection with certain classes of set
maps. From this construction it is then clear that the number of =-leaves of a tree y characterizes
the shape of all trees belonging to the vertical tower T (y) associated to y.
A.1 - Proposition. For any p; q - 0, there is a bijective correspondence between (p; q)-trees
and pairs of set maps satisfying the following
conditions:
1. if i !j then a(i)!a(j), hence the map a is monotone strictly increasing;
2. a(i)!b(i) for any i, in particular the maps a and b have disjoint image;
3. if i !j and a(j)!b(i), then b(i)-b(j) (equivalently, if i !j and b(i)!b(j) then b(i)!a(j)).
Proof. (i) Let us show that for any (p; q)-tree y, the set maps ng defined
in (3.4), with which label the oriented leaves of y, satisfy conditions 1, 2, 3. The
first two conditions are evident: 1 means that the p n-leaves are distinct, and 2 means that any
n-leaf is distinct from the =-leaf into which is grafted. Condition 3 is due to the facts that any
n-leaf cannot coincide with any =-leaf, so b i 6= a j , and that for the relation
would correspond to the following impossible picture:
a b
a b
ng be two maps satisfying conditions 1, 2,
1. Then we can construct a tree y with the following algorithm.
label them from 0 to p 1. Draw an edge n from the 0-th
leaf, an edge = from the last leaf and the root.
ffl From any leaf labelled by a(i), draw an edge n and graft it into an edge = drawn from the
leaf labelled by b(i). Extend all the edges untill they reach an edge of opposite orientation.
ffl From any remaining leaf, draw an =-edge, and reach an n-edge.
None of these operations has any freedom of choice, so the tree thus obtained is uniquely deter-
mined, and it is clearly described by the given maps a; b. 2
Here is an example of the algorithm above. Let 2. Choose two
maps according to conditions 1, 2, 3 of (A.1), for instance,
5:
Now follow the three steps in the drawing.
A.2 - Blocks. The map b is not necessarily monotone. However we can say that it is "block"
monotone, since it satisfies
4. For any triple of indices k such that b(i) ! b(j), it is impossible that b(k) - b(i).
This condition says that whenever the map b satisfies the inequality sign
"!" separates two blocks in the image of b, given, respectively, by indices preceding and following
the inequality sign. This follows easily from the above conditions 1, 2, 3. By 3, the inequality
implies that b(i) ! a(j). Condition 2 says that a(k) ! b(k). Thus, combining the two
inequalities and the thesis, we obtain which is impossible because, by
Remark that the number of blocks of the (p; q)-tree associated to the maps a and b can vary
between 1 and p, for p ? 0, and is assumed to be 1 for
A.3 - Proposition. All the trees belonging to a vertical tower T (y) have the same number
blocks, where q y is the number of =-leaves of the tree y.
Hence the number q y has a geometrical meaning which is invariant in the vertical tower T (y)
constructed over y, being related to the number of blocks of leaves of any tree in the tower.
Proof. If a p-tree y has q y -leaves, by (3.10) we know that its associated oriented tree ~y is a
)-tree. The tower T (y) is based on this tree, a nd by construction the tree ~y is the one
with minimal number of =-leaves in the tower. Grafting new =-leaves into any n-leaf does not
affect the ordering of the indices b i , and hence of the number of blocks. Thus we only need to
show that ~y itself has qy
Denote by b y the number of blocks of the (p; q y )-tree ~y. By definition (A.2), the number of
cases the total number of cases between the p indices b i is
y is the number of cases (i that ~y is the tree with
less =-leaves in the tower). This means that y is the number of coinciding =-leaves among
possible ones. Therefore the total number of distinct =-leaves of ~y (included the last one) is
--R
Some remarks and results on Catalan numbers
Historical note on a recurrent combinatorial problem
Dialgebra homology of associative algebras
Research bibliography of two special number sequences
Homoopyy of pseudosimplicial groups
The art of computer programming I.
"Category theory, homology theory and applications"
--TR
Concrete mathematics: a foundation for computer science
The art of computer programming, volume 1 (3rd ed.) | almost-simplicial sets;planar binary trees |
370338 | Partial Evaluation of Views. | Many database applications and environments, such as mediation over heterogeneous database sources and data warehousing for decision support, lead to complex queries. Queries are often nested, defined over previously defined views, and may involve unions. There are good reasons why one might want to remove pieces (sub-queries or sub-views) from such queries: some sub-views of a query may be effectively cached from previous queries, or may be materialized views; some may be known to evaluate empty, by reasoning over the integrity constraints; and some may match protected queries, which for security cannot be evaluated for all users.In this paper, we present a new evaluation strategy with respect to queries defined over views, which we call tuple-tagging, that allows for an efficient removal of sub-views from the query. Other approaches to this are to rewrite the query so the sub-views to be removed are effectively gone, then to evaluate the rewritten query. With the tuple tagging evaluation, no rewrite of the original query is necessary.We describe formally a discounted query (a query with sub-views marked that are to be considered as removed), present the tuple tagging algorithm for evaluating discounted queries, provide an analysis of the algorithm's performance, and present some experimental results. These results strongly support the tuple-tagging algorithm both as an efficient means to effectively remove sub-views from a view query during evaluation, and as a viable optimization strategy for certain applications. The experiments also suggest that rewrite techniques for this may perform worse than the evaluation of the original query, and much worse than the tuple tagging approach. | Introduction
1.1 Motivation and Objectives
Many current database applications and environments, as mediation over heterogeneous database sources
and data warehousing for decision support, incur complex queries. Queries are often nested, defined over
previously defined views, and may involve unions. special type of such queries called fusion queries,
which are self-joins of views defined over unions, was discussed in [25]). This is a necessity in mediation, as
views in the meta-schema are defined to combine data from disparate sources. In these environments, view
definition maintenance is of paramount importance.
There are many reasons why one might want to "remove" pieces (sub-queries) from a given query. Let us call
a "sub-query" an unfolding, as the query can be unfolded via view definition into more specific sub-queries.
These reasons include the following:
1. Some unfoldings of the query may be effectively cached from previous queries [5, 9], or may be materialized
views [16] themselves.
2. Some unfoldings may be known to evaluate empty, by reasoning over the integrity constraints [1, 3].
3. Some unfoldings may match protected queries, which for security cannot be evaluated for all users [22].
4. Some unfoldings may be subsumed by previously asked queries, so are not of interest.
5. An unfolding shared by two queries in an except (difference) operation can be removed from both
queries before the operation is carried out.
d
a
e
Figure
1: The AND/OR tree representation of the original query.
What does it mean to remove unfoldings from a query? The modified query should not subsume-and thus
when evaluated should never evaluate-the removed unfoldings, but should subsume "everything" else of
the original query.
In case 1, one might want to separate out certain unfoldings, because they can be evaluated much less
expensively, and in a networked, distributed environment be evaluated locally. Then, a "remainder query"
can be evaluated independently to find the remaining answers [5]. If the remainder query is less expensive
to evaluate than the original, this is an optimization. In case 2, the unfoldings are free to evaluate, since it
is known in advance that they must evaluate empty. In case 3, when some unfoldings are protected, it does
not mean that the "rest" of the query cannot be safely evaluated. In case 4, when a user is asking a series of
queries, he or she may just be interested in the stream of answers returned. So any previously seen answers
are no longer of interest. In case 5, except queries might be optimized by this technique.
Consider the following example.
Example 1. Let there be six relations defined in the database DB: Departments(did, address), Insti-
tutes(did, address) Faculty(eid, did, rank), Staff(eid, did, position), Health Ins(eid, premium,
provider), and Life ins(eid, premium, provider).
There are also three views defined in terms of these relations:
create view create view create view
Academic Units as Employees as Benefits as
(select did, address (select eid, did (select eid, premium, provider
from Departments) from Faculty) from Life Ins)
union union union
(select did, address (select eid, did (select eid, premium, provider
from Institutes) from Staff) from Health ins)
Define the following query Q:
Q: select E.eid
from Academic Units A, Employees E, Benefits B
where A.did=E.did and E.eid=B.eid and B.provider="Blue Cross"
Query Q can be represented as a parse tree of its relational algebra representation, which is an AND/OR
tree, as shown in Figure 1 (we ignore for brevity explicit representation of project and select operations).
Evaluating the query-in the order of operations as specified in its relational algebra representation-is
equivalent to materializing all nodes of the query tree. This type of evaluation (and representation) is
referred to as bottom-up.
Now assume that the the following query F has been asked before and its result is stored in cache.
select E.eid
from Departments D, Faculty F, Life Ins L
where D.did=F.did and F.eid=L.eid and L.provider="Blue Cross"
Figure
2: The AND/OR tree representation of the modified query.
Equivalently, we can assume that this formula represents a materialized view or that it matches a protected
query whose answers should not be displayed. If F is a protected query, then the join expression
it computes should not be evaluated. Thus, it has to be eliminated from the query. Otherwise, if F
is a cached query or a materialized view, it may still be beneficial to "remove" it from the query. We
call the result of this a discounted query.
One way to achieve this is to rewrite Q as a union of joins over base tables, then to remove the
join represented by F (which then is explicitly present), and finally to evaluate the remaining join
expressions. This may be very inefficient, however. The number of join expressions that remain to be
evaluated may be exponential in the size of the collection of view definitions. Furthermore, we have
shown in [11] that such an evaluation plan (which is often called top-down) may require evaluating
the same joins many times, and incur the expense that a given tuple may be computed many times
(whenever the projected parts of base tables overlap significantly). A top-down evaluation of Q from
Figure
1 is the union of the eight following join expressions:
Departments 1 Faculty 1 Life Ins Departments 1 Faculty 1 Health Ins
Departments 1 Staff 1 Life Ins Departments 1 Staff 1 Health Ins
Institutes 1 Faculty 1 Life Ins Institutes 1 Faculty 1 Health Ins
Institutes 1 Staff 1 Life Ins Institutes 1 Staff 1 Health Ins
A seemingly more efficient evaluation plan for the discounted query can be devised by rewriting the
query so that the number of operations (unions and joins) is minimized. (See Figure 2.) As a side effect
of this approach, the redundancy in join evaluation, as well as redundancy in answer computation, is
reduced. However, such redundancy is not entirely eliminated: for example, the join Institutes 1
Faculty is computed twice (implicitly in the left sub-tree and explicitly in the right sub-tree). One
can verify that there is no rewrite of the query tree that removes the join Departments 1 Faculty
Ins and yet guarantees at the same time that there is no redundancy in the computation of
other joins. Thus, the "optimized" query can sometimes cost more to evaluate than the original query;
our experimental results show (see Section 4) that this is, indeed, the case for this example.
In this paper, we present a new strategy for partial evaluation of queries defined over views, which we call
tuple-tagging, which offers many advantages over explicit query rewrites.
ffl Tuple-tagging is easily implemented at the logical level, and can be accomplished by rewriting the SQL
expression of the query.
ffl The technique is modular: it can be implemented independently of other optimization stages (in
particular, of traditional relational database query optimization) to work in conjunction with the other
stages.
ffl Tuple-tagging is not an algebraic rewrite of the query: it preserves the query tree as is, and thus scales
up for complex queries.
ffl The tuple-tagging is interleaved with query evaluation. Thus, reliable heuristics can be devised and
employed to decide step-wise whether a given "optimization" step should be applied.
The paper is organized as follows. Section 2 presents a formal framework for discounted queries (that section
can be skipped on the first reading). Section 3 presents a tuple-tagging technique for evaluating discounted
queries. Performance analysis of the technique, and experimental results over a TPC/D benchmark database
in DB2, are presented in Section 4. Section 5 concludes with issues and future work.
1.2 Related Work
There is a substantial body of research in rewrite based query optimization [4, 6, 7, 19]. However, all of
the techniques discussed in literature considered rewriting a query into a logically equivalent form. This is
a different goal than the one we consider here: we are interested in generating and efficiently evaluating a
query (view) from which an unfolding has been removed. Thus, the resulting query is not equivalent to the
original query.
The problem we address is also different from the problem of answering queries using materialized views [2,
14, 16, 20]. In the latter problem, the goal is to replace subqueries of a query by views (or other queries)
to generate a query equivalent or contained in the original query. Our goal, again, is to remove, or more
precisely to avoid evaluating, parts of a query for optimization or security reasons.
The work that is most closely related to our work here is [15], in which the authors consider queries that
involve nested union operations. They propose a technique for rewriting such queries, when it is known that
some of the joins evaluated as part of the query are empty. The technique applies, however, only to a simple
class of queries, and no complexity issues are addressed.
Another research area related to our problem is multiple query optimization [21]. The goal here is to optimize
evaluation of a set of queries, rather than a single query. Since the queries in the set are arbitrary, they may
not be related in any structured way to allow use of the techniques that we propose here. The developed
techniques for multiple query optimization are focused towards finding and reusing common subexpressions
across collections of queries and are heuristics-based.
The problem of query tree rewrites for the purpose of optimization has also been considered in the context
of deductive databases with recursion. In [12], the problem of detecting and eliminating redundant subgoal
occurrences in proof trees generated by programs in the presence of functional dependencies is discussed. In
[13], the residue method of [1] is extended to recursive queries.
We investigated the computational complexity of query rewrites in [10]. We showed that the optimal rewrite
of a query is NP-hard. We also identified a special class of queries and unfoldings for which rewrites result
is a simpler query, thus always providing an optimization.
Discounted Queries
In this section, we formally define the notion of an unfolding and a discounted query. For preciseness, we use
the notation of Datalog [24]. We will write a query as a set of atoms, to be interpreted as a conjunction of
the atoms. For instance, fa; e; bg represents the query of Example 1. 1 Some of the atoms may be intensional;
that is, they are written with view predicates defined over base table predicates and, perhaps, other views.
Definition 2. Given query sets Q and U , call U a 1-step unfolding of query set Q with respect to database
DB iff, given some q i 2 Q and a rule ha / b defining a view for a, such that q i ' j a' (for
most general unifier ' [17]), then
We present only "propositional" examples for simplicity's sake. They can be extended in a obvious way to queries with the
variables made explicit.
a
e
l h
(1)
(2) (1)
(2)
(1)
(2)
Figure
3: AND/OR tree of Example 1 with two unfoldings marked.
Denote this by U - 1 Q. Call U 1 simply an unfolding of Q, written as U 1 - Q, iff there is some finite
collection of query sets U
An unfolding U is called extensional iff, for every q i 2 U , atom q i refers to a base table. Call the
unfolding intensional otherwise (in other words, some of the atoms refer to views).
Example 3. The views of Example 1 can be represented in simplified Datalog (where letters represent
atoms) as:
a / d. e / f . b / l.
a / i. e / s. b / h.
Since b in the query Q can be unfolded into l using a single rule hb / l:i, then fa; e; lg is one of the
1-step unfoldings of Q.
Since all of the atoms in unfolding F of Q (d, f , and l) are extensional, it is an extensional unfolding
of Q.
It is easy to see how an unfolding's AND/OR tree can be ``inscribed'' in the query's AND/OR tree. The
atoms of an unfolding can be marked in the query's tree as shown in Figure 3 for unfoldings
and, say, bg.
A query is considered to be equivalent to the union of all its extensional unfoldings; define unfolds (Q) to
be the set of all such extensional unfoldings of Q. We can now define the concept of a discounted query,
which is to represent the query with some of its unfoldings "removed" (or discounted).
Definition 4. Given a query set Q and unfoldings U of Q, then the expression QnfU is a
discounted query. We define its meaning to be:
unfolds
unfolds (U i
We call U the unfoldings-to-discount, and the tuples in the answer sets of these unfoldings the
tuples-to-discount.
Example 5. Consider again the query Q of Example 1. Since
fd; f; lg; fd; f; hg; fd; s; lg; fd; s; hg; fi; f; lg; fi; f; hg; fi; s; lg; fi; s; hg
and
fd; f; lg
then
fd; f; hg; fd; s; lg; fd; s; hg; fi; f; lg; fi; f; hg; fi; s; lg; fi; s; hg
Similarly, for the unfolding G, fi; s; bg, in Figure 3, we have
and
fd; f; hg; fd; f; lg; fd; s; lg; fd; s; hg; fi; f; lg; fi; f; hg
Lastly,
fd; f; hg; fd; s; lg; fd; s; hg; fi; f; lg; fi; f; hg
We discuss more formally the semantics of discounted queries in [8].
3 The Tuple-Tagging Evaluation Strategy
3.1
Overview
Our strategy is a bottom-up materialization strategy for the query tree with the union and join operations
modified to account for the discounted unfoldings. The strategy ensures two things:
ffl that tuples resulting from an unfolding-to-discount do not contribute to the answer set of the discounted
query; and
ffl any join represented by an unfolding-to-discount is never fully evaluated.
The tuples resulting from an unfolding-to-discount can be removed either during or after the actual query
evaluation. To ensure the second property and thus to gain optimization, we need somehow to avoid evaluating
the unfoldings-to-discount; that is, to prevent those tuples from being materialized during query
evaluation.
Our proposed method, tuple-tagging, is to keep extra information in the temporary tables created during
the materialization of the query tree. In essence, each table will have an extra column for each unfolding-to-
discount. The domain of these tag columns is boolean. The value of a tag column for a given tuple is true
when that tuple is derived from the corresponding unfolding-to-discount; it is false otherwise. During each
union or join operation (which creates a new temporary table), these tag columns' values must be maintained
properly.
Example 6. Consider query Q of Example 1. Query F , which represents the unfolding-to-discount, is a
join of Departments, Faculty and Life Ins. For each of these tables, a new column CF is added
and its values initialized to true. Similarly, for tables Institutes, Staff, and Health Ins which are
unioned with the above tables, the same column is added and its values initialized to false.
By keeping this derivation information for each tuple during evaluation, we can easily ensure the first property
from above: after evaluation of the query, select those tuples which have all false values in the tag columns
(and also project away the tag columns). We shall be able further to use the tag columns-and ultimately
satisfy our second property-to determine during a join operation which tuples should be joined, and which
should not be (because the resulting tuple would be "from" an unfolding-to-discount). The computation
saved will primarily depend on the size of the true section in the table.
We present the evaluation strategy in two versions. The simpler version ensures only the first property;
that is, the final answer set contains no tuples that arise solely from unfoldings-to-discount. The strategy
is useful in the case when unfoldings are removed for security reasons. It does little, however, to optimize
query evaluation: gross savings are equal to the difference in the cost of writing back the results of the
original query versus the cost of writing back the results of the discounted query (which can sometimes be
new tag columns on a need-by basis.
For each U i
For
If A 2 U i then
Add column CU i
to
Instantiate all values of CU i
in to true.
For each U i
For
If CU i
belongs to but not to TB then
Add column CU i
to
Instantiate all values of CU i
in TB to false.
% Union the two tables.
Union TLN and TRN to create TN .
Algorithm 1: The Modified Union Operation
significantly smaller). These savings can only be substantial when query results are sent over a network. The
second version removes the tuples-to-discount during query evaluation, as soon as is possible. This strategy
can reduce the cost of query evaluation. Both versions of the algorithm require modifying the union and
join operations. This is what we define next.
3.2 The Modified Union Operation
We assume that the query tree contains only union and join nodes (that is, all other operations are implicit).
Furthermore, without loss of generality, we assume the the tree is binary. We refer to one child of any branch
(non-leaf) node N in a binary query tree as LN (for left child), and the other as RN (for right child). We
assume that any leaf N in the query tree has a corresponding table in the database; that is, the answer set
for N is derivable from some table (perhaps temporary) in the database via selects and projections. Call
N 's table (with any selects and projections implicit) TN .
To handle discounted queries, we modify the traditional algorithm for bottom-up query evaluation. This
involves replacing the union and join operations with specialized versions, which handle and exploit the
tag columns for the unfoldings-to-discount, as discussed above. Given discounted query QnfU
we introduce new columns, CU i
as the tag columns corresponding to the unfoldings-to-
discount (as described in Example 6).
We assume for any well-formed query tree that the tables to be unioned at any union step are union-
compatible. With our addition of tag columns, this could now be violated. The two tables to be unioned
may not be union-compatible over the tag columns. Thus, we need to modify the union step first to make
the tables union-compatible by adding any tag columns that are needed. Algorithm 1 shows this. This is
the only way in which we need to modify the union step.
Note that Algorithm 1 can be efficiently implemented in SQL by adding tag columns and initializing their
values not before, but during the execution of the union operator. As we show in Section 4, this adds very
little overhead to the cost of the query execution.
3.3 The Modified Join Operation
We must assign the correct values to tag columns of joined tables. If a tuple results from the join of one
tuple which was derived under a given unfolding-to-discount U (hence the value of its CU is true), and a
second tuple which was not (hence the value of its CU is false), then the resulting tuple is not in the answer
set of U . So CU for the resulting tuple should have the value false. Only when both tuples being joined
were derived under U should the resulting tuple's column CU be set to true. Thus, tags are conjunctively
combined.
Let N be a join node in a query tree for which the children are LN and RN and let U be an unfoldings-to-
discount with the tag column CU in both tables TLN and TRN . The modified join operation executed at
node N is modified by adding the following assignment statement 2 for each unfolding-to-discount U :
Example 7. Consider the final join of the query of Example 1. The three joined tables, Academic Units,
Employees, Benefits will each contain an extra column, CF , storing the values for the unfolding-
to-discount F . This column has been introduced during the execution of the union operations (as
described in Algorithm 1). The query with the modified join operation is as follows.
select E.eid, (A.CF AND E.CF AND B.CF ) as CF
from Academic Units A, Employees E, Benefits B
where A.did=E.did and E.eid=B.eid and B.provider="Blue Cross"
The modified union and join operations have no influence (except for adding extra columns) on the final
answer set of a query. Their only purpose is to keep the trace information about the unfoldings-to-discount
via the tag columns. The last step of the tuple-tagging algorithm in its first version then consists in using
this information to select only the tuples that are known to be derivable from some unfolding other than an
unfolding-to-discount. To ensure this, it is sufficient to select the tuples that have the value false for all their
tag columns.
We show that the tuple-tagging algorithm (that is, the modified union and join operations) is correct. To
prove it we need to show that all tuples, and only tuples, having the value true in a tag column for a given
unfolding-to-discount in the final answer set of a query are the tuples from the answer set of that unfolding.
Theorem 8. Let QnfU be a discounted query and let denote the answer set
(table) representing the result of evaluating QnfU using the tuple-tagging algorithm. Then,
for any tuple -
Proof. Assume without loss of generality that all unfoldings-to-discount are extensional (an intensional
unfolding can be always represented as a union of extensional unfoldings). Let unfolding U i be a join
Consider a tuple - which is marked true in the column for unfolding U i , that is, -:CU i
true. Assume
that - 62 cannot be that the initial value (set by the
modified union operator) for CU i
is true for each of R Thus, if at least one of them is false,
then the value of CU i
will be changed to false (and will remain such) sometime during the evaluation
of joins. This contradicts our assumption, hence - 2
2 The as statement (as per the SQL'92 standards [18]) perform the requisite logical ands between tag columns and introduces
the tag columns back into the new table.
F
Benefits
Academic Units Employees
Institutes
Figure
4: Final join for the query of Example 9.
Assume
will be initialized to true for each of R Since the modified join
operator assigns a conjunction of the values of CU i
from the join tables, CU i
will remain true for the
duration of the evaluation process (note that the modified union operator never changes values for tag
columns). Hence, -:CU i
3.4 Optimization
As stated in Section 3.1, removing tuples-to-discount from the final query answer set according to the
optimization described above does not, in general, improve efficiency of query evaluation. For complex
queries, however, such removal can be executed during query evaluation; that is, before the final answer set
is produced. In other words, we can push some of the selects for false over the tag columns further down in
the query tree. This constitutes the tuple-tagging algorithm in its second version. Consider the following
example.
Example 9. Let the query be as in Example 1 and the unfolding-to-discount be (marked as (2)
in
Figure
3). Thus, all tuples in the join Intitutes 1 Staff 1 Benefits should be removed from the
query's answer set. Let us assume that the final join of the query Academic Units 1 Employees
1 Benefits is executed as specified in the query tree (that is, left to right). Consider the result of
evaluating Academic Units 1 Employees. Some tuples in the result of that join will have the value
true for the column C G (see Figure 4). Now, all tuples in the table for Benefits have the value true
for that column (because the table is a part of the unfolding-to-discount G). Thus, all of the tuples
marked true in the result of the join Academic Units 1 Employees will remain true after the join
with Benefits, hence will be removed from the final query answer set. 3 If so, they can be eliminated as
soon as Academic Units 1 Employees is evaluated. Note that this provides optimization because
the size of one of the tables in the input to the final join decreases. The gross savings achieved through
this optimization can be estimated to the cost of the join of the result of Institutes 1 Staff with
Benefits (marked with dotted lines in Figure 4).
We introduce the notion of a closing of an unfolding-to-discount by a node in a query tree.
Definition 10. Unfolding-to-discount U is closed by node N with respect to binary query tree QT if all
tuple marked true in TN contribute to all and only tuples of U .
3 In the final tuple-tagging query plan, we would not even need to add a tag column CG to Benefits for this very reason.
Thus, the node representing the join of Academic Units 1 Employees is a closing node of unfolding-to-
discount G in Example 9. Of course, the root of a query tree is a closing node for all unfoldings-to-discount.
Next, we prove a theorem that specifies a method of identifying closing nodes in a query tree.
Theorem 11. Unfolding U is closed by node N with respect to binary query tree QT if every node in U
that cannot be reached from the root of QT through join nodes only, is in the subtree rooted by N .
Proof. Consider the nodes of U that do not lie under N . None of these nodes contain a column for U and
since none of these tables will be used in a union (because there are join nodes only between these
nodes and the query tree root) such columns will never be created for the tuples from these tables.
Hence, no tuple in the table represented by the node N could contribute to a change in the value in
the column CU when they are joined during evaluation with other tables represented by the nodes of
U . Thus, the tuples with the value true for any unfoldings closed in N can be removed immediately
after (or while) the table TN is materialized. 2
Since there may be several closing nodes for a given unfolding, it is useful to identify the first one (in the
sequence of operations specified by the query tree) in order to eliminate tuples-to-discount as soon as possible.
Again, the condition for this property is simple. If N is a closing node for U and U does not have a closing
node in the subtree rooted at N , then N is the first of the closing nodes for U .
The tuple-tagging algorithm can utilize (as described in Example 9) the existence of closing nodes not only
to eliminate unfoldings-to-discount, but also to provide optimization. Our experimental results confirm that
this is indeed the case.
Performance Analysis
4.1 Experiments
The purpose of our experiments is threefold:
1. to evaluate the overhead (in query evaluation time) introduced by the tuple-tagging algorithm through
the modification of union and join operations;
2. to compare the performance of tuple-tagging versus query rewrite techniques in eliminating unfoldings-
to-discount; and
3. to evaluate the optimization of query execution through tuple-tagging.
We used TPC/D benchmark database of size 100MB for our experiments (for details on this benchmark
see [23]) installed on DB2 in Windows-NT. In order to be able to define views with unions, we modified
slightly the TPC/D schema. Thus, the three tables Supplier, Partsupp, and Lineitem, which are base
tables in the TPC/D schema, have been partitioned horizontally in half, and new base tables representing
each of the sub-tables (Supplier1 and Supplier2, and so forth) have been created. Then, views Supplier-
v, Partsupp-v, and Lineitem-v have been defined as unions over Supplier1 and Supplier2, Partsupp1
and Partsupp2, and Lineitem1 and Lineitem2, respectively. Thus each of the views has exactly the same
content as the original tables in the TPC/D benchmark database.
We tested several versions of the following three queries.
select *
from Supplier-v S, Partsupp-v P from Supplier-v S, Partsupp-v P
where S.suppkey=P.suppkey where S.suppkey=P.suppkey
select *
(1)
(1) (1) (3)
(2) (2)
(2)
Figure
5: Query tree for Q 2 with marked unfoldings-to-discount.
from Supplier-v S, Partsupp-v P,
where S.suppkey=P.suppkey and
P.partkey=L.partkey and
For
1 , we defined one unfolding-to-discount:
ffl U 0 , a join of Supplier1 and Partsupp1.
For we defined the following three unfoldings-to-discount (marked in the query tree in Figure 5):
1. U 1 , a join of Supplier1, Lineitem1, and Partsupp1
2. U 2 , a join of Supplier2, Lineitem2, and Partsupp-v
3. U 3 , a join of Supplier2, Lineitem2, and Partsupp2
Then the following discounted queries were tested:
g.
Note that Q 2 and Q have exactly the same structure as, respectively, Q and QnfFg of Example 1.
Similarly, has the same structure as QnfGg of Example 9.
Since DB2 does not support boolean data types, we used integers (0 and 1) for tags, and we used multiplication
instead of logical AND for their manipulation. For example, the query Q evaluated under
the tuple-tagging strategy has the following form:
select *
from Supplier-v S, Partsupp-v P, Lineitem-v L
where S.suppkey=P.suppkey and P.partkey=L.partkey and P.suppkey=L.suppkey
and S.tag1 * P.tag1 *
Here tag1 and tag2 stand for tags for U 1 and U 3 respectively, and their values have been set up in the
definitions of the views Supplier-v, Partsupp-v, and Lineitem-v.
Table
1 presents the results of tests measuring the evaluation cost of several queries. We used both tuple-
tagging and explicit rewrites in evaluating the discounted queries. Thus, the suffixes tag, rew, and top mean
that a discounted query was evaluated using respectively tuple-tagging, explicit rewrite of its query tree to
minimize the number of operations (as described in Example 1), and the top-down approach (that is, by
evaluating all joins of base tables). In particular, the query tree for rew has the same structure as
the query tree of Figure 2. Similarly, Q rew has the following form: 4
AllAttributes [(Supplier1 [
4 We express this in relational algebra only for brevity.
Query Total Execution Time (s) Number of Retrieved Rows
tag 3.495 1
tag 17.5 7242
tag 364.38
rew 582.82 65219
tag 356.18
rew 482.65 55537
tag 298.88 56002
Table
1: Experimental Results
One can easily verify that this is the most compact representation of Q g. Recall that Q 2 nfU 1 g top
was evaluated as a union of all its extensional unfoldings-that is, a union over the joins of base tables-and
similarly for top .
4.2 Discussion
The purpose of testing Q 1 and Q 1 nfU 0 g is to measure the overhead of adding and manipulating tags. The
query is designed in such a way that no optimization in query execution time can possibly be achieved by
using tags. This is done by making the root the closing node for U 0 in Q 1 nfU 0 g (so that the joined tables
are of identical sizes as in Q) and minimizing the size of the answer set (thus making sure that there is no
benefit in writing less data back to a disk). Indeed, query execution time for is 3.495 seconds which
is larger than that for Q 1 , which is 3.485 seconds. The good news is that the difference is negligible: 0.29%
in the case of Q g. Once the size of the answer becomes substantial, tuple-tagging begins to
optimize. Queries Q 0
project all attributes of the two joined tables. By reducing the size of
the answer set, Q 0
provides optimization in query execution time over Q 0
vs. 25.6 seconds for Q 0
1 .
The purpose of queries is to compare tuple-tagging with two other techniques
for removing unfolding from a query: top-down query evaluation; and an explicit rewrite of a query tree
to minimize algebraic form. Tuple-tagging outperforms the other two techniques by a respectable margin.
In fact, both the top-down approach and the explicit query rewrite approach add substantial overhead to
the cost of evaluating discounted queries (see Table 2). As we conjectured in Section 1, this is due to the
introduction of redundancy in join evaluation for both techniques. On the other hand, both and
evaluated under the tuple-tagging strategy provide modest optimization over Q 2 (see Figure 6).
This is still achieved only through the reduction of the size of the answer set, and not through reduction of
the sizes of the joined tables.
The last query, Q2 \ {U2}, provides a crucial test for our technique. The temporary table created through the join Supplier-v ⋈ Lineitem-v is a closing node for unfolding U2. This means that all tuples marked as true in that table can be eliminated before the next join (with Partsupp-v) is executed, thus reducing the cost of that last join. Indeed, the reduction of the execution cost of Q2 \ {U2} over the original query Q2 is 25.7%.

Query   Evaluation Strategy
        Tuple-Tagging   Top-Down   Explicit Rewrite

Table 2: Evaluation time (in seconds) for discounted queries under different evaluation strategies.
4.3 Heuristics
As stated in Section 1, one of the advantages of tuple-tagging over explicit query tree rewrites is that reliable
heuristics can be devised to decide step-wise whether a given "optimization" step should be applied. Such
a decision will depend on the reduction of the size of a table representing a closing node for some unfolding
(or a set of unfoldings) in the query tree. For example, in query Q2 \ {U2}, half of the tuples in Supplier-v and half of the tuples in Lineitem-v are marked true. Hence, the proportion of the tuples marked true in the join Supplier-v ⋈ Lineitem-v (assuming a uniform distribution of values) is estimated to be 25%. Once these tuples are eliminated, the cost of the join with Partsupp-v is roughly estimated to be reduced by 25% as well. We show formally how such a reduction can be estimated in general, thus providing a main
component for heuristics that can be used with the tuple-tagging algorithm.
The proportion of tuples-to-discount versus all of the tuples, p, in a table representing a closing node N
depends on three factors: the proportion of tuples tagged with true for some unfolding-to-discount closed
by N ; the number of unfoldings-to-discount closed by N ; and the size of the join (how many tables
participate). We state this dependence formally as follows. (We assume a uniform distribution of tuples-to-
discount among all tuples in each table.)
Theorem 12. Let table T0 be created through a join of tables T1, . . . , Tm, each with n tag columns C_{U_1}, . . . , C_{U_n}, and let p_j(U_i) be the proportion of tuples in table T_j with the value true in column C_{U_i}. Then p, the proportion of tuples in T0 for which all tag columns of at least one unfolding U_i have the value true, is:

    p = 1 - ∏_{i=1}^{n} ( 1 - ∏_{j=1}^{m} p_j(U_i) )        (1)
Proof. Induction on n.
1. n = 1. If there is only one unfolding-to-discount U1, then a tuple of T0 is discounted only if it was created by joining tuples τ1, . . . , τm from the tables participating in the join such that each tuple τj had the value true in column C_{U_1}. The probability of this is ∏_{j=1}^{m} p_j(U_1), which is the value of the above formula for n = 1.
2. Assume that the formula holds for n unfoldings-to-discount. We show it holds for n + 1. A tuple is discounted when n + 1 unfoldings-to-discount are considered in one of the following cases:
a. It would have been discounted when only n unfoldings-to-discount were considered.
b. It is discounted because it is in the answer set of the (n + 1)-st unfolding-to-discount.
The probability of 2a is given by the induction hypothesis as 1 - ∏_{i=1}^{n} ( 1 - ∏_{j=1}^{m} p_j(U_i) ).
Figure 6: Comparison of execution times (s) of the original query Q2 and queries evaluated with tuple-tagging (Q2 \ {U1}, Q2 \ {U2}, . . .).
The probability of 2b is a conditional probability: the tuple has not been discounted when only n unfoldings-to-discount were considered, but is discounted when the (n + 1)-st unfolding-to-discount is considered. This probability can be expressed as ∏_{i=1}^{n} ( 1 - ∏_{j=1}^{m} p_j(U_i) ) · ∏_{j=1}^{m} p_j(U_{n+1}). Thus, the probability that a tuple is discounted when n + 1 unfoldings-to-discount are considered is equal to the sum of the probabilities of cases 2a and 2b, which simplifies to 1 - ∏_{i=1}^{n+1} ( 1 - ∏_{j=1}^{m} p_j(U_i) ).
In the special case when the proportion of tuples marked true in all tables and all tag columns is identical and equal to some value P(T) (that is, p_j(U_i) = P(T) for all i and j), Formula (1) reduces to:

    p = 1 - ( 1 - P(T)^m )^n
We show the application of this result to predict the size of tables for closing nodes in our experiments.
Example 13. For queries Q2 \ {U3} and Q2 \ {U1, U3}, the root of the query tree is the closing node. The proportion of tuples marked true for both U1 and U3 is 0.5 for all three tables: Supplier-v, Partsupp-v, and Lineitem-v. Thus, for query Q2 \ {U3}, Formula (1) has the form p = 1 - (1 - 0.5^3) = 0.125. Since the size of the answer of the original query Q2 was 73943 rows, reducing it by 12.5% would produce a table of size 64700. This is very close to the actual size of 65219 rows observed for Q2 \ {U3}. Similarly, for query Q2 \ {U1, U3}, Formula (1) has the form p = 1 - (1 - 0.5^3)^2 ≈ 0.234. The predicted size of the final query answer set is thus 56566, which again is close to the actual size of 55537 for Q2 \ {U1, U3}.
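The predictions in Example 13 can be reproduced mechanically from Formula (1). The sketch below hard-codes the reported proportions and answer size; small differences from the figures quoted above are due to rounding.

from math import prod

def discount_proportion(p):
    """Formula (1): p is a list of n rows (one per unfolding-to-discount),
    each row listing the tag proportions p_j(U_i) of the m joined tables."""
    return 1 - prod(1 - prod(row) for row in p)

original_rows = 73943  # size of the answer of the original query Q2

# Q2 \ {U3}: one unfolding, proportion 0.5 in each of the three tables.
p1 = discount_proportion([[0.5, 0.5, 0.5]])
print(p1, round(original_rows * (1 - p1)))        # 0.125 -> about 64700 rows

# Both U1 and U3 discounted: two unfoldings, same proportions.
p2 = discount_proportion([[0.5, 0.5, 0.5]] * 2)
print(round(p2, 3), round(original_rows * (1 - p2)))  # ~0.234 -> ~5.66e4 rows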
The values of Formula (1) (p as a function of n) are plotted in Figure 7 for 2-way and 5-way joins with several values of P(T). Not surprisingly, the number of discounted tuples grows with the number of unfoldings-to-discount as well as with the proportion of the tuples-to-discount in the tables participating in the join. There is also, however, a strong inverse relationship between the number of tables involved in a multiway join and the number of discounted tuples. Even if each of the tables participating in the join has 30% tuples-to-discount and several different unfoldings are removed, the number of discounted tuples is very small for a 5-way join.
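The inverse effect of join width can be seen numerically with the special-case formula; the short sketch below (our own tabulation, assuming a uniform proportion P(T) = 30%) prints p for increasing numbers of unfoldings-to-discount.

# Special case of Formula (1): identical proportion P(T) in every table and
# every tag column gives p = 1 - (1 - P(T)**m)**n.
PT = 0.30
for m in (2, 5):
    row = [round(1 - (1 - PT**m)**n, 3) for n in range(1, 11)]
    print(f"{m}-way join:", row)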
5 Conclusions
In this paper, we introduced a new framework in which a query is represented as a collection of selected
unfoldings of the query and a discounted query, which represents the query with those unfoldings "removed".
Figure 7: Proportion of discounted tuples in the result of a join with respect to the number of unfoldings-to-discount; (a) 2-way join, (b) 5-way join, for values of P(T) between 10% and 90%.
The selected unfoldings may be removed for security reasons, or because their answers are readily available
(through caching or materialized views). We presented an efficient evaluation strategy for discounted queries
called tuple-tagging. We showed through experiments that a discounted query can be, in general, evaluated
more efficiently than the query itself. The experiments also suggested that rewrite techniques, which seem
to be an intuitive approach to removing unfoldings from a query, may perform worse than the evaluation of
the original query, and much worse than the tuple tagging approach. Thus the discounting framework and
the tuple-tagging algorithm offer a viable approach to optimization of queries which employ views.
There are numerous issues to explore with respect to optimization of queries over views. This type of
optimization is orthogonal to other optimization techniques, and so can be directly used in conjunction
with existing optimizers. It would be beneficial to identify the types of interaction with the traditional
query optimizer that could increase overall optimization. Currently tuple-tagging is done in a prior stage,
and optimization is applied over the resulting queries. We also need to understand better the various cost
trade-offs in tuple-tagging, and how best to balance them.
--R
Optimizing queries with materialized views.
Implementation of two semantic query optimization techniques in DB2 universal database.
Rule languages and internal algebras for rule based optimizers.
data caching and replacement.
A rule specification framework for query optimizers.
A framework for intensional query optimization.
Answering queries by semantic caches.
View disassembly.
query optimization for bottom-up evaluation
Structural query optimization - a uniform framework for semantic query optimization in deductive databases
Pushing semantics inside recursion: A general framework for semantic optimization of recursive queries.
Computing queries from derived relations.
query reformulation in deductive databases.
Answering queries using views.
Foundations of Logic Programming.
Understanding the New SQL: A Complete Guide.
Extensible/rule based query rewrite optimization in Starburst.
Query folding.
On the multiple-query optimization problem
constraint processing in a multilevel secure distributed database management system.
Transaction Processing Performance Council
Principles of Database and Knowledge-Base Systems
Fusion queries over internet databases.
--TR
A rule-based view of query optimization
Foundations of logic programming; (2nd extended ed.)
Principles of database and knowledge-base systems, Vol. I
Logic-based approach to semantic query optimization
Structural query optimization - a uniform framework for semantic query optimization in deductive databases
Extensible/rule based query rewrite optimization in Starburst
Understanding the new SQL
Answering queries using views (extended abstract)
Rule languages and internal algebras for rule-based optimizers
On the Multiple-Query Optimization Problem
Constraint Processing in a Multilevel Secure Distributed Database Management System
Fusion Queries over Internet Databases
Query Reformulation in Deductive Databases
Praire
Pushing Semantics Inside Recursion
Optimizing Queries with Materialized Views
Query Folding
View Disassembly
Implementation of Two Semantic Query Optimization Techniques in DB2 Universal Database
Semantic Data Caching and Replacement
Query Optimization for Bottom-Up Evaluation
Answering Queries by Semantic Caches
--CTR
Parke Godfrey , Jarek Gryz, View disassembly: A rewrite that extracts portions of views, Journal of Computer and System Sciences, v.73 n.6, p.941-961, September, 2007 | query rewrite;query optimization;relational databases;database mediation;TPC/D benchmark;database security;data warehousing |
371036 | Verification of Large State/Event Systems Using Compositionality and Dependency Analysis. | A state/event model is a concurrent version of Mealy machines used for describing embedded reactive systems. This paper introduces a technique that uses compositionality and dependency analysis to significantly improve the efficiency of symbolic model checking of state/event models. It makes possible automated verification of large industrial designs with the use of only modest resources (less than 5 minutes on a standard PC for a model with 1421 concurrent machines). The results of the paper are being implemented in the next version of the commercial tool visualSTATETM. | Introduction
Symbolic model checking is a powerful technique for formal verification of finite-state
concurrent systems. The technique was initially developed to verify digital
systems and for this class of systems, it has proven very efficient: hardware
systems with an extremely large number of reachable states have been verified.
However, it is not clear whether model checking is effective for other kinds of
concurrent systems as, for example, software systems. One reason that symbolic
model checking may not be as efficient is that software systems tend to be
both larger and less regularly structured than hardware. For example, many
of the results reported for verifying large hardware systems have been for linear
structures like stacks or pipelines (see, e.g., [7]) for which it is known [17] that the
size of the transition relation (when represented as an ROBDD) grows linearly
with the size of the system. Only recently have the first experiments on larger
realistic software systems been reported [3, 18].
? Supported by CIT, The Danish National Center of IT Research
?? BRICS (Basic Research in Computer Science) is a basic research center funded by
the Danish government at Aarhus and Aalborg
This paper presents a new technique that significantly improves the performance
of symbolic model checking on embedded reactive systems modeled using
a state/event model. The state/event model is a concurrent version of Mealy ma-
chines, that is, it consists of a fixed number of concurrent finite state machines
that have pairs of input events and output actions associated with the transitions
of the machines. The model is synchronous: each input event is reacted
upon by all machines in lock-step; the total output is the multi-set union of the
output actions of the individual machines. Further synchronization between the
machines is achieved by associating a guard with the transitions. Guards are
Boolean combinations of conditions on the local states of the other machines.
In this way, the firing of transitions in one machine can be made conditional on
the local state of other machines. If a machine has no enabled transition for a
particular input event, it simply does not perform any state change.
The state/event model is convenient for describing the control portion of
embedded reactive systems, including smaller systems as cellular phones, hi-fi
equipment, and cruise controls for cars, and large systems as train simulators,
flight control systems, telephone and communication protocols. The model is
used in the commercial tool visualSTATE tm [16]. This tool assists in developing
embedded reactive software by allowing the designer to construct a state/event
model and analyze it by either simulating it or by running a consistency checker.
The tool automatically generates the code for the hardware of the embedded
system. The consistency checker is in fact a verification tool that checks for a
range of properties that any state/event model should have. Some of the checks
must be passed for the generated code to be correct, for instance, it is crucial
that the model is deterministic. Other checks are issued as warnings that might
be design errors such as transitions that can never fire.
State/event models can be extremely large. And unlike in traditional model
checking, the number of checks is at least linear in the size of the model. This paper
reports results for models with up to 1421 concurrent state machines (10^476
states). For systems of this size, traditional symbolic model checking techniques
fail, even when using a partitioned transition relation [5] and backwards itera-
tion. We present a compositional technique that initially considers only a few
machines in determining satisfaction of the verification task and, if necessary,
gradually increases the number of considered machines. The machines considered
are determined using a dependency analysis of the structure of the system.
The results are encouraging. A number of large state/event models from
industrial applications have been verified. Even the largest model with 1421
concurrent machines can be verified with modest resources (it takes less than an
hour on a standard PC). Compared with the current version of visualSTATE tm ,
the results improve on the efficiency of checking the smaller instances and dramatically
increase the size of systems that can be verified.
Related Work
The use of ROBDDs [4] in model checking was introduced by Burch et al. [6]
and Coudert et al. [12]. Several improvements have been developed since, such
as using a partitioned transition relation [5, 13] and simplifying the ROBDD
representation during the fixed-point iteration [11]. Many of these improvements
are implemented in the tool SMV [17]. Other techniques like abstraction [9]
and compositional model checking [10] further reduce the complexity of the
verification task, but require human insight and interaction.
The experiments by Anderson et al. [3] and Sreemani and Atlee [18] verified
large software systems using SMV. The technique presented here significantly
improves on the results we have obtained using SMV and makes it possible
to verify larger systems. The compositional technique shares ideas with partial
model checking [1, 2, 15], but explicitly analyzes the structure of the model.
Outline
The state/event model is described in section 2. Section 3 explains how the range
of consistency checks performed by visualSTATE tm are reduced to two simple
types of checks. Section 4 shows how state/event systems are encoded by ROB-
DDs. The compositional technique and the dependency analysis are introduced
in section 5, and further developed in section 6. The technique is evaluated in
section 7, and section 8 draws some conclusions.
2 State/Event Systems
A state/event system consists of n machines M_1, . . . , M_n over an input (or event) alphabet E and an output alphabet O. Each machine M_i is a triple (S_i, s_i^0, T_i) consisting of a set of local states, an initial state, and a set of transitions. The set of transitions is a relation

    T_i ⊆ S_i × E × G_i × M(O) × S_i,

where M(O) is a multi-set of outputs, and G_i is the set of guards not containing references to machine i. These guards are generated from the following simple grammar for Boolean expressions:

    g ::= l_j = p | ¬g | g ∧ g | tt.

The atomic predicate l_j = p is read as "machine j is at local state p" and tt denotes a true guard. The global state set of the state/event system is the product of the local state sets: S = S_1 × · · · × S_n. The guards are interpreted straightforwardly over S: for any s ∈ S, s ⊨ l_j = p holds exactly when the j'th component of s is p. The notation g[s_j / l_j] denotes that s_j is substituted for l_j, i.e., occurrences of atomic propositions of the form l_j = p are replaced by tt or ¬tt depending on whether s_j is identical to p.
Considering a global state s, all guards in the transition relation can be evaluated. We define a version of the transition relation in which the guards have been evaluated. This relation is written s ⊢ s_i --e/o--> s'_i, expressing that machine i, when receiving event e in the global state s, makes a transition from s_i to s'_i and generates output o. Formally,

    s ⊢ s_i --e/o--> s'_i   iff   ∃g ∈ G_i . (s_i, e, g, o, s'_i) ∈ T_i and s ⊨ g.
Fig. 1. Two state/event machines and the corresponding parallel combination. The guards, which formally should be of the form l_j = p, are simply written as the state p, since the location l_j is derivable from the name of the state. The small arrows indicate the initial states. The reference to r is a requirement on a state in a third machine not shown.
Two machines can be combined into one. More generally, if M_I and M_J are compositions of two disjoint sets of machines I and J, I, J ⊆ {1, . . . , n}, we can combine them into one machine M_IJ = (S_IJ, s^0_IJ, T_IJ) with S_IJ = S_I × S_J and s^0_IJ = (s^0_I, s^0_J). The transition relation T_IJ is a subset of S_IJ × E × G_IJ × M(O) × S_IJ, where G_IJ are the guards in the composite machine. By construction of T_IJ, the guards G_IJ will not contain any references to machines in I ∪ J. To define T_IJ, we introduce the predicate idle:

    idle_I(s_I, e)  =  ∧ { ¬g | (s_I, e, g, o, s'_I) ∈ T_I },

which holds for states in which no transitions in M_I are enabled at state s_I when receiving event e. The transition relation T_IJ is defined by the following inference rules (the symbol ⊎ denotes multi-set union):

    (s_I, e, g_I, o_I, s'_I) ∈ T_I and (s_J, e, g_J, o_J, s'_J) ∈ T_J
        ⟹  ((s_I, s_J), e, g_I ∧ g_J, o_I ⊎ o_J, (s'_I, s'_J)) ∈ T_IJ

    (s_I, e, g, o, s'_I) ∈ T_I
        ⟹  ((s_I, s_J), e, g ∧ idle_J(s_J, e), o, (s'_I, s_J)) ∈ T_IJ

and symmetrically for a transition of M_J combined with an idling M_I. In particular, the full combination of all n machines yields a Mealy machine in which the transitions s --e/o--> s' are given as follows: s --e/o--> s' holds whenever o = o_1 ⊎ · · · ⊎ o_n and, for each i, either (s_i, e, g_i, o_i, s'_i) ∈ T_i and s ⊨ g_i, or o_i = ∅, s'_i = s_i and s ⊨ idle_i(s_i, e).
The state/event systems in figure 1 illustrate two machines and the parallel composition of them. By synchronizing on the same event e they are restricted to a cyclic iteration through the total state space, with the side effect that they generate the output when they are finished.
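For illustration only, the following Python sketch encodes this synchronous semantics with explicit data structures (the two machines are a guess at the spirit of Figure 1, not its exact contents): every machine reads the current global state, fires an enabled transition if one exists, and otherwise keeps its local state.

# A minimal sketch (not visualSTATE's implementation) of one synchronous step
# of a state/event system.  A machine is a dict of transitions indexed by
# (local_state, event); each transition carries a guard over the other
# machines' local states, a list of outputs, and a target local state.

def step(machines, state, event):
    """state is a tuple of local states; returns (next_state, outputs)."""
    next_state, outputs = list(state), []
    for i, trans in enumerate(machines):
        for (guard, output, target) in trans.get((state[i], event), []):
            if guard(state):              # guard may read other machines' states
                next_state[i] = target
                outputs.extend(output)
                break                     # assume determinism: at most one fires
        # if no transition is enabled, machine i simply keeps its local state
    return tuple(next_state), outputs

# Two machines cycling on event 'e', in the spirit of Figure 1.
M0 = {('p0', 'e'): [(lambda s: s[1] == 'q0', [], 'p1')],
      ('p1', 'e'): [(lambda s: s[1] == 'q1', ['o'], 'p0')]}
M1 = {('q0', 'e'): [(lambda s: s[0] == 'p1', [], 'q1')],
      ('q1', 'e'): [(lambda s: s[0] == 'p0', [], 'q0')]}

s = ('p0', 'q0')
for _ in range(4):
    s, out = step([M0, M1], s, 'e')
    print(s, out)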
3 Consistency Checks
The consistency checker in visualSTATE tm performs seven predefined types of
checks, each of which can be reduced to verifying one of two types of properties.
The first type is a reachability property. For instance, visualSTATE tm performs a
check for "conflicting transitions", i.e., it checks whether two or more transitions
can become enabled in the same local state, leading to non-determinism. This
can be reduced to questions of reachability by considering all pairs of guards g_1 and g_2 of transitions with the same local state s_i and input event e. A conflict can occur if a global state is reachable in which (l_i = s_i) ∧ g_1 ∧ g_2 is satisfied.
In total, five of the seven types of checks reduce to reachability checks. Four
of these, such as check for transitions that are never enabled and check for
states that are never reached, generate a number of reachability checks which is
linear in the number of transitions, t. In the worst-case the check for conflicting
transitions gives rise to a number of reachability checks which is quadratic in
the number of transitions. However, in practice very few transitions have the
same starting local state and input event, thus in practice the number of checks
generated is much smaller than t.
The remaining two types of consistency checks reduce to a check for absence
of local deadlocks. A local deadlock occurs if the system can reach a state in
which one of the machines idles forever on all input events. This check is made
for each of the n machines. In total at least t checks have to be performed
making the verification of state/event systems quite different from traditional
model checking where typically only a few key properties are verified.
We attempt to reduce the number of reachability checks by performing an
implicational analysis between the guards of the checks. If a guard g_1 implies
another guard g 2 then clearly, if g 1 is reachable so is g 2 . To use this information
we start by sorting all the guards in ascending order of the size of their satisfying
state space. In this way the most specific guards are checked first and for each
new guard to be checked we compare it to all the already checked (and reachable) guards. If the new guard includes one of them, then we know that it is reachable as well.
In the worst-case this is a quadratic number of tests. However, in our experiments
a reduction between 40% and 94% is obtained.
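A sketch of this reduction is given below; for simplicity it represents each guard by an explicit set of satisfying global states (in the tool they are ROBDD predicates), and the expensive reachability check is assumed to be supplied by the caller.

# Sketch of the implication analysis: sort guards by the size of their
# satisfying state sets (most specific first); a guard whose satisfying set
# contains that of an already-proven-reachable guard is reachable for free.

def filter_checks(guards, is_reachable):
    """guards: list of frozensets of satisfying states;
    is_reachable: the expensive reachability check (e.g., a fixed-point)."""
    proven = []                              # reachable guards seen so far
    results = {}
    for g in sorted(guards, key=len):        # ascending satisfying-set size
        if any(g >= h for h in proven):      # h implies g  =>  g is reachable
            results[g] = True
        else:
            results[g] = is_reachable(g)
            if results[g]:
                proven.append(g)
    return results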
4 ROBDD Representation
This section describes how Reduced Ordered Binary Decision Diagrams (ROB-
DDs) [4] are used to represent sets of states and the transition relation. We also
show how to perform a forward iteration to construct the set of reachable states
from which it is straightforward to check each of the reachability checks.
To construct the ROBDD T̃ for the transition relation T, we first construct the local transition relations T̃_i for each machine M_i. The variables of the ROBDD represent an encoding of the inputs, the current states, and the next-states. The variables are ordered as follows: the first ||E|| variables encode the input events E (||X|| denotes ⌈log_2 |X|⌉) and are denoted V_E. Then, for each machine i, follow the variables V_i and V'_i encoding the current-states (unprimed variables) and the next-states (primed variables) for machine i.
The transition relation T̃_i is constructed as an ROBDD predicate over these variables. We construct the ROBDD for a transition (s_i, e, g, o, s'_i) ∈ T_i by the conjunction of the ROBDD encodings of s_i, e, g, and s'_i. (The outputs are not encoded as they have no influence on the reachable states of the system.) The encoding of s_i, e, and s'_i is straightforward, and the encoding of the guard g is done by converting all atomic predicates l_j = p to ROBDD predicates over the current-state variables for machine M_j and then performing the Boolean operations in the guard. The encoding of all transitions of machine i is obtained from the disjunction of the encodings of the individual transitions:

    t̃_i  =  ∨_{(s_i, e, g, o, s'_i) ∈ T_i} ( ẽ ∧ s̃_i ∧ g̃ ∧ s̃'_i ),

where ẽ is the ROBDD encoding of input e, and s̃_i and s̃'_i are the ROBDD encodings of the current-state s_i and next-state s'_i.
To properly encode the global transition relation T, we need to deal with situations where no transitions of T_i are enabled. In those cases we want the machine i to stay in its current state. We construct an ROBDD neg_i representing that no transition is enabled, by negating all guards in machine i (including the encoding of the events):

    neg_i  =  ∧_{(s_i, e, g, o, s'_i) ∈ T_i} ¬( ẽ ∧ s̃_i ∧ g̃ ).

The ROBDD equ_i encodes that machine i does not change state, by requiring that the next-state is identical to the current-state:

    equ_i  =  ∧_{v ∈ V_i} ( v ↔ v' ).

The local transition relation for machine i is then:

    T̃_i  =  t̃_i ∨ ( neg_i ∧ equ_i ).

The ROBDD T̃ for the full transition relation is the conjunction of the local transition relations:

    T̃  =  ∧_{i=1}^{n} T̃_i.
One way to check whether a state s is reachable is to construct the reachable state space R. The construction of R can be done by a standard forward iteration of the transition relation, starting with the initial state s^0:

    R_0 = s̃^0,        R_{k+1} = R_k ∨ ( ∃V_E . ∃V . R_k ∧ T̃ )[V / V'],

where V is the set of current-state variables, V' is the set of next-state variables, and (· · ·)[V / V'] denotes the result of replacing all the primed variables in V' by their unprimed versions.
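For readers less familiar with symbolic iteration, the same fixed point can be written with explicit state sets standing in for the ROBDDs (a sketch only; the symbolic version above avoids enumerating states):

# Explicit-state version of the forward iteration: R0 = {s0},
# R_{k+1} = R_k union post(R_k), until a fixed point is reached.
def reachable(s0, post):
    """post(s) yields the successor states of s over all input events."""
    R, frontier = {s0}, [s0]
    while frontier:
        new = {t for s in frontier for t in post(s)} - R
        R |= new
        frontier = list(new)
    return R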
The construction of the full transition relation T̃ can be avoided by using a partitioned transition relation [5]. The local transition relations T̃_i are used directly, without ever building the full conjunction T̃. Both approaches have been implemented and tested on our examples as shown in section 7. Here we see that the calculation of the reachable state space using the full transition relation is both fast and efficient for the small examples. However, for models with more than a few tens of machines, both approaches fail to complete.
5 Compositional Backwards Reachability
The problems of forwards iteration can typically be solved by using a backwards
reachability analysis. The verification task is to determine whether a guard g can
be satisfied. Instead of computing the reachable state space and check that g is
valid somewhere in this set, we start with the set of states in which g is valid and
compute in several backwards iterations, states that can reach a state in which
g is satisfied. The goal is to determine whether the initial state is among these
states. Our novel idea is to perform the backwards iteration in a compositional
manner considering only a minimal number of machines. Initially, only machines
mentioned in g will be taken into account. Later also machines on which these
depend will be included.
Notice that compared to the forwards iteration, this approach has an apparent
drawback when performing a large number of reachability checks: instead
of just one fixed-point iteration to construct the reachable state space R (and
then trivially verify the properties), a new fixed-point iteration is necessary for
each property that is checked. However, our experiments clearly demonstrate
that when using a compositional backwards iteration, each of the fixed-point
iterations can be completed even for very large models whereas the forwards
iteration fails to complete the construction of R for even medium sized models.
To formalize the backwards compositional technique, we need a semantic version of the concept of dependency. For a subset I ⊆ {1, . . . , n} of the machines, two states s, s' ∈ S are I-equivalent, written s =_I s', if s_i = s'_i for all i ∈ I. If a subset P of the reachable states S is only constrained by components in some index set I, we can think of P as having I as a sort. This leads to the following definition: a subset P of S is I-sorted if for all s, s' ∈ S with s =_I s', we have s ∈ P if and only if s' ∈ P.
Fig. 2. The left figure is an example showing the effect of B_I(g). If X is a guard referring to machine M_j and Y a guard referring to machine M_k, then the transitions from s_I seem to depend on machines M_j and M_k outside I. However, the guards X, ¬X ∧ Y, and ¬Y together span all possibilities, and therefore by selecting either e_1, e_2, or e_3 the state s_I can reach g irrespective of the states of the machines M_j and M_k. The right figure illustrates the dependencies between 9 state machines taken from a real example (the example "hi-fi" of section 7). An arrow from one machine M_i to another M_j indicates the existence of a transition in M_i with a guard that depends on a state in machine M_j.
As an example, consider a guard g which mentions only machines 1 and 3. The set of states defined by g is I-sorted for any I containing 1 and 3. 1 Another understanding of the definition is that a set P is I-sorted if it is independent of the machines in the complement Ī = {1, . . . , n} \ I.

From an I-sorted set g we will perform a backwards reachability computation by including states which, irrespective of the states of the machines in Ī, can reach g. One backward step is given by the function B_I(g) defined by:

    B_I(g)  =  { s ∈ S | ∀s' ∈ S . s' =_I s  ⟹  ∃e, o, s'' . s' --e/o--> s'' and s'' ∈ g }        (1)

By definition B_I(g) is I-sorted. The set B_I(g) is the set of states which, independently of the machines in Ī, by some input e can reach a state in g. Observe that B_I(g) is monotonic in both g and I. Figure 2 shows how a state s_I of a machine is included in B_I(g) although it syntactically seems to depend on machines outside I.
By iterating the application of B_I, we can compute the minimum set of states containing g and closed under application of B_I. This is the minimum fixed-point of X ↦ g ∪ B_I(X), which we refer to as B*_I(g). Note that B*_{{1,...,n}}(g) becomes the desired set of states which may reach g.
A set of indices I is said to be dependency closed if none of the machines in I depend on machines outside I. Formally, I is dependency closed if for all i ∈ I, global states s, s', inputs e, and outputs o, whenever s =_I s', then s ⊢ s_i --e/o--> s''_i exactly when s' ⊢ s_i --e/o--> s''_i.

The basic properties of the sets B*_I(g) are captured by the following lemma:
1 If the guard is self-contradictory (always false), it will be I-sorted for any I. This reflects
the fact that the semantic sortedness is more precise than syntactic occurrence.
Lemma 1 (Compositional Reachability Lemma). Assume g is an I-sorted subset of S. For all subsets of machines I, J with I ⊆ J the following holds:

    1.  B*_I(g) ⊆ B*_J(g)
    2.  B*_J(g) = B*_J(B*_I(g))
    3.  I dependency closed  ⟹  B*_I(g) = B*_{{1,...,n}}(g)
The results of the lemma are applied in the following manner. To check whether a guard g is reachable, we first consider the set of machines I_1 syntactically mentioned in g. Clearly, g is I_1-sorted. We then compute B*_{I_1}(g). If the initial state s^0 belongs to B*_{I_1}(g), then by (1) s^0 ∈ B*_{{1,...,n}}(g), and therefore g is reachable from s^0 and we are done. If not, we extend I_1 to a larger set of machines I_2 by adding machines that are syntactically mentioned in guards of transitions of machines in I_1. We then compute B*_{I_2}(g) as B*_{I_2}(B*_{I_1}(g)), which is correct by (2). We continue like this until s^0 has been found in one of the sets or an index set I_k is dependency closed. In the latter case we have by (3) that B*_{I_k}(g) = B*_{{1,...,n}}(g), and g is unreachable unless s^0 ∈ B*_{I_k}(g).
As an example, assume that we want to determine whether a guard g mentioning only machines 1 and 3 is reachable in the example of figure 2 (right). The initial index set is I_1 = {1, 3}. If this is not enough to show g reachable, the second index set I_2, obtained by adding the machines mentioned in the guards of machines 1 and 3, is used. Since this set is dependency closed, g is reachable if and only if the initial state belongs to B*_{I_2}(B*_{I_1}(g)).
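The overall strategy can be summarised by the following sketch; backward_step plays the role of B_I, and the helpers for collecting syntactic dependencies (machines_of, deps) are assumed to be provided elsewhere.

# Sketch of the compositional check "is guard g reachable from s0?".
# machines_of(g) gives the machines mentioned in g; deps(I) gives the machines
# referenced by guards of transitions of machines in I; backward_step(I, X)
# computes B_I(X).  g is treated here as an explicit set of states.

def reachable_compositionally(g, s0, machines_of, deps, backward_step):
    I = set(machines_of(g))
    B = set(g)                               # current approximation of B*_I(g)
    while True:
        old = None
        while old != B:                      # fixed point: B becomes B*_I(B)
            old = set(B)
            B |= backward_step(I, B)
        if s0 in B:
            return True                      # by Lemma 1(1), g is reachable
        J = I | deps(I)
        if J == I:                           # I is dependency closed
            return False                     # by Lemma 1(3), g is unreachable
        I = J                                # continue from B, correct by Lemma 1(2)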
The above construction is based on a backwards iteration. A dual version of B_I for a forwards iteration could be defined. However, such a definition would not make use of the dependency information, since s^0 is only I-sorted for I = {1, . . . , n}. Therefore all machines would be considered in the first fixed-point iteration, reducing it to the complete forwards iteration mentioned in the previous section.
Seemingly, the definition of B_I(g) requires knowledge of the global transition relation, and therefore does not seem to yield any computational advantage. However, as explained below, using ROBDDs this may be avoided, leading to an efficient computation of B_I(g). The ROBDD B̃_I(g̃) representing one iteration backwards from the states represented by the ROBDD g̃ can be constructed immediately from the definition (1):

    B̃_I(g̃)  =  ∀V_Ī . ∃V_E . ∃V' . ( T̃ ∧ g̃[V'/V] ),        (2)

where g̃[V'/V] is equal to g̃ with all variables in V replaced by their primed versions in V'. It is essential to avoid building the global transition relation T̃. This is done by writing ∃V' as ∃V'_I . ∃V'_Ī and T̃ as T̃_I ∧ T̃_Ī, where T̃_I = ∧_{i ∈ I} T̃_i and T̃_Ī = ∧_{i ∈ Ī} T̃_i. This allows us to push the existential quantification of V'_Ī to T̃_Ī, since g is I-sorted and thus independent of the variables in V'_Ī. As ∃V'_Ī . T̃_Ī is a tautology, equation (2) reduces to:

    B̃_I(g̃)  =  ∀V_Ī . ∃V_E . ∃V'_I . ( T̃_I ∧ g̃[V'/V] ),        (3)

which only uses the local transition relations for machines in I. Each T̃_i refers only to primed variables in V'_i, allowing early variable quantification for each machine individually:

    B̃_I(g̃)  =  ∀V_Ī . ∃V_E . ∃V'_{i_1} . ( T̃_{i_1} ∧ · · · ∃V'_{i_k} . ( T̃_{i_k} ∧ g̃[V'/V] ) · · · )   for I = {i_1, . . . , i_k}.

This equation efficiently computes one step in the fixed-point iteration constructing B̃*_I(g̃).

Notice that the existential quantifications can be performed in any order. We have chosen the order in which the machines occur in the input, but other orders might exist which improve performance.
6 Local Deadlock Detection
In checking for local deadlocks we use a construction similar to backwards reach-
ability. To make the compositional backwards lemma applicable we work with
the notion of a machine being live which is the exact dual of having a local
deadlock. In words, a machine is live if it always is the case that there exists a
way to make the machine move to a new local state. Formally, a global state s is live for machine i if there exists a sequence of states s = s^1, s^2, . . . , s^k with s^j --e/o--> s^{j+1} (for some e and o) such that s^k_i ≠ s_i. Machine i is live if all reachable states are live. A simple example of a state/event system with a local
deadlock is shown in figure 3.
Fig. 3. A state/event system with a local deadlock. The global state s reached by initially receiving e_1 is not live for the machine to the right, since for all input events the guard p_1 remains false. The state s is thus a reachable state that is not live, and the machine to the right has a local deadlock.
The check is divided into two parts. First, the set of all states that are live for machine i is computed. Second, we check that all reachable states are in this set. A straightforward but inefficient approach would be to compute the two sets and check for inclusion. However, we will take advantage of the compositional construction used in the backwards reachability in both parts of the check. Similar to the definition of B_I(g), we define L_{I,i}(X) to be the set of states that are immediately live for machine i ∈ I independently of the machines outside I, or that lead to states in X (which are states assumed to be live for machine i):

    L_{I,i}(X)  =  { s ∈ S | ∀s' ∈ S . s' =_I s  ⟹  ∃e, o, s'' . s' --e/o--> s'' and ( s_i ≠ s''_i or s'' ∈ X ) }.

Notice that, compared to definition (1), the only difference is the extra possibility that the state is immediately live, i.e., s_i ≠ s''_i. The set of states that are live for machine i independently of machines outside I is then the set L*_{I,i}(∅), where L*_{I,i}(X) is the minimum fixed-point of Y ↦ X ∪ L_{I,i}(Y). The three properties of lemma 1 also hold for L*_{I,i}. If I is dependency closed, it follows from property (3) of the lemma that L*_{I,i}(∅) equals L*_{{1,...,n},i}(∅), which is precisely the set of live states of machine i. This gives an efficient way to compute the sets L*_{I,i}(∅) for different choices of I.
We start with I_1 equal to {i} and continue with larger I_k's exactly as for the backwards reachability. The only difference is the termination conditions. One possible termination case is if L*_{I_k,i}(∅) becomes equal to S for some k. In that case it is trivial that the set of reachable states is contained in L*_{I_k,i}(∅). From the monotonicity property (1) of the lemma it follows that machine i is live and thus free of local deadlocks. The other termination case is when I_k becomes dependency closed. Then we have to check whether there exist reachable states not in L*_{I_k,i}(∅). This is done by a compositional backwards reachability check with g = S \ L*_{I_k,i}(∅).
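Schematically, the deadlock check for machine i combines the liveness fixed point with the two termination conditions just described; in this sketch live_step plays the role of L_{I,i} and the other helpers are as in the reachability sketch above.

def machine_is_live(i, s0, S, deps, live_step, backwards_reachable):
    I, L = {i}, set()
    while True:
        old = None
        while old != L:                 # L becomes L*_{I,i}(empty set)
            old = set(L)
            L |= live_step(I, i, L)
        if L == S:                      # every state is live: no local deadlock
            return True
        J = I | deps(I)
        if J == I:                      # I dependency closed: L is exact;
            return not backwards_reachable(S - L, s0)   # deadlock iff S\L reachable
        I = J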
7 Experimental Results
The technique presented above has been used on a range of real industrial
state/event systems and a set of systems constructed by students in a course on
embedded systems. The examples are all constructed using visualSTATE tm .
They cover a large range of different applications and are structurally highly
irregular.
The examples hi-fi, surround, flow, motor, intervm, dkvm, multicd,
ice1 and ice2 are all industrial examples. hi-fi is the control part of an advanced
compact hi-fi system, surround is a surround sound control unit for a video
player, flow is the control part of a flow meter, motor is a motor control,
intervm and dkvm are advanced vending machines, multicd is the control of a
multi-volume CD player, and ice1 and ice2 are both independent subsystems of
a train simulator. The remaining examples are constructed by students. The vcr
is a simulation of a video recorder, cyber is an alarm clock, jvc is the control
of a compact hi-fi system, video is a video player, and volvo is a simulation of
the complete functionality of the dashboard of a car. The characteristics of the
state/event systems are shown in table 1.
The experiments were carried out on a 166 MHz Pentium PC with
RAM running Linux. To implement the ROBDD operations, we have constructed
our own ROBDD package which is comparable to state-of-the-art packages in
terms of performance. In all experiments we limit the total number of ROBDD
nodes to one million corresponding to 20 MB of memory. We check for each
transition whether the guard is reachable and whether it is conflicting with
other transitions. Furthermore, we check for each machine whether it has a local
deadlock. The total runtime and memory consumption for these checks are shown
in table 2. The total number of checks is far from the quadratic worst-case, which
Table 1. The state/event systems used in the experiments. The last two columns show the size of the declared and reachable state space. The declared state space is the product of the number of local states of each machine. The reachable state space is only reported for those systems where the forwards analysis completes.

System     Machines   Local states   Transitions   Declared   Reachable
vcr        7
cyber      8          19
dkvm       9
flow
motor      12
surround   12
multicd    28
Table 2. The runtime and memory consumption of the experiments. The second column of the table shows the total number of guards that are checked for reachability after this number has been reduced by the implicational analysis. The forward columns show results using a forward iteration with a full and a partitioned transition relation. The backward columns show the results of a backwards iteration using the full transition relation and the compositional backwards reachability. The visualSTATE column shows the runtimes obtained using an explicit state enumeration as implemented in version 3.0 of visualSTATE tm . A "-" denotes that we ran out of memory or the runtime exceeded two hours without finishing.

                       Forward                    Backward
System     Guards    Full       Partitioned     Full       Compositional   visualSTATE
           checked   Sec   MB   Sec     MB      Sec   MB   Sec    MB       Sec
cyber
dkvm
flow
motor
surround   173       8.9   6    269.0   14      9.7   6    13.0   6        3780
volvo
multicd    199       4.9
Fig. 4. The fraction of machines actually used in the compositional backwards reachability
analysis of the guards of the largest system ice2. For each size of dependency
closed set, a line is drawn between the minimum and maximum fraction of
machines used in verifying guards with dependency closed sets of that size. For in-
stance, for the guards with dependency closed sets with 234 machines (the right-most
line) only between 1% and 32% of the machines are needed to prove that the guard is
reachable.
supports the claim that in practice only very few checks are needed to check for
conflicting rules (see section 3).
As expected the forwards iteration with full transition relation is efficient
for smaller systems. It is remarkable that the ROBDD technique is superior to
explicit state enumeration even for systems with a very small number of reachable
states. Using the partitioned transition relation in the forwards iteration
works poorly. This could be due to the fact that we do not use early variable
quantification since this is not as straightforward as for the backwards iteration.
For the two largest systems only the compositional backwards technique suc-
ceeds. In fact, for the four largest systems it is the most efficient and for the
small examples it has performance comparable to the full forward technique.
This is despite the fact that the number of checks is high and the backward iterations
must be repeated for each check. From the experiments it seems that the
compositional backwards technique is better than full forwards from somewhere
around 20 machines.
In order to understand why the compositional backwards technique is successful
we have analyzed the largest system ice2 in more detail, see figure 4.
For each guard we have computed the size of its smallest enclosing dependency
closed set of machines. During the backwards iterations we have kept track of how many times the set of machines I (used in B*_I(g)) needed to be enlarged and how many machines were contained in the set I when the iteration terminated.
The dependency closed sets of cardinality 63, 66, 85, 86, 125, 127 all contain at
least one machine with a guard that is unreachable. As is clearly seen from the
figure, in these cases the iteration has to include the entire dependency closed
set in order to prove that the initial state cannot reach the guard. In fact, only
in the case of unreachable guards is more than 32% of the machines in a dependency
closed set ever needed (ignoring the small dependency closed sets with
less than 12 machines). A reduction to 32% of the machines amounts to a reduction in runtime of much more than a third, due to the potential exponential growth of the ROBDD representation in the number of transition relations T̃_i.
8 Conclusion
We have presented a verification problem for state/event systems which is characterized
by a large number of reachability checks. A new compositional technique
has been presented which significantly improves on the performance of symbolic
model checking for state/event systems. This has been demonstrated on a variety
of industrial systems several of which could not be verified using traditional
symbolic model checking, e.g., using SMV.
We expect that these results can be translated to other models of embedded
control systems than the state/event model, as for example StateCharts [14].
Furthermore, the two types of checks are of a general nature. Clearly, reachability
is a key property which captures a number of important properties of a system.
Moreover, the check for local deadlock shows how properties requiring nesting of
fixed points can be checked efficiently with the compositional backwards analysis.
Thus, it seems straightforward to implement more general checks as expressed
in, for instance, CTL [8].
--R
Partial model checking with ROBDDs.
Partial model checking (extended abstract).
Model checking large software specifications.
Symbolic model checking with partitioned transition relations.
Symbolic model checking: 10^20 states and beyond.
Symbolic model checking for sequential circuit verification.
Automatic verification of finite-state concurrent systems using temporal logic specifications
Model checking and abstraction.
Compositional model checking.
Verification of synchronous sequential machines based on symbolic execution.
Verifying temporal properties of sequential machines without building their state diagrams.
Efficient model checking by automated ordering of transition relation partitions.
STATECHARTS: A visual formalism for complex systems.
A compositional proof of a real-time mutual exclusion protocol.
Beologic r A/S.
Symbolic Model Checking.
Feasibility of model checking software requirements: a case study.
--TR
Graph-based algorithms for Boolean function manipulation
Statecharts: A visual formalism for complex systems
Compositional model checking
Verification of synchronous sequential machines based on symbolic execution
Model checking and abstraction
Requirements Specification for Process-Control Systems
Model checking large software specifications
Tearing based automatic abstraction for CTL model checking
Symbolic Model Checking
Partial Model Checking with ROBDDs
A Compositional Proof of a Real-Time Mutual Exclusion Protocol
Stepwise CTL Model Checking of State/Event Systems
Verifying Temporal Properties of Sequential Machines Without Building their State Diagrams
An Iterative Approach to Language Containment
Efficient Model Checking by Automated Ordering of Transition Relation Partitions
Automatic Abstraction Techniques for Propositional µ-calculus Model Checking
Verification of Hierarchical State/Event Systems Using Reusability and Compositionality
Partial Model Checking | embedded software;symbolic model checking;formal verification;backwards reachability |
371431 | Parametric Design Synthesis of Distributed Embedded Systems. | AbstractThis paper presents a design synthesis method for distributed embedded systems. In such systems, computations can flow through long pipelines of interacting software components, hosted on a variety of resources, each of which is managed by a local scheduler. Our method automatically calibrates the local resource schedulers to achieve the system's global end-to-end performance requirements. A system is modeled as a set of distributed task chains (or pipelines), where each task represents an activity requiring nonzero load from some CPU or network resource. Task load requirements can vary stochastically due to second-order effects like cache memory behavior, DMA interference, pipeline stalls, bus arbitration delays, transient head-of-line blocking, etc. We aggregate these effectsalong with a task's per-service load demandand model them via a single random variable, ranging over an arbitrary discrete probability distribution. Load models can be obtained via profiling tasks in isolation or simply by using an engineer's hypothesis about the system's projected behavior. The end-to-end performance requirements are posited in terms of throughput and delay constraints. Specifically, a pipeline's delay constraint is an upper bound on the total latency a computatation can accumulate, from input to output. The corresponding throughput constraint mandates the pipeline's minimum acceptable output ratecounting only outputs which meet their delay constraints. Since per-component loads can be generally distributed, and since resources host stages from multiple pipelines, meeting all of the system's end-to-end constraints is a nontrivial problem. Our approach involves solving two subproblems in tandem: 1) finding an optimal proportion of load to allocate to each task and channel and 2) deriving the best combination of service intervals over which all load proportions can be guaranteed. The design algorithms use analytic approximations to quickly estimate output rates and propagation delays for candidate solutions. When all parameters are synthesized, the estimated end-to-end performance metrics are rechecked by simulation. The per-component load reservations can then be increased, with the synthesis algorithms rerun to improve performance. At that point, the system can be configured according to the synthesized scheduling parametersand then revalidated via on-line profiling. In this paper, we demonstrate our technique on an example system, and compare the estimated performance to its simulated on-line behavior. | Introduction
An embedded system's intrinsic real-time constraints are imposed on its external inputs and out-
puts, from the perspective of its environment. At the same time, the computation paths between
these end-points may flow through a large set of interacting components, hosted on a variety of
resources - and managed by local scheduling and queuing policies. A crucial step in the design
process involves calibrating and tuning the local resource management policies, so that the original
real-time objectives are achieved.
Real-time scheduling analysis is often used to help make this problem more tractable. Using the
approach, upper bounds are derived for processing times and communication delays. Then, using
these worst-case assumptions, tasks and messages can be deterministically scheduled to guarantee
that all timing constraints will get met. Such constraints might include an individual thread's
processing frequency, a packet's deadline, or perhaps the rate at which a network driver is run. In
this type of system, hard real-time analysis can be used to help predict (and then ensure) that the
performance objectives will be attained.
This approach is becoming increasingly difficult to carry out. First, achieving near-tight
execution-time bounds is virtually impossible due to architectural features like superscalar pipelin-
ing, hierarchies of cache memory, etc. - not to mention the nondeterminism inherent in almost any
network. Given this, as well as the fact that a program's actual execution time is data-dependent, a
worst-case timing estimate may be several orders of magnitude greater than the average case. If one
incorporates worst-case costs in a design, the result will often lead to an extremely under-utilized
system.
Moreover, parameters like processing periods and deadlines are used to help achieve acceptable
end-to-end performance - i.e., as a means to an end, and not an end in itself. In reality, missing
a deadline will rarely lead to failure; in fact, such an occurrence should be expected, unless the
system is radically over-engineered. While hard real-time scheduling theory provides a sufficient
way to build an embedded system, it is not strictly necessary, and it may not yield the most efficient
design.
In this paper we explore an alternative approach, by using statistical guarantees to generate
cost-effective system designs. We model a real-time system as a set of task chains, with each
task representing some activity requiring a specific CPU or network link. For example, a chain
may correspond to the data path from a camera to a display in a video conferencing system, or
reflect a servo-loop in a distributed real-time control system. The chain's real-time performance
requirements are specified in terms of (i) a maximum acceptable propagation delay from input to
output, and (ii) a minimum acceptable average throughput. In designing the system, we treat the
first requirement as a hard constraint, that is, any end-to-end computation that takes longer than
the maximum delay is treated a failure, and does not contribute to the overall throughput. (Some
applications may be able to use late outputs - yet within the system model we currently do not
count them.) In contrast, the second requirement is viewed in a statistical sense, and we design
a system to exceed, on average, its minimal acceptable rate. We assume that a task's cost (i.e.,
execution time for a program, or delay for a network link) can be specified with any arbitrary
discrete probability distribution function.
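To make the throughput/delay interplay concrete, the following sketch (our own illustration, not the paper's analysis method) convolves independent per-hop latency PDFs along a pipeline and counts only those outputs that meet the end-to-end delay bound; the hop distributions and rates used here are hypothetical.

from itertools import product

def convolve(pdf_a, pdf_b):
    """Combine two independent discrete latency PDFs {latency: probability}."""
    out = {}
    for (a, pa), (b, pb) in product(pdf_a.items(), pdf_b.items()):
        out[a + b] = out.get(a + b, 0.0) + pa * pb
    return out

def effective_rate(hop_pdfs, delay_bound, input_rate):
    """Outputs that exceed the end-to-end delay bound are treated as failures."""
    total = {0: 1.0}
    for pdf in hop_pdfs:
        total = convolve(total, pdf)
    p_on_time = sum(p for d, p in total.items() if d <= delay_bound)
    return input_rate * p_on_time

# Hypothetical 3-hop pipeline, per-hop latencies in ms.
hops = [{2: 0.7, 5: 0.3}, {1: 0.9, 4: 0.1}, {3: 0.8, 6: 0.2}]
print(effective_rate(hops, delay_bound=10, input_rate=30.0))  # outputs/sec delivered on time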
Problem and Solution Strategy. Given a set of task chains, each with its own real-time
performance requirements, our objective is to design a system which will meet the requirements
for all of the chains. Our system model includes the following assumptions, which help make the
solution tractable:
ffl We model a task's load requirements stochastically, in terms of a discrete probability distribution
function (or PDF), whose random variable characterizes the resource time needed for
one execution instance of the task. Successive task instances are modeled to be independent
of each other.
ffl We assume a static partitioning of the system resources; in other words, a task has already
been allocated to a specific resource.
It is true that embedded systems design often involves making some task-placement decisions.
However, we note that tuning the resource schedulers is, by definition, subservient to the allocation
phase - which often involves accounting for device-specific localities (e.g., IO ports, DMA channels,
etc.), as well as system-level issues (e.g., services provided by each node). While this paper's focus
is narrowed to the scheduling synthesis problem at hand, we note that a "holistic" design tool could
integrate the two problems, and use our system-tuning algorithms as "subroutines."
Also, while our objective is to achieve an overall, statistical level of real-time performance,
we can still use the tools provided by hard real-time scheduling to help solve this problem. Our
method involves the following steps: (1) assigning to each task a fixed proportion of its resource's
load; and (2) determining the reasonable service interval (or frame) over which the proportion can
be guaranteed. Then, using some techniques provided by real-time CPU and network scheduling,
we can guarantee that during any such frame, a task will get at least its designated share of
the resource's time. When that share fails to be sufficient for a currently running task to finish
executing, it runs over into the next frame - and gets that frame's share, etc. Given this model,
the design problem may be viewed as the following interrelated sub-problems:
(1) How should the CPU and network load be partitioned among the tasks, so that every chain's
performance requirements are met?
(2) Given a load-assignment, how should the frame-sizes be set to maximize the effective output
rate?
As we discuss in the sequel, load proportions cannot be quantized over infinitesimal time-frames.
Hence, when a task's frame gets progressively smaller, it starts paying a large price for its guarantees
- in the form of wasted overhead.
In this paper we present algorithms to solve these problems. The algorithm for problem (1) uses
a heuristic to compare the relative needs of tasks from different chains competing for the same
resources. The algorithm for problem (2) makes use of connecting Markov chains, to estimate the
effective throughput of a given chain. Since the analysis is approximate, we validate the generated
solution using a simulation model.
Related Work
Like much of the work in real-time systems, our results extend from preemptive, uniprocessor
scheduling analysis. There are many old and new solutions to this problem (e.g., [1, 15, 20, 21]);
moreover, many of these methods come equipped with offline, analysis tests, which determine a
priori whether the underlying system is schedulable. Some of these tests are load-oriented sufficiency
conditions - they predict that the tasks will always meet their deadlines, provided that the system
utilization does not exceed a certain pre-defined threshold.
The classical model has been generalized to a large degree, and there now exist analogous
results for distributed systems, network protocols, etc. For example, the model has been applied
to distributed hard real-time systems in the following straightforward manner (e.g., see [26, 31]):
each network connection is abstracted as a real-time task (sharing a network resource), and the
scheduling analysis incorporates worst-case blocking times potentially suffered when high-priority
packets have to wait for transmission of lower-priority packets. Then, to some extent, the resulting
global scheduling problem can be solved as a set of interrelated local resource-scheduling problems.
In [30], the classical model was extended to consider probabilistic execution times on uniprocessor
systems. This is done by giving a nominal "hard" amount of execution time to each task
instance, under the assumption that the task will usually complete within this time. But if the
nominal time is exceeded, the excess requirement is treated like a sporadic arrival (via a method
similar to that used in [19]).
In our previous work [8, 9] we relaxed the precondition that period and deadline parameters
are always known ahead of time. Rather, we used the system's end-to-end delay and jitter requirements
to automatically derive each task's constraints; these, in turn, ensure that the end-to-end
requirements will be met on a uniprocessor system. A similar approach for uniprocessor systems
was explored in [2], where execution time budgets were automatically derived from the end-to-end
delay requirements; the method used an imprecise computation technique as a metric to help gauge
the "goodness" of candidate solutions.
These concepts were later modified for use in various application contexts. Recent results
adapted the end-to-end theory to both discrete and continuous control problems (e.g. [18, 27],
where real-time constraints were derived from a set of control laws, and where the objectives were
to optimize the system's performance index while satisfying schedulability. Our original approach
(from [8, 9]) was also used to produce schedules for real-time traffic over fieldbus networks [6, 7],
where the switch priorities are synthesized to ensure end-to-end rate and latency guarantees. A
related idea was pursued for radar processing domains [11], where an optimization method produces
per-component processing rates and deadlines, based on the system's input pulse rate and its
prescribed allowed latency.
End-to-end design becomes significantly more difficult in distributed contexts. Solving this
problem usually involves finding an answer to the following question: "Given an end-to-end latency
budget, what is the optimal way to spend this budget on each pipeline hop?" Aside from the complexity
of the basic decision problem, a solution also involves the practical issue of getting the local runtime
schedulers to guarantee their piece-wise latencies. Results presented in [25] address this problem
in a deterministic context: they extend our original uniprocessor method from [8] to distributed
systems, by statically partitioning the end-to-end delays via heuristic optimization metrics [25].
Similar approaches have been proposed for "soft" transactions in distributed systems [17], where
each transaction's deadline is partitioned between the system's resources.
To our knowledge, this paper presents the first technique that achieves statistical real-time
performance in a distributed system, by using end-to-end requirements to assign both periods and
the execution time budgets. In this light, our method should be viewed less as a scheduling tool
(it is not one), and more as an approach to the problem of real-time systems design.
To accomplish this goal, we assume an underlying runtime system that is capable of the follow-
ing: (1) decreasing a task's completion time by increasing its resource share; (2) enforcing, for each
resource, the proportional shares allocated for every task, up to some minimum quantization; and
(3) within these constraints, isolating a task's real-time behavior from the other activities sharing
its resource. In this regard, we build on many results that have been developed for providing
OS-level reservation guarantees, and for rate-based, proportional-share queuing in networks. Since
these concepts are integral to understanding the work in this paper, we treat them at some length.
In rate-based methods, tasks get allocated percentages of the available bandwidth. Obviously these percentages cannot be maintained over infinitesimal time-intervals; rather, the proportional shares are serviced in an approximate sense - i.e., within some margin of error. The magnitude of the error is usually due to the following factors: (1) quantization (i.e., the degree to which the underlying system can multiplex traffic); and (2) priority-selection (i.e., the order in which tasks are selected for service). At higher levels of quantization, and as multiple streams share the same FIFO queues, service orders depart further from true proportional-sharing.
Our analytical results rely on perhaps the oldest known variant of rate-based scheduling - time-division
multiplexing, or TDM. In our TDM abstraction, a task is guaranteed a fixed number of
"time-slots" over pre-defined periodic intervals (which we call frames). Our analytical techniques
assume tasks have their time-slots reserved; i.e., if a task does not claim its load, the load gets
wasted. We appeal to TDM for a basic reason: we need to handle an inherently stochastic workload
model, in which tasks internally "decide" how much load they will need for specific instances.
These load demands can be quite high for arbitrary instances; they may be minuscule for other
instances. Moreover, once a task is started, we assume that its semantics mandate that it also
needs to be finished. So, since unregulated workloads cannot simply be "re-shaped," and since
end-to-end latency guarantees still must be guaranteed, TDM ensures a reasonable level of fairness
between different tasks on a resource - and between successive instances of the same task.
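As a concrete illustration of the TDM abstraction just described, the following sketch (in Python, with class and method names of our own choosing) reserves a fixed number of time-slots per frame for each task and serves at most the reserved amount each frame, so reserved-but-unclaimed slots are simply wasted:

    # Minimal sketch of strict TDM: each task holds a fixed per-frame reservation;
    # unclaimed slots are wasted rather than redistributed to other tasks.
    class TdmResource:
        def __init__(self, slots_per_frame):
            self.slots_per_frame = slots_per_frame
            self.reservations = {}                  # task id -> reserved slots per frame

        def reserve(self, task, slots):
            used = sum(self.reservations.values())
            if used + slots > self.slots_per_frame:
                raise ValueError("frame over-committed")
            self.reservations[task] = slots

        def schedule_frame(self, demand):
            # Serve each task for min(demand, reservation) slots in this frame.
            return {task: min(demand.get(task, 0), reserved)
                    for task, reserved in self.reservations.items()}

    r = TdmResource(slots_per_frame=10)
    r.reserve("t1", 3)
    r.reserve("t2", 4)
    print(r.schedule_frame({"t1": 5, "t2": 2}))     # -> {'t1': 3, 't2': 2}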
The downside of this scheme is that TDM ends up wasting unused load. Other rate-based disciplines
solve this problem by re-distributing service over longer intervals - at a cost of occasionally
postponing the projected completion times of certain tasks. Most of these disciplines, however,
were conceived for inherently regulated workload models, e.g., linear bounded arrival processes [3].
In such settings, transient unfairness is often smoothed out by simply "re-shaping" the departure
process - i.e., by inserting delay stages.
Many algorithms have been developed to provide proportional-share service in high-speed net-
works, including the "Virtual Clock Method" [38], "Fair-Share Queuing" [4], "Generalized Processor
Sharing" (or GPS) [23], and "Rate Controlled Static Priority Queuing" (or RCSP) [36]. These models
have also been used to derive statistical delay guarantees; in particular, within the framework of
RCSP (in [37]) and GPS (in [39]). Related results can be found in [5] (using a policy like "Virtual
Clock" [38]), and in [34] (for FCFS, with a variety of traffic distributions). In [14], statistical service
quality objectives are achieved via proportional-share queuing, in conjunction with server-guided
backoff, where servers dynamically adjust their rates to help utilize the available bandwidth.
Recently, many of these rate-based disciplines have sprouted analogues for CPU scheduling. For
example, Waldspurger et al. [32] proposed "Lottery Scheduling," which multiplexes available CPU
load based on the relative throughput rates for the system's constituent tasks. The same authors
also presented a deterministic variant of this, called "Stride Scheduling" [33]; this method provides
an OS-level server for a method similar to the Weighted-Fair-Queuing (WFQ) discipline used in
switches. WFQ - also known as "Packetized GPS" (or PGPS) - is a discrete, quantized version
of the fluid-flow abstraction used in GPS. Scheduling decisions in WFQ are made via "simulating"
proportional-sharing for the tasks on the ready-queue, under an idealized model of continuous-time
multiplexing. The task which would hypothetically finish first under GPS gets the highest priority
- and is put on the run queue (until the next scheduling round). Stoica et al. [29] proposed a
related technique, which similarly uses a "virtual time-line" to determine the runtime dispatching
order. This concept was also applied for hierarchical scheduling in [13], where multiple classes of
tasks (e.g., hard and soft real-time applications) can coexist in the same system.
Several schemes have been proposed to guarantee processor capacity shares for the system's
real-time tasks, and to simultaneously isolate them from overruns caused by other tasks in the
system. For example, Mercer et al. [22] proposed a processor capacity reservation mechanism to
achieve this, a method which enforces each task's reserved share within its reservation period, under
micro-kernel control. Also, Shin et al. [28] proposed a reservation-based algorithm to guarantee the
performance of periodic real-time tasks, and also to improve the schedulability of aperiodic tasks.
As noted above, many proportional-share methods have been subjected to response-time studies,
for different types of arrival processes. This has been done for switches, CPUs, and for entire
networks. Note that the problem of determining aggregate delay in a network is dual to the
problem of assigning per-hop delays to achieve some end-to-end deadline. The latter is a "top-
down" approach: the designer "tells" the network what its per-hop latencies should be, and then
the network needs to guarantee those latencies. Delay analysis works in a "bottom-up" fashion: the
network basically "tells" the user what the end-to-end delay will be, given the proportional-share
allocations for the chain under observation.
While seemingly different, these problems are inextricably related. "Top-down" deadline-
partitioning could not function without some way of getting "bottom-up" feedback. Similarly,
the "bottom-up" method assumes a pre-allocated load for the chain - which, in reality, is negotiated
to meet the chain's end-to-end latency and throughput requirements. In real-time domains,
solving one problem requires solving the other.
Deriving the end-to-end latency involves answering the following question: "If my chain flows
through N nodes, each of which is managed by a rate-based discipline, what will the end-to-end
response time be?" This issue is quite simple when all arrival processes are Poisson streams, service
times are exponentially distributed, and all nodes use a FIFO service discipline. For a simple
Jackson queuing network like this, many straightforward product-form techniques can be applied.
The question gets trickier for linearly regulated traffic, where each stream has a different arrival
rate, with deviations bounded over different interval-sizes, and where each stream has different
proportional service guarantees. Fortunately, compositional results do exist, and have been presented
for various rate-based disciplines - for both deterministic [12, 24, 35] and statistical [39, 37]
workloads. Deterministic, end-to-end per connection delays were considered in [24] for leaky-bucket
regulated traffic, using the PGPS scheduling technique. In [35] a similar study was performed using
a non-work-conserving service discipline. Also, as noted above, statistical treatments have been
provided for the PGPS [39] and for RCSP [37].
In Section 4 we present an analytical approximation for our TDM abstraction - perhaps the extreme
case of a non-work-conserving discipline. The method is used to estimate end-to-end delays
over products of TDM queues; where a chain's load demand at a node can be generally distributed;
where all tasks in a chain can have different PDFs; and where queue sizes are constrained to a
single slot.
This problem is innately complex. Moreover, our design algorithm needs to test huge numbers
of solution candidates before achieving the system objectives. Hence, the delay analysis should be
fast - and will consequently be coarse. At the same time, it cannot be too coarse; after all, it must
be sufficiently accurate to expose key performance trends over the entire solution space. As shown
in Section 4, we approach this problem in a compositional, top-down fashion. Our algorithm starts
by analyzing a chain's head task in isolation. The resulting statistics are then used to help analyze
the second task, etc., down the line, until delay and throughput metrics are obtained for the chain's
output task - and hence, for the chain as a whole.
3 Model and Solution Overview
As stated above, we model a system as a set of independent, pipelined task chains, with every task
mapped to a designated CPU or network resource. The chain abstraction can, for example, capture
the essential data path structure of a video conferencing system, or a distributed process control
loop. Formally, a system possesses the following structure and constraints.
Bounded-Capacity Resources: There is a set of resources, where each r_i corresponds to one of the system's CPUs or network links. Associated with r_i is a maximum allowable capacity, or \rho^m_i, which is the maximum load the resource can multiplex effectively. The maximum capacity will typically be a function of the resource's scheduling policy (as in the case of a workstation), or its switching and arbitration policies (in the case of a LAN).
[Figure 1: Example System Topology.]
Task Chains: A system has n task chains, denoted \Gamma_1, ..., \Gamma_n, where \tau_{i,j} denotes the jth task in a chain \Gamma_i. Each computation on \Gamma_i carries out an end-to-end transformation from its external input X_i to its output Y_i. Also, a producer/consumer relationship exists between each connected pair of tasks \tau_{i,j-1} and \tau_{i,j}, and we assume a one-slot buffer between each such pair, since the queuing policy chooses only the newest data in the buffer. Hence a producer may overwrite buffered data which is not consumed.
Stochastic Processing Costs: A task's cost is modeled via a discrete probability distribution
function, whose random variable characterizes the time it needs for one execution instance on its
resource.
Maximum Delay Bounds: \Gamma_i's delay constraint, MD_i, is an upper bound on the time it should take for a computation to flow through the system, and still produce a useful result. For example, MD_i = 500ms means that if \Gamma_i produces an output at time t, it will only be used if the corresponding input is sampled no earlier than t - 500ms. Computed results that exceed this propagation delay are dropped.
Minimum Output Rates: \Gamma_i's rate constraint, MOR_i, specifies a minimum acceptable average rate for outputs which meet their delay constraints. For example, MOR_i = 10 means that the chain \Gamma_i must, on average, produce 10 outputs per second. Moreover, MOR_i implicitly specifies the maximum possible frame-size for the tasks in \Gamma_i; e.g., if MOR_i = 10, the maximum frame-size is 0.1s - which would suffice only if an output were produced during every frame.
An Example. Consider the example shown in Figure 1, which possesses six chains, labeled \Gamma_1-\Gamma_6; rectangles denote shared resources, black circles denote tasks, and the shaded boxes are external inputs and outputs. The system's resource requirements and end-to-end constraints are shown in Figure 2.
[Figure 2: Constraints in Example - per-task resource usage PDFs (the base distribution each is derived from, E[t] in ms, Var[t], [Min,Max], and NumSteps), the end-to-end constraints, and the maximum resource utilizations.]
In any system, a task's load demand varies stochastically, due to second-order effects like cache
memory behavior, DMA interference, pipeline stalls, bus arbitration delays, transient head-of-line
blocking, etc. By using one random variable to model a task's load, we essentially collapse all
these residual effects into a single PDF, which also accounts for the task's idealized "best-case"
execution-time. In our model, any discrete probability distribution can be used for this purpose.
Two points should be articulated here, the first of which is fairly obvious: If the per-task load
models are ill-founded, then the synthesis results will be of little use. Indeed, in some situations one
cannot just convolve all the abovementioned effects into a single PDF. While it may be tempting
to stochastically bundle "driver processing interference" with a task's load model - and while often
one can do just that - in other situations, this sort of factor needs to be represented explicitly, as
a task. (Our method can easily accommodate this alternative.) The process of obtaining a good
model abstraction is non-trivial; it requires accounting for matters like causality (i.e., charging load
deviations to the tasks which cause them), scale (i.e., comparing the task's loading statistics -
mean and variance - to those of the residual effects charged to it), and sensitivity (i.e., statistically
gauging the effect of load quantization on end-process results). These problems are well outside the
scope of this paper; interested readers should consult [16] for a decent introduction to statistical
performance modeling. However, while none of these problems is trivial, given sufficient time,
patience and statistical competence, one can employ some standard techniques for handling all of
them.
The second point is a bit more subtle, though equally true: For the purposes of design, a coarse
load model - represented with a single stationary distribution - is better than no load model. In
this sort of systems design, one needs to start with some basic notion of the per-task performance.
The alternative is designing the system based on blind guesswork.
Consider a task representing an inner-product function, where the vector-sizes can range between
10,000-20,000 elements, and where the actual vector-sizes change dynamically. Assume we
profile the task on a representative data-set, and the resulting load demands appear to track the
inner two quartiles of a normal distribution. Can we just quantize the resulting histogram, and
use it to model the task's PDF? Assume we cannot explicitly model vector-size as a controlled
variable. Perhaps its value is truly nondeterministic. Perhaps we just choose to treat it that way,
since its addition contributes little marginal accuracy to the pipeline's estimated departure process
statistics. In such cases, the quasi-normal distribution may, in fact, be the best model we can
achieve for our inner-product. And, it may be sufficient for the purposes of the design, and lead to
reasonable results in the real system. Our objective is to use these models as a tool - to abstractly
predict trends in our synthesis algorithms. Our objective is not to directly represent the system
itself.
To obtain a load model, one can often appeal to the method outlined above, a method commonly
used in Hardware/Software Co-Design: a task is profiled in isolation, and the resulting histogram
gets post-processed into a stationary response-time distribution. (We have experienced good results
via this method in our multi-platform simulation work on digital video playout systems [10]). A
second technique can be used at a more preliminary stage: the designer can coarsely estimate a
task's average load, and use it to create a synthetic distribution - e.g., exponential, normal, chi-
square, etc. Such load models would correspond to hypotheses on how a task might behave in the
integrated system. When this is done for all tasks - with the PDFs quantized at some acceptable
level - the results can be fed into the synthesis tool. If the system topology can handle a number
of hypothetical per-task load profiles, a designer can gain a margin of confidence in the system's
robustness to subtle changes in loading conditions.
We take the latter approach in our running example: we discretize two very different continuous
distributions, and then re-use the results for multiple tasks. In Figure 2, "Derived From" denotes a
base continuous distribution generated and quantized using the parameters Min, Max, E[t] (mean), Var[t] (variance) and "NumSteps" (number of intervals). In the case of an exponential distribution, the CDF curve is shifted up to "Min," and the probabilities given to values past "Max" are distributed to each interval proportionally. The granularity of discretization is controlled by "NumSteps," where we assume that the execution time associated with an interval is the maximum possible value within that interval. For example, the time requirement for \tau_{1,1} follows a normal distribution,
with 10ms mean, 8ms standard deviation, a minimum time of 4ms, and a maximum time of 35 ms.
The continuous distribution is chopped into 10 intervals; since the first interval contains all times
within [4ms; 7ms], we associate a time of 7ms with this interval, and give it the corresponding
portion of the continuous CDF.
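To make the quantization procedure concrete, here is one plausible coding of it; the exact treatment of the mass below "Min" and past "Max" may differ from the paper's, and the helper names are our own:

    import math

    def normal_cdf(x, mean, std):
        return 0.5 * (1.0 + math.erf((x - mean) / (std * math.sqrt(2.0))))

    def quantize(cdf, lo, hi, num_steps):
        # Chop [lo, hi] into num_steps intervals; represent each by its maximum value
        # (pessimistic), and fold the tail mass outside [lo, hi] back proportionally.
        width = (hi - lo) / num_steps
        points, probs = [], []
        for k in range(num_steps):
            a, b = lo + k * width, lo + (k + 1) * width
            points.append(b)
            probs.append(cdf(b) - cdf(a))
        covered = sum(probs)
        tail = 1.0 - covered
        probs = [p + tail * (p / covered) for p in probs]
        return list(zip(points, probs))

    # tau_{1,1}: normal with mean 10ms, std 8ms, range [4ms, 35ms], 10 steps
    pmf = quantize(lambda x: normal_cdf(x, 10.0, 8.0), 4.0, 35.0, 10)
    print(pmf[0])    # first interval, represented by roughly 7ms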
Note that if we attempt a hard real-time approach here, no solution would be possible. For example, \tau_{3,1} requires up to 200ms of dedicated resource time to run, which means that \tau_{3,1}'s frame must be no less than 200ms. Yet the MOR_3 constraint requires that \Gamma_3 produce a new output at least 5 times per second. This, in turn, means \tau_{3,1}'s frame can also be no greater than 200ms. But if the frame is exactly 200ms, the task induces a utilization of 1.0 on resource r_1 - exceeding the resource's intrinsic 0.9 limit, and disallowing any capacity for other tasks hosted on it.
[Figure 3: Design Overview - from the system flow graph, end-to-end constraints and resource distributions, candidate load and frame assignments are produced via throughput approximation; chains whose MOR remains unsatisfied trigger a quantum increase, the candidate system is checked by simulation for constraint satisfaction, and infeasible systems are reported as failures.]
3.1 Run-Time Model
Within the system model, all tasks in chain \Gamma i are considered to be scheduled in a quasi-cyclic
fashion, using a time-division multiplexing abstraction for resource-sharing, over F i -sized frames.
That is, all of \Gamma i 's load-shares are guaranteed for F i intervals on all constituent resources. Hence,
the synthesis algorithm's job is to (1) assign each task \tau_{i,j} a proportion of its resource's capacity (which we denote as u_{i,j}) and (2) assign a global frame F_i for \Gamma_i. Given this, \tau_{i,j}'s runtime behavior
can be described as follows:
(1) Within every F_i frame, \tau_{i,j} can use up to u_{i,j} of its resource's capacity. This is policed by assigning \tau_{i,j} an execution-time budget E_{i,j} = \lfloor u_{i,j} \cdot F_i \rfloor, which is an upper bound on the amount of resource time provided within each F_i frame, truncated to discrete units. (We assume that the system cannot keep track of arbitrarily fine granularities of time.)
(2) A particular execution instance of \tau_{i,j} may require multiple frames to complete, with E_{i,j} of its running time expended in each frame.
(3) A new instance of \tau_{i,j} will be started within a frame if no previous instance of \tau_{i,j} is still running, and if \tau_{i,j}'s input buffer contains new data that has not already exceeded MD_i. This is policed by a time-stamp mechanism, put on the computation when its input is sampled.
Due to a chain's pipeline structure, note that if there are n_i tasks in \Gamma_i, then we must have n_i \cdot F_i \le MD_i, since data has to flow through all n_i elements to produce a computation.
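The budget and pipeline rules above can be captured in two small helpers; the floor-based budget and the pipeline check mirror the rules just described, while the function names are ours (a sketch, not the synthesis tool itself):

    import math

    def execution_budget(u_ij, frame_ms):
        # Per-frame execution-time budget E_{i,j} = floor(u_{i,j} * F_i): the runtime
        # cannot multiplex time at arbitrarily fine granularities.
        return math.floor(u_ij * frame_ms)

    def frame_is_feasible(frame_ms, n_tasks, max_delay_ms):
        # A chain of n_i tasks needs n_i frames for data to flow end-to-end,
        # so n_i * F_i must not exceed MD_i.
        return n_tasks * frame_ms <= max_delay_ms

    print(execution_budget(0.11, 10))       # -> 1, i.e., usable utilization is only 0.1
    print(frame_is_feasible(30, 2, 150))    # -> True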
3.2 Solution Overview
A schematic of the design process is illustrated in Figure 3, where the main steps are as follows.
(1) partitioning the CPU and network capacity between the tasks; (2) selecting each chain's frame
to optimize its output rate; and (3) checking the solution via simulation, to verify the integrity of
the approximations used, and to ensure that each chain's output profile is sufficiently smooth (i.e.,
not bursty).
The partitioning algorithm processes each chain \Gamma_i and finds a candidate load-assignment vector for it, denoted u_i. (An element u_{i,j} in u_i contains the load allocated to \tau_{i,j} on its resource.) Given a load assignment for \Gamma_i, the synthesis algorithm attempts to find a frame F_i at which \Gamma_i achieves its optimal output rate. This computation is done approximately: For a given F_i, a rate estimate is derived by (1) treating all of \Gamma_i's outputs uniformly, and deriving an i.i.d. per-frame "success probability" \lambda_i; and (2) simply multiplying \lambda_i \times 1/F_i to approximate the chain's output rate, OR_i. If OR_i is lower than the MOR_i requirement, the load assignment vector is increased, and so
on. Finally, if sufficient load is found for all chains, the resulting system is simulated to ensure that
the approximations were sound - after which excess capacity can be given to any chain, with the
hope of improving its overall rate.
4 Throughput Analysis
In this section we describe how we approximate \Gamma_i's output rate OR_i, given candidate load and frame parameters for the chain. Then in Section 5, we show how we make use of this technique to derive the F_i and u_i parameters for every chain in the system.
Assume we are currently processing \Gamma_i, which has some frame-size F_i and load vector u_i. How do we estimate its output probability \lambda_i? Recall that outputs exceeding the maximum allowed delay
MD i are not counted - and hence, we need some way of determining latency through the system.
One benefit of proportional-share queuing is as follows: Since each chain is effectively isolated from
others over F i intervals of observation, we can analyze the behavior of \Gamma i independently, without
worrying about head-of-line blocking effects from other components.
We use a discrete-time model, in which the time units are in terms of a chain's frame-size; i.e., our discrete domain \{0, 1, 2, ...\} corresponds to the real times \{0, F_i, 2F_i, ...\}. Not only does this reduction make the analysis more tractable, it also corresponds to "worst-case" conditions: Since the underlying system may schedule a task execution at any time within an F_i frame, we assume that input may be read as early as the beginning of a frame, and output may be produced as late as the end of a frame. And with one exception, a chain's states of interest do occur at its frame boundaries. That exception is in modeling aggregate delay - which, in our discrete time domain, we treat as \lfloor MD_i / F_i \rfloor. Hence the fractional part of the last frame is ignored, leading to a tighter notion of success (and consequently erring on the side of conservatism).
Theoretically, we could model a computation's delay by constructing a stochastic process for
the chain as a whole - and solving it for all possible delay probabilities. But this would probably be
a futile venture for even smaller chains; after all, such a model would have to simultaneously keep
track of each task's local behavior. And since a chain may hold as few as 0 ongoing computations,
and as many as one computation per task, it's easy to see how the state-space would quickly
explode.
Instead, we go about constructing a model in a compositional (albeit inexact) manner, by
processing each task locally, and using the results for its successors. Consider the following diagram,
which portrays the flow of a computation at a single task:
[Diagram: an input carrying accumulated age age_{i,j-1} arrives at \tau_{i,j}, waits for B_{i,j} frames, is processed for \Psi_{i,j} frames, and leaves with age age_{i,j}; \DeltaOut_{i,j} denotes the spacing between successive outputs.]
These random variables are defined as follows:
1. Data-age (age_{i,j}): This variable charts a computation's total accumulated time, from entering \Gamma_i's head, to leaving \tau_{i,j}.
2. Blocking time (B_{i,j}): The duration of time an input is buffered, waiting for \tau_{i,j} to complete its current computation.
3. Processing time (\Psi_{i,j}): If t_{i,j} is a random variable ranging over \tau_{i,j}'s PDF, then \Psi_{i,j} = \lceil t_{i,j} / F_i \rceil is the corresponding variable in units of frames.
4. Inter-output time (\DeltaOut_{i,j}): An approximation of \tau_{i,j}'s inter-output distribution, in terms of frames; it measures the time between two successive outputs.
We assume data is always ready at the chain's head; hence age_{i,j} can be approximated via the following recurrence relation:
  age_{i,1} = \Psi_{i,1};   age_{i,j} = age_{i,j-1} + B_{i,j} + \Psi_{i,j}  (for j > 1).
And for j > 1, we approximate the entire age_{i,j} distribution by assuming the three variables to be independent, i.e.,
  Pr[age_{i,j} = t] = \sum_{t_1 + t_2 + t_3 = t} Pr[age_{i,j-1} = t_1] \cdot Pr[B_{i,j} = t_2] \cdot Pr[\Psi_{i,j} = t_3].
Note that \tau_{i,j}'s success probability, \lambda_{i,j}, will then be the probability that a (non-stale) output is produced during a random frame. After processing the final task in the chain, \tau_{i,n_i}, we can approximate the end-to-end success probability - which is just \tau_{i,n_i}'s output probability, appropriately scaled by the probability of excessive delay injected during \tau_{i,n_i}'s execution:
  \lambda_i = \lambda_{i,n_i} \cdot Pr[age_{i,n_i} \le d_i].
At this point the end-to-end success rate is estimated as OR_i = \lambda_i \cdot 1/F_i.
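One way to carry out this compositional step is by discrete convolution of the frame-valued PMFs, as sketched below; the toy numbers are purely illustrative and are not taken from the example system:

    # PMFs are dicts mapping frame counts to probabilities. age_{i,j} is approximated
    # as the sum of three independent variables, so its PMF is a double convolution.
    def convolve(p, q):
        out = {}
        for a, pa in p.items():
            for b, qb in q.items():
                out[a + b] = out.get(a + b, 0.0) + pa * qb
        return out

    def age_distribution(age_prev, blocking, processing):
        return convolve(convolve(age_prev, blocking), processing)

    def end_to_end_success(lam_last, age_last, d_frames):
        # lambda_i = lambda_{i,n_i} * Pr[age_{i,n_i} <= d_i]
        return lam_last * sum(p for t, p in age_last.items() if t <= d_frames)

    age1 = {1: 0.7, 2: 0.3}          # head task: age distribution equals Psi_{i,1}
    B2   = {0: 0.6, 1: 0.4}
    Psi2 = {1: 0.8, 2: 0.2}
    print(end_to_end_success(0.5, age_distribution(age1, B2, Psi2), d_frames=3))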
Note that our method is "top-down," i.e., statistics are derived for \tau_{i,1}, then \tau_{i,2} (using the synthesized metrics from \tau_{i,1}), then \tau_{i,3}, etc. Also note that when processing \tau_{i,1}, we already have all the information we require - from its PDF. In other words, we trivially have Pr[\DeltaOut_{i,1} = t] = Pr[age_{i,1} = t] = Pr[\Psi_{i,1} = t]: since \tau_{i,1} can retrieve fresh input whenever it is ready (and therefore incurs no blocking time), \tau_{i,1} can execute a new phase whenever it finishes with the previous phase.
[Figure 4: Chain X_{i,j}(t), with max(\Psi_{i,j}) = 6 frames.]
Blocking-Time. Obtaining reasonable blocking-time metrics at each stage is a non-trivial affair,
especially when longer-tailed distributions are involved. In carrying out the analysis, we consider a
stochastic process X_{i,j}, whose states and transitions characterize two factors: (1) \tau_{i,j}'s remaining execution time until the next task instance, and (2) whether an input is to be processed or dropped. X_{i,j}'s transitions can be described using a simple Markov chain, as shown in Figure 4 (where the maximum execution time is 6F_i). The transitions are event-based, i.e., they are triggered by new inputs from \tau_{i,j-1}. On the other hand, states measure the remaining time left in a current execution, if there is any. In essence, moving from state k to l denotes that (1) a new input was just received; and (2) it will induce a blocking time of l frames. For the sake of our analysis, we distinguish between three different outcomes on moving from state k to state l. In the transition descriptions, we use the term d_i to denote the end-to-end delay bound, in terms of frames, i.e., d_i = \lfloor MD_i / F_i \rfloor.
(1) Dropping [k -> l]: The task is currently executing, and there is already another input queued up in its buffer - which was calculated to induce a blocking-time of k. The new input will overwrite it, and induce l blocking-time frames.
(2) Failure [k -> 0]: A new input arrives, but it will be too stale to get processed by the finish-time of the current execution.
(3) Success [k -> l]: A new input arrives, and will get processed with blocking time l. Figure 5 illustrates a case of a successful transition.
[Figure 5: A successful transition k -> l, in which the new input arrives \DeltaOut_{i,j-1} frames after its predecessor and induces l frames of blocking time; the transition probability is computed case-by-case, depending on whether the destination state is 0 or some l > 0.]
We distinguish between outcomes (1)-(3) via partitioning the state-transition matrix - i.e., P_d, P_f and P_s denote the transition matrices for dropping, failure and success, respectively. Each is calculated in terms of the parameters we discussed above - i.e., age_{i,j-1}, \DeltaOut_{i,j-1} and \Psi_{i,j}. After getting the complete transition matrix, P = P_d + P_f + P_s, we solve for the steady-state probabilities in the usual fashion, i.e., for the vector x satisfying x = xP. In turn, the steady-state probabilities are used to derive \tau_{i,j}'s per-frame success probability \lambda_{i,j}, where the k index denotes the kth element in the resulting vector. In essence, the calculation just computes the probability of (1) having an input to read at a random frame, and (2) successfully processing it - which is obtained by summing up the successful out-flow probabilities, i.e., the entries of the vector xP_s. The same simple Bayesian method is used to obtain a stationary blocking-time PDF, Pr[B_{i,j} = l] for l >= 0.
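The steady-state step can be sketched as follows: power-iterate x = xP, then sum the successful out-flow mass; the 3-state matrices below are invented purely for illustration and are not taken from the example:

    # Power iteration for the stationary vector x of the blocking-time Markov chain,
    # followed by the "successful out-flow" sum described in the text.
    def stationary(P, iters=1000):
        n = len(P)
        x = [1.0 / n] * n
        for _ in range(iters):
            x = [sum(x[k] * P[k][j] for k in range(n)) for j in range(n)]
        return x

    P_s = [[0.5, 0.2, 0.0], [0.4, 0.1, 0.0], [0.3, 0.0, 0.0]]   # success transitions
    P_f = [[0.1, 0.0, 0.0], [0.2, 0.0, 0.0], [0.3, 0.0, 0.0]]   # failure (stale input)
    P_d = [[0.0, 0.1, 0.1], [0.0, 0.2, 0.1], [0.0, 0.2, 0.2]]   # dropped (overwritten)
    P   = [[P_s[k][j] + P_f[k][j] + P_d[k][j] for j in range(3)] for k in range(3)]

    x = stationary(P)
    success_prob = sum(x[k] * sum(P_s[k]) for k in range(3))
    print(x, success_prob)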
The final ingredient is to derive task \tau_{i,j}'s inter-output distribution, \DeltaOut_{i,j}. To do this we use a coarse mean-value analysis: After \tau_{i,j} produces an output, we know that it goes through an idle phase (waiting for fresh input from its producer), followed by a busy phase, culminating in another output. Let Idle_{i,j} be a random variable which counts the number of idle cycles before the busy phase, whose mean E[Idle_{i,j}] follows from the statistics derived above. Using this information, we model the event denoting "compute-start" as a pure Bernoulli decision with probability ST_{i,j}, where ST_{i,j} = 1 / (E[Idle_{i,j}] + 1); i.e., after an output has been delivered, ST_{i,j} is the probability that a random cycle starts the next busy phase. We then approximate the idle durations via a modified geometric distribution:
  Pr[Idle_{i,j} = k] = (1 - ST_{i,j})^k \cdot ST_{i,j}.
Then we derive the distribution for Pr[\DeltaOut_{i,j}] by combining the idle and busy phases:
  Pr[\DeltaOut_{i,j} = t] = \sum_{k + m = t} Pr[Idle_{i,j} = k] \cdot Pr[\Psi_{i,j} = m].
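Under this Bernoulli compute-start model, the idle-phase and inter-output distributions could be built as below; the paper's "modified" geometric may handle the boundaries differently, so this is only a sketch with a truncation rule of our own:

    def idle_pmf(st, max_k):
        # Geometric idle model: Pr[Idle = k] = (1 - ST)^k * ST, with the residual
        # tail folded into the last bin (our truncation choice).
        pmf = {k: (1 - st) ** k * st for k in range(max_k)}
        pmf[max_k] = 1.0 - sum(pmf.values())
        return pmf

    def delta_out_pmf(idle, busy):
        # Inter-output time ~ idle phase + busy (processing) phase.
        out = {}
        for k, pk in idle.items():
            for m, pm in busy.items():
                out[k + m] = out.get(k + m, 0.0) + pk * pm
        return out

    st = 1.0 / (2.5 + 1)       # ST = 1 / (E[Idle] + 1), with an assumed E[Idle] = 2.5
    print(delta_out_pmf(idle_pmf(st, 10), {1: 0.8, 2: 0.2}))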
Finally, we approximate the end-to-end success probability, which is just \tau_{i,n_i}'s output probability, appropriately scaled to account for excessive delay injected during \tau_{i,n_i}'s execution: \lambda_i = \lambda_{i,n_i} \cdot Pr[age_{i,n_i} \le d_i]. So by definition, the end-to-end output rate is given as OR_i = \lambda_i / F_i.
Example. As an example, we perform throughput estimation on \Gamma_6, assuming system parameters that include a frame-size of F_6 = 30ms. Recall that the delay bound for the chain, in frames, is d_6 = 5. Within our head-to-tail approach, we first have to consider task \tau_{6,1}. Recall, however, that the distributions for both age_{6,1} and \DeltaOut_{6,1} are identical to \Psi_{6,1}, the quantized load distribution, so \lambda_{6,1} follows directly from them. Next we consider the second (and last) task \tau_{6,2}. The following tables show the PDFs for \Psi_{6,2}, for the Markov chain's steady states, the blocking times, and for age_{6,2}.
[Table values omitted.]
Summing up the successful out-flow probabilities yields \lambda_{6,2}, and hence the chain's i.i.d. success probability is \lambda_6 = \lambda_{6,2} \cdot Pr[age_{6,2} \le d_6]. Multiplying by the frame-rate 1/F_6, we get the estimated output rate OR_6.
5 System Design Process
We now revisit the "high-level" problem of determining the system's parameters, with the objective
of satisfying each chain's performance requirements. As stated in the introduction, the design
problem may be viewed as two inter-related sub-problems:
1. Load Assignment. Given a set of chains, how should the CPU and network load be partitioned
among the set of tasks, so that the performance requirements are met?
2. Frame Assignment. Given a load-assignment to the tasks in the chain, what is the optimal
frame for the chain, such that the effective throughput is maximized?
Note that load-allocation is the main "inter-chain" problem here, while frame-assignment can be
viewed strictly as an "intra-chain" issue. With our time-division abstraction, altering a chain's
frame-size will not effect the (average) rates of other chains in the system.
Consider the synthesis algorithm in Figure 6, and note that the F_i's (expressed in milliseconds) are initialized, for all 1 <= i <= n, to the largest frame-sizes that could achieve the desired output rates. Here \DeltaF and the global time-scale are parameters of the algorithm; in our example the time-scale was chosen as 1000, since all units were in milliseconds. Also note that for any task \tau_{i,j}, its resource share u_{i,j} is initialized to accommodate the corresponding mean response-time E[t_{i,j}]. (The system could only be solved with these initial parameters if all execution times were constant.)
[Figure 6: Synthesis Algorithm - initialize each F_i and each u_{i,j} as above; then, while the set S of unsatisfied chains is nonempty, find the \tau_{i,j} in S's chains which maximizes the weight w_{i,j} and increase its load; if no capacity remains, return Failure; otherwise "Get Frame" recomputes the chain's frame and estimated output rate, and chains meeting their requirements are removed from S.]
Load Assignment. Load-assignment works by iteratively refining the load vectors (the u i 's),
until a feasible solution is found. The entire algorithm terminates when the output rates for all
chains meet their performance requirements - or when it discovers that no solution is possible. We
do not employ backtracking, and a task's load is never reduced. This means the solution space is not
searched totally, and in some tightly-constrained systems, potential feasible solutions may not be
found.
Load-assignment is task-based, i.e., it is driven by assigning additional load to the task estimated
to need it the most. The heart of the algorithm can be found on lines (6)-(7), where all of the
remaining unsolved chains are considered, with the objective of assigning additional load to the "most deserving" task in one of those chains. This selection is made using a heuristic weight w_{i,j}, reflecting the potential benefit of increasing \tau_{i,j}'s utilization, in the quest of increasing the chain's
end-to-end performance.
The weight actually combines three factors, each of which plays a part in achieving feasibility: (1) the additional output rate required, normalized via range-scaling to the interval [0,1]; (2) the current/maximum capacities for the task's resource (where current capacity is denoted as \rho_k for resource r_k); and (3) the task's current load assignment. The idea is that a high load assignment indicates diminishing returns are setting in, and working on a chain's other tasks would probably be more beneficial. For the results in this paper, the heuristic we used was:
  w_{i,j} = (normalized additional rate required for \Gamma_i) \times (\rho^m_k - \rho_k) \times (1 - u_{i,j}).
Then the selected task gets its utilization increased by some tunable increment. Smaller increments will obviously lead to a greater likelihood of finding feasible solutions; however, they also incur a higher cost. (For the results presented in this paper, we used a small fixed increment.) After additional load is given to the selected task, the chain's new frame-size and rate parameters are determined; if it meets its minimum output requirements, it can be removed from further consideration.
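The selection step on lines (6)-(7) can be sketched as follows; the weight mirrors the three factors listed above, while the data structures and the 0.01 increment are illustrative choices of our own:

    INCREMENT = 0.01

    def weight(rate_gap_norm, res_max, res_used, u_ij):
        # (normalized extra rate needed) x (spare resource capacity) x (1 - current load)
        return rate_gap_norm * (res_max - res_used) * (1.0 - u_ij)

    def assign_load(unsatisfied, resources):
        # unsatisfied: list of dicts with keys task, resource, rate_gap_norm, u
        # resources:   dict resource -> (used, max)
        best = max(unsatisfied,
                   key=lambda t: weight(t["rate_gap_norm"],
                                        resources[t["resource"]][1],
                                        resources[t["resource"]][0], t["u"]))
        used, cap = resources[best["resource"]]
        if used + INCREMENT > cap:
            return None                      # no spare capacity: report failure
        best["u"] += INCREMENT
        resources[best["resource"]] = (used + INCREMENT, cap)
        return best

    res = {"r1": (0.5, 0.9)}
    tasks = [{"task": "t11", "resource": "r1", "rate_gap_norm": 0.4, "u": 0.3},
             {"task": "t12", "resource": "r1", "rate_gap_norm": 0.2, "u": 0.1}]
    print(assign_load(tasks, res)["task"])   # -> t11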
Frame Assignment. "Get Frame" derives a feasible frame (if one exists). While the problem of
frame-assignment seems straightforward enough, there are a few non-linearities to surmount: First,
the true, usable load for a task is given by \lfloor u_{i,j} \cdot F_i \rfloor / F_i, due to the fact that the system cannot multiplex load at arbitrarily fine granularities of time. Second, in our analysis, we assume that the
effective maximum delay is rounded up to the nearest frame, which errs on the side of conservatism.
The negative effect of the second factor is likely to be higher at larger frames, since it results
in truncating the fractional part of a computation's final frame. On the other hand, the first
factor becomes critical at smaller frames. Hence, the approximation utilizes a few simple rules.
Since loads are monotonically increased, we restrict the search to frames which are lower than the
current one. Further, we restrict the search to situations where our frame-based delay estimate
truncates no more than \alpha \times 100 percent of the continuous-time deadline, where \alpha \ll 1. Subject to these guidelines, frames are evaluated via the throughput analysis presented in Section 4, and the candidate achieving the best estimated output rate determines the current frame assignment.
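The "Get Frame" search can be approximated as below; estimate_rate() stands in for the Section 4 analysis and is assumed rather than shown, and the 5% truncation tolerance is just an illustrative value for alpha:

    ALPHA = 0.05

    def get_frame(current_frame, max_delay, n_tasks, estimate_rate, step=1):
        # Search frames no larger than the current one; reject candidates that leave
        # no room for the pipeline or truncate more than ALPHA of the deadline.
        best_frame, best_rate = None, -1.0
        f = current_frame
        while f >= step:
            truncated = max_delay - (max_delay // f) * f
            if n_tasks * f <= max_delay and truncated <= ALPHA * max_delay:
                rate = estimate_rate(f)          # Section 4 throughput analysis
                if rate > best_rate:
                    best_frame, best_rate = f, rate
            f -= step
        return best_frame, best_rate

    # toy rate model: smaller frames help until quantization overhead dominates
    print(get_frame(100, 500, 2, lambda f: 1.0 / f if f >= 20 else 0.0, step=10))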
Design Process - Solution of the Example. We ran the algorithm in Figure 6 to find a feasible
solution, and the result is presented in Figure 7.
On a SPARC Ultra, the algorithm synthesized parameters for the example within minutes of wall-clock time. The results are presented in Figure 7. Note that the resources have capacity to spare; r_1 has the highest load in this configuration (at 0.72), r_7 has the lowest (at 0.35);
most others are around 50% loaded. The spare load can be used to increase any chain's output
rate, if desired - or held for other chains to be designed-in later.
[Figure 7: Synthesized Solution of the Example System. A. Synthesized solutions for chains (values omitted). B. Resource capacity used by the system: r_1: 0.72, r_2: 0.39, r_3: 0.45, r_4: 0.4, r_5: 0.65, r_6: 0.52, r_7: 0.35, r_8: 0.62, r_9: 0.65, r_10: 0.65.]
6 Simulation
Since the throughput analysis uses some key simplifying approximations, we validate the resulting
solution via a simulation model.
Recall that the analysis injects imprecision for the following reasons. First, it tightens all (end-to-end) output delays by rounding up the fractional part of the final frame. Analogously, it assumes
that a chain's state-changes always occur at its frame boundaries; hence, even intermediate output
times are assumed to take place at the frame's end. A further approximation is inherent in our
compositional data-age calculation - i.e., we assume the per-frame output ratios from predecessor
tasks are i.i.d., allowing us to solve the resulting Markov chains in a quasi-independent fashion.
The simulation model discards these approximations, and keeps track of all tagged computations
through the chain, as well as the "true" states they induce in their participating tasks. Also, the
clock progresses along the real-time domain; hence, if a task ends in the middle of a frame, it gets
placed in the successor's input buffer at that time. Also, the simulation model schedules resources
using a modified deadline-monotonic dispatcher (where a deadline is considered the end of a frame),
so higher-priority tasks will get to run earlier than the analytical method assumes. Recall that the
analysis implicitly assumes that computations may take place as late as possible, within a given
frame.
On the other hand, the simulator does inherit some other simplifications used in our design.
For example, input-reading is assumed to happen when a task gets released, i.e, at the start of a
frame. As in the analysis, context switch overheads are not considered; rather, they are charged to
the load distributions. Figure 8 summarizes the differences between the two models.
[Figure 8: Comparison between the analysis model and the simulation model.
                      Simulation                         Analysis
  Maximum delay       MD_i                               \lfloor MD_i / F_i \rfloor \times F_i
  State change        Measured                           Frame boundary
  Output rate         Measured                           i.i.d.
  Scheduling          Frame-based, deadline monotonic    As late as possible
  Data reading time   Start of a frame                   Start of a frame]

Validation of the Design. The following table compares the analytical throughput estimates with those derived via simulation. Simulated rates are displayed with 95% confidence-intervals over 100 trials, where each trial ran over 100,000 frames (for the largest frame in the system). The last column shows the standard deviation for output-rates calculated over t-second moving averages w.r.t. the simulated rate (e.g., t = 1.0). This means that for 1-second intervals our tool does the following: (1) \Gamma_i's output rate is charted over all 1-second intervals; (2) each interval's deviation from OR_i is calculated; (3) the sum-of-squared deviations is obtained, and then divided by the degrees of freedom in the sample; (4) the square-root of the result is produced.
[Table: analytical vs. simulated output rates per chain, with standard deviations for moving averages over t-second intervals; numeric entries omitted.]
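The four-step deviation computation just described amounts to the following, shown here with made-up output timestamps:

    import math

    def rate_deviation(output_times, total_time, window):
        overall_rate = len(output_times) / total_time
        devs = []
        for w in range(int(total_time // window)):
            lo, hi = w * window, (w + 1) * window
            rate = sum(lo <= t < hi for t in output_times) / window
            devs.append(rate - overall_rate)                    # step (2)
        dof = max(len(devs) - 1, 1)
        return math.sqrt(sum(d * d for d in devs) / dof)        # steps (3) and (4)

    outputs = [0.1, 0.4, 0.9, 1.2, 1.9, 2.3, 2.4, 3.7]          # hypothetical output times (s)
    print(rate_deviation(outputs, total_time=4.0, window=1.0))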
Note that the resulting (simulated) system satisfies minimum throughput requirements of all
chains; hence we have a satisfactory solution. If desired, we can improve it by doling out the excess
resource capacity, i.e., by simply iterating through the design algorithm again.
Figure 9 compares the simulated and analytic results for multiple frames, assigned to three
selected chains. The frame-sizes are changed for one chain at a time, and system utilization remains
fixed at the synthesized values. The simulation runs displayed here ran for 10,000 frames, for the largest frame under observation (e.g., on the graph, if the largest frame tested is denoted as 200
ms, then that run lasted for 2000 seconds). The graphs show average output-rates for the chains
over an entire simulation trial, along with the corresponding standard deviation, computed over all
one-second interval samples vs. the mean.
The combinatorial comparison allows us to make the following observations. First, output rates
generally increase as frame-sizes decrease - up to the point where the system starts injecting
a significant amount of truncation overhead (due to multiplexing). Recall that the runtime system
cannot multiplex infinitesimal granularities of execution time; rather, a task's utilization is allocated
in integral units over each frame.
Second, the relationship between throughput and burstiness is not direct. Note that for some
chains, deviations tend to increase at both higher and lower frames. This reflects two
separate facts, the first of which is fairly obvious, and is an artifact of our measurement process.
[Figure 9: Analysis vs. simulation, for three selected chains (including Chains 3 and 6). Top row: success rate vs. frame size, with analysis and simulation curves. Bottom row: standard deviation of the success rate vs. frame size, computed over one-second moving averages.]
We calculate variance statistics over 1-second intervals, which yields deviations computed on the basis of
a single reference time-scale. This method inserts some bias at low frames - since these processes
get sampled more frequently. In turn, this can lead to artificially higher deviations.
At very large frames, another effect comes into play. Recall that tasks are only dispatched at
frame boundaries, and only when they have input waiting from their predecessors. Hence, if a
producer overruns its frame by even a slight amount, its consumer will have to wait for the next
frame to use the data - which consequently adds to the data's age.
On the whole, however, we note that the simulated deviations are not particularly high, especially
considering the fact that they include the one-second intervals where measured output rates
actually exceed the mean.
Also, while output rates decrease as frame-sizes increase, the curves are not smooth, and small
spikes can be observed in both the simulated and analytical results. Again, this effect is due to
the multiplexing overhead discussed above, injected since execution-time budgets are integral. For
example if a task's assigned utilization is 0.11, and if its frame-size is 10, then its execution-time
budget will be 1 - resulting in an actual, usable utilization of 0.1. However, the output rates do
monotonically decrease for larger frames when we only consider candidates that result in integral
execution-time budgets.
Finally, we note that the difference between analysis and simulation is larger when a frame-size
does not divide maximum permissible delay. Again, this is the result of rounding up the fractional
part of a computation's final frame, which will cause us to overestimate the output's age.
Remarks. Coarse analytical estimations are essential at the synthesis stage. As we showed in
Section 3, a deterministic real-time approach would fail to work for our small example, and the
problems associated with stochastic timing deviations would only increase in larger systems. Note
also that we could not rely exclusively on simulation during the synthesis phase, i.e., as a substitute
for analysis. Based on our timing information for single-run simulations, such an approach would
require over three months to synthesize our small example.
Hence, since we require coarse analytical estimates at the design stage, validating the solution
is essential. As a first pass, the obvious choice is discrete-event simulation. A reasonable set of
simulation runs requires several minutes, which can often be significantly cheaper than going directly
to integration. Discovering a design flaw after implementation can be a nasty proposition indeed,
e.g., when the hardware is found to be insufficient for the application hosted on it. The simulation
model provides a decent margin of confidence in the design's robustness, particularly since its
underlying assumptions are fundamentally different from those used in the analysis. Simply put,
for the sake of validity, two performance models are better than one.
However, simulation is not the end of the story. After all, our objective is to build the appli-
cation, by calibrating the kernels and drivers to use our analytically-derived parameters. At that
point, one subjects the system to the most important validity test of all: on-line profiling. Even
with a careful synthesis strategy, testing usually leads to some additional system tuning - to help
compensate for the imprecise modeling abstractions used during static design.
7 Conclusion
We presented a design scheme to achieve stochastic real-time performance guarantees, by adjusting
per-component load allocations and processing rates. The solution strategy uses several approximations
to avoid modeling the entire system; for example, in estimating end-to-end delay we use a
combination of queuing analysis, real-time scheduling theory, and simple probability theory. Our
search algorithm makes use of two heuristics, which help to significantly reduce the number of
feasible solutions checked.
In spite of the approximations, the simulation results are quite promising - they show that the
approximated solutions are reasonably tight; also, the resulting second-moment statistics show that
output-rates are relatively smooth.
Much work remains to be done. First, we plan on extending the model to include a more rigid
characterization of system overhead, due to varying degrees of device-level multiplexing. Currently
we assume that execution-time can be allocated in integral units; other than that, no device-specific
penalty functions are considered. We also plan on getting better approximations for the "handover-
time" between tasks in a chain, which will result in tighter analytical results.
We are also investigating new ways to achieve faster synthesis results. To speed up convergence,
one needs a metric which approximates "direction of improvement" over the solution space as a
whole. This would let the synthesis algorithm shoot over large numbers of incremental improvements
- and hopefully attain a quicker solution. However, we note that the problem is not trivial;
the solution space contains many non-linearities, with no ready-made global metric to unconditionally
predict monotonic improvement. Yet, we can potentially take advantage of some relatively
easy optimization strategies, such as hill-climbing and simulated annealing. The key to success lies
in finding a reasonable "energy" or "attraction" function - and not necessarily one that is exact.
Finally, we are currently deploying our design technique in several large-scale field tests, on distributed
applications hosted on SP-2 and Myrinet systems. Part of this project requires extending
the scheme to handle dynamic system changes - where online arrivals and departures are permit-
ted, and where the component-wise PDFs vary over time. To accomplish this, we are working on
self-tuning mechanisms, which get invoked when a chain's throughput degrades below a certain
threshold. This would trigger an on-line adjustment in a chain's allocated load, as well as its associated
frame size. Note that this is analogous to a common problem studied in the context of
computer networks: handling on-the-fly QoS renegotiations, to help smooth out fluctuating service
demands. In fact, we are investigating various strategies proposed for that problem, and evaluating
whether they can be modified for the more arcane (but equally challenging) domain of embedded
real-time systems.
--R
Preemptive Priority Based Scheduling: An Appropriate Engineering Approach.
Algorithms for scheduling real-time tasks with input error and end-to-end deadlines
A calculus for network delay
Analysis and Simulation of a Fair Queueing Algorithm.
Communication configurator for fieldbus: An algorithm to schedule tran smission of data and messages.
Transmission scheduling for fieldbus: A strategy to schedule data and messages on fieldbus with end-to-end constraints
Guaranteeing Real-Time Requirements with Resource-Based Calibration of Periodic Processes
Analyzing the real-time properties of a dataflow execution paradigm using a synthetic aperture radar application
A Hierarchical CPU Scheduler for Multi-media Operating Systems
Network Algorithms and Protocol for Multimedia Servers.
Fixed Priority Scheduling of Periodic Tasks with Varying Execution Priority.
The Art of Computer Systems Performance Analysis Techniques for Experimental Design
Deadline Assignment in a Distributed Soft Real-Time System
Visual assessment of a real-time system design : A case study on a cnc controller
An Optimal Algorithm for Scheduling Soft-Aperiodic Tasks in Fixed-Priority Preemptive Systems
A Note on the Preemptive Scheduling of Periodic
Scheduling Algorithm for Multiprogramming in a Hard Real-Time Environment
Processor Capacity Reserves: Operating System Support for Multimedia Applications.
A Generalized Processor Sharing Approach to Flow Control in Integrated Services Networks - The Single Node Case
A generalized processor sharing approach to flow control in integrated services networks - the multiple node case
Resource Conscious Design of Real-Time Systems: An End-to-End Approach
On Task Schedulability in Real-Time Control System
A Reservation-Based Algorithm for Scheduling Both Periodic and Aperiodic Real-Time Tasks
A Proportional Resource Allocation Algorithm for Real-Time
Analysis of hard real-time communication
Lottery Scheduling: Flexible Proportional-Share Management
Stride scheduling: Deterministic proportional-share resource management
Providing End-to-End Performance Guarantees Using Non-Work-Conserving Dis- ciplines
Providing End-to-End Statistical Performance Guarantees with Bounding Interval Dependent Stochastic Models
A New Traffic control Algorithm for Packet Switching Networks.
Statistical Analysis of Generalized Processor Sharing Scheduling Discipline.
--TR
--CTR
Dong-In Kang , Stephen Crago , Jinwoo Suh, Power-Aware Design Synthesis Techniques for Distributed Real-Time Systems, ACM SIGPLAN Notices, v.36 n.8, p.20-28, Aug. 2001 | distributed systems;embedded systems;soft real-time;statistical performance;design synthesis |
371529 | Implementing E-Transactions with Asynchronous Replication. | This paper describes a distributed algorithm that implements the abstraction of e-Transaction: a transaction that executes exactly-once despite failures. Our algorithm is based on an asynchronous replication scheme that generalizes well-known active-replication and primary-backup schemes. We devised the algorithm with a three-tier architecture in mind: the end-user interacts with front-end clients (e.g., browsers) that invoke middle-tier application servers (e.g., web servers) to access back-end databases. The algorithm preserves the three-tier nature of the architecture and introduces a very acceptable overhead with respect to unreliable solutions.
Until very recently, three-tier architectures were at the leading edge of development. Only a few
tools supported them, and only a small number of production-level applications implemented them.
Three-tier applications are now becoming mainstream. They match the logical decomposition of
applications (presentation, logic, and data) with their software and hardware structuring (PCs,
workstations, and clusters). Clients are diskless (e.g., browsers), application servers are stateless,
but contain the core logic of the application (e.g., web servers), and back-end databases contain
the state of the applications. Basically, the client submits a request to some application server,
on behalf of an end-user. The application server processes the client's request, stores the resulting
state in a back-end database, and returns a result to the client. This simple interaction scheme is
at the heart of the so-called e-Business game today.
Motivation. The partitioning of an application into several tiers provides for better modularity and
scalability. However, the multiplicity of the components and their interdependencies make it harder
to achieve any meaningful form of reliability. Current reliability solutions in three-tier architectures
are typically transactional [1, 2]. They ensure at-most-once request processing through some form
of "all-or-nothing" guarantee. The major limitation of those solutions is precisely the impossibility
for the client-side software to accurately distinguish the "all" from the "nothing" scenario. If a failure
occurs at the middle or back-end tier during request processing, or a timeout period expires at
the client side, the end-user typically receives an exception notification. This does not convey what
had actually happened, and whether the actual request was indeed performed or not. 1 In practice,
end-users typically retry the transaction, with the risk of executing it several times, e.g., having
1 The transactional guarantee ensures that if the request was indeed performed, all its effects are made durable ("all" scenario), and otherwise, all its effects are discarded ("nothing" scenario) [3].
the user charged twice. In short, current transactional technology typically ensures at-most-once
request processing and, by retrying transactions, end-users typically obtain at-least-once guaran-
tees. Ensuring exactly-once transaction processing is hard. Basically, some transaction outcome
information should be made highly available, but it is not clear exactly which information should be
preserved, where it should be stored, and for how long. The motivation of our work is to define and
implement the abstraction of exactly-once-Transaction (e-Transaction) in a three-tier architecture.
Intuitively, this abstraction masks (physical) transaction aborts. It adds a liveness dimension to
transactional systems that also includes the client side, and frees the end-user from the burden of
having to resubmit transactions.
Protocol. This paper presents a distributed protocol that implements the e-Transaction abstraction.
We integrate a replication scheme that guarantees the e-Transaction liveness property with a transactional
scheme that ensures the traditional safety counterpart. This integration involves the client,
the application servers, and the database servers. To deal with the inherent non-determinism of the
interaction with third-party databases, we make use of write-once registers (wo-register). These are
consensus-like abstractions that capture the nice intuition of CD-ROMs - they can be written once
but read several times. Building on such abstractions leads to a modular protocol, and enables us
to reuse existing results on the solvability of consensus in distributed systems, e.g., [4]. 2 Indirectly,
we contribute to better understand how the safety aspect of transactions can be practically mixed
with the liveness feature of replication, and how a consensus abstraction can help achieve that mix.
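To convey the intuition behind the wo-register, here is a single-process caricature of its interface; the real abstraction is distributed and consensus-based, so only the write-once/read-many behavior is illustrated, and the method names are ours:

    # Caricature of a write-once register: like a CD-ROM, it can be written at most
    # once but read many times. A consensus-like distributed version would return the
    # value that was agreed upon, whether or not it is the caller's own.
    class WoRegister:
        _UNSET = object()

        def __init__(self):
            self._value = WoRegister._UNSET

        def write(self, value):
            if self._value is WoRegister._UNSET:
                self._value = value
            return self._value              # first write wins; later writes observe it

        def read(self):
            return None if self._value is WoRegister._UNSET else self._value

    reg = WoRegister()
    print(reg.write("commit, result 42"))   # -> commit, result 42
    print(reg.write("abort"))               # -> commit, result 42 (already decided)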
Related work. Considerable work has been devoted to transaction execution on replicated data [3].
However, we know of no approach to replicate the actual "transaction processing-state" in order
to ensure the fault-tolerance of the transaction itself, i.e., that it eventually commits exactly-once.
Traditionally, it is assumed that a transaction that cannot access "enough" replicas is aborted [3],
but the issue of how to reliably determine the transaction's outcome, and possibly retry it, is not
addressed. In fact, addressing this issue requires a careful use of some form of non-blocking trans-action
processing, with some highly available recovery information that reflects the "transaction-
processing state". In [6], the problem of exactly-once message delivery was addressed for communication
channels. The author pointed out the importance of reliably storing some "message
recovery information". In the context of exactly-once transaction processing, this recovery information
should represent the transaction-processing state. Several approaches were proposed in the
literature to store that state for recovery purposes, e.g., [7, 8, 9]. Nevertheless, those approaches
do not guarantee the high-availability of that state. Furthermore, they rely on disk storage at the
client or at some application server. Relying on the client's disk is problematic if the client is a Java
applet that does not have the right to access the disk. Solutions based on disk storage at a specific
application server would make that server host dependent, and three-tier architectures are considered
scalable precisely because they prevent any form of host dependence at the middle-tier [10].
Our e-Transaction protocol uses the very same replication scheme, both as a highly available storage
for the "transaction-processing state", and as an effective way to retry transactions behind the
scenes. In contrast to most replication schemes we know about [11, 12, 13, 14], we assume stateless
servers that interact with third-party databases - replication schemes have usually been designed in
a client-server context: servers are stateful but do not interact with third-party entities. Another
2 A wo-register can also be viewed as a distributed form of software counter [5].
characteristic of our replication scheme is its asynchronous nature: it tolerates unreliable failure
detection and may vary, at run-time, between some form of primary-backup [12] and some form of
active replication [11].
Practical considerations. Our e-Transaction protocol was designed with a very practical objective
in mind. In particular, we assume that the functionality of a database server is given: it is a state-
ful, autonomous resource that runs the XA interface [15] - the X/Open standard that database
vendors are supposed to comply with in distributed transaction-processing applications. We preserve
the three-tier nature of the applications by not relying on any disk access at the client site,
or any application server site. We do not make any assumption on the failure detection scheme
used by the client-side software to detect the crash of application servers, and we tolerate failure
suspicion mistakes among application servers. The overhead of our e-Transaction protocol is very
acceptable in a practical setting where application servers are run by the Orbix 2.3 Object Request
Broker [16], and database servers by the Oracle 8.0.3 database management system [17]. In terms
of the latency, as viewed by a client, our protocol introduces an overhead of about 16% over a
baseline protocol that does not offer any reliability guarantee.
Roadmap. The rest of the paper is organized as follows. Section 2 defines our system model. Section
3 describes the e-Transaction problem. Section 4 describes our protocol and the assumptions
underlying its correctness. Finally, Section 5 puts our contribution in perspective through some
final remarks. Appendix 1 describes the pseudo-code used to express our protocol, Appendix 2
discusses the protocol correctness, and Appendix 3 the performance of its implementation.
2 Three-Tier Model
We consider a distributed system with a finite set of processes that communicate by message
passing. Processes fail by crashing. At any point in time, a process is either up or down. A crash
causes a transition from up to down, and a recovery causes the transition from down to up. The
crash of a process has no impact on its stable storage. When it is up, a process behaves according
to the algorithm that was assigned to it: processes do not behave maliciously.
In the following, we outline our representation of the three types of processes in a three-tier
application: clients, application servers, and database servers.
Clients
Client processes are denoted by c_1, c_2, . . ., c_k (c_i ∈ Client). We assume a domain, "Request", of
request values, and we describe how requests in this domain are submitted to application servers.
Clients have an operation issue(), which is invoked with a request as parameter (e.g., on behalf
of an end-user). We say that the client issues a request when the operation issue() is invoked.
The issue() primitive is supposed to return a result value from the domain "Result". When it
does so, we say that the client delivers the result (e.g., to the end-user). A result is a value in the
"Result" domain, and it represents information computed by the business logic, such as reservation
number and hotel name, that must be returned to the user. In practice, the request can be a vector
of values. In the case of a travel application for instance, the request typically indicates a travel
destination, the travel dates, together with some information about hotel category, the size of a
car to rent, etc. A corresponding result typically contains information about a flight reservation, a
hotel name and address, the name of a car company, etc.
After being issued by a client, a request is processed without further input from the client.
Furthermore, the client issues requests one-at-a-time and, although issued by the same client, two
consecutive requests are considered to be unrelated. Clients cannot communicate directly with
databases, only through application servers.
We assume that each request and each result are uniquely identified. Furthermore, we assume
that every result is uniquely associated with a transaction. When we say that a result is committed
(resp. aborted), we actually mean that the corresponding transaction is committed (resp. aborted).
For presentation simplicity we assume that a result and the corresponding transaction have the same
identifier, and we simply represent such identifiers using integers.
Application Servers
Application server processes are denoted by a_1, a_2, . . ., a_m (a_i ∈ AppServer). Application servers
are stateless in the sense that they do not maintain state across request invocations: requests do
not have side-effects on the state of application servers, only on the database state. Thus, a request
cannot make any assumption about previous requests in terms of application-server state changes.
Having stateless application servers is an important aspect of three-tier applications. Stateless
servers do not have host affinity, which means that we can freely migrate them. Moreover, fail-over
is fast because we do not have to wait for a server to recover its state. We do not model the chained
invocation of application servers. In our model, a client invokes a single application server, and this
server does not invoke other application servers. Chained invocation does not present additional
challenges from a reliability standpoint because application servers are stateless. We ignore this
aspect in our model to simplify the discussion.
Application servers interact with the databases through transactions. For presentation simplic-
ity, we only explicitly model the commitment processing, not the business logic or SQL queries
performed by application servers. We use a function, called compute(), to abstract over the (transactional)
database manipulations performed by the business logic. In a travel example, compute()
would query the database to determine flight and car availabilities, and perform the appropriate
bookings. However, the compute() function does not commit the changes made to the database. It
simply returns a result. Since the commitment processing can fail, we may call compute() multiple
times for the same request. However, compute() is non-deterministic because its result depends on
the database state. We assume that each result returned by compute() is non-nil. In particular, we
model user-level aborts as regular result values. A user-level abort is a logical error condition that
occurs during the business logic processing, for example if there are no more seats on a requested
flight. Rather than model user-level aborts as special error values returned by compute(), we model
them as regular result values that the databases then can refuse to commit.
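As a concrete illustration of this modeling choice, the following Python sketch (our own toy example; the function and field names are not part of the paper's model) shows a compute() whose outcome depends on the database state and which reports a full flight as an ordinary result value rather than as an error:

import random

def compute(request):
    """Illustrative, non-deterministic business logic (assumed names): it returns
    a result value for every invocation; a 'no seats available' condition is
    encoded as a regular result rather than as an exception or error code."""
    seats_left = random.randint(0, 3)          # stands in for the database state
    if seats_left == 0:
        # user-level abort, modeled as an ordinary result value
        return {"status": "no-seats", "flight": request["flight"]}
    return {"status": "booked", "flight": request["flight"], "seat": seats_left}

# The same request may yield different results on different calls, which is why
# the protocol may invoke compute() several times for the same request.
print(compute({"flight": "LX123", "date": "2001-05-04"}))

Because such a result is just a value, the databases can later refuse to commit it, which is exactly how user-level aborts are treated in the model above.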
Every application server has access to a local failure detector module which provides it with
information about the crash of other application servers. Let a 1 and a 2 be any two application
servers. We say that server a 2 suspects server a 1 if the failure detector module of a 2 suspects a 1 to
have crashed. We abstract the suspicion information through a predicate suspect(). Let a 1 and a 2
be any two application servers. The execution of suspect(a 1 ) by server a 2 at t returns true if and
only if a 2 suspects a 1 at time t.
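As an illustration only, here is a minimal Python sketch of such a failure detector module; the heartbeat mechanism, the timeout value, and the class name are our own assumptions, since the model merely requires the abstract suspect() predicate:

import time

class FailureDetector:
    """Minimal timeout-based failure detector sketch (our own construction).
    suspect(a) returns True if no heartbeat from application server `a` has
    been seen within `timeout` seconds. Such a detector can make mistakes,
    which is exactly what the protocol is designed to tolerate."""

    def __init__(self, timeout=2.0):
        self.timeout = timeout
        self.last_heartbeat = {}            # server id -> last heartbeat time

    def heartbeat(self, server_id):
        self.last_heartbeat[server_id] = time.time()

    def suspect(self, server_id):
        last = self.last_heartbeat.get(server_id)
        return last is None or time.time() - last > self.timeout

fd = FailureDetector(timeout=2.0)
fd.heartbeat("a1")
print(fd.suspect("a1"))   # False right after a heartbeat
print(fd.suspect("a2"))   # True: no heartbeat seen from a2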
Database Servers
Database server processes are denoted by s_1, s_2, . . ., s_n (s_i ∈ Server). Since we want our approach
to apply to off-the-shelf database systems, we view a database server as an XA [15] engine. In
particular, a database server is a "pure" server: it does not invoke other servers, it only responds
to invocations. We do not represent full XA functionality, we only represent the transaction commitment
aspects of XA (prepare() and commit()). We use two primitives, vote() and decide(), to
represent the transaction commitment functionality. The vote() primitive takes as a parameter a
result identifier, and returns a vote in the domain Vote = {yes, no}. Roughly speaking, a yes vote
means that the database server agrees to commit the result (i.e., the corresponding transaction).
The decide() primitive takes two parameters: a result identifier and an outcome in the domain
Outcome = {commit, abort}. The decide() primitive returns an outcome value such that: (a) if the
input value is abort, then the returned value is also abort; and (b) if the database server has voted
yes for that result, and the input value is commit, then the returned value is also commit. 3
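The following Python sketch (illustrative only, not an XA implementation; all names are ours) mirrors the stated contract of vote() and decide(): votes are recorded, an abort input always yields abort, and a commit input yields commit when the server has voted yes; the remaining case does not arise in the protocol and is handled conservatively here:

class DatabaseServer:
    """Sketch of the commitment interface of a database server (assumed names)."""

    def __init__(self):
        self.votes = {}      # result id -> "yes" / "no"
        self.outcomes = {}   # result id -> "commit" / "abort"

    def vote(self, result_id, can_commit=True):
        v = "yes" if can_commit else "no"
        self.votes[result_id] = v
        return v

    def decide(self, result_id, outcome):
        if outcome == "abort":
            # (a) an abort input always yields abort
            self.outcomes[result_id] = "abort"
        elif self.votes.get(result_id) == "yes":
            # (b) a commit input yields commit when this server voted yes
            self.outcomes[result_id] = "commit"
        else:
            # not specified by the contract above; we conservatively abort here
            self.outcomes[result_id] = "abort"
        return self.outcomes[result_id]

s = DatabaseServer()
s.vote(1)
print(s.decide(1, "commit"))   # "commit": the server had voted yes
print(s.decide(2, "abort"))    # "abort": abort inputs always abort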
3 The Exactly-Once Transaction Problem
Roughly speaking, providing the e-Transaction (exactly-once-Transaction) abstraction comes down
to ensure that whenever a client issues a request, then unless it crashes, there is a corresponding
result computed by an application server, the result is committed at every database server, and
then eventually delivered by the client. The servers might go through a sequence of aborted
intermediate results until one commits and the client delivers the corresponding result. Ensuring
database consistency requires that all database servers agree on the outcome of every result (abort or
commit). Client-side consistency requires that only a committed result is returned to the end-user.
In the following, we state the specification of the e-Transaction problem. More details on the
underlying intuition and the rationale behind the problem specification are given in [18]. For the
sake of presentation simplicity, but without loss of generality, we consider here only one client,
and assume that the client issues only one request. We assume the existence of some serializability
protocol [3]. We hence omit explicit identifiers to distinguish different clients and different requests,
together with identifiers that relate different results to the same request.
We define the e-Transaction problem with three categories of properties: termination, agreement,
and validity. Termination captures liveness guarantees by preventing blocking situations.
Agreement captures safety guarantees by ensuring the consistency of the client and the databases.
Validity restricts the space of possible results to exclude meaningless ones.
. Termination.
(T.1) If the client issues a request, then unless it crashes, it eventually delivers a result;
(T.2) If any database server votes for a result, then it eventually commits or aborts the result.
. Agreement.
(A.1) No result is delivered by the client unless it is committed by all database servers;
(A.2) No database server commits two different results;
(A.3) No two database servers decide differently on the same result.
3 In terms of XA, the vote() primitive corresponds to a prepare() operation while the decide() primitive is patterned
after the commit() operation.
. Validity.
(V.1) If the client delivers a result, then the result must have been computed by an application
server with, as a parameter, a request issued by the client;
(V.2) No database server commits a result unless all database servers have voted yes for that
result.
Termination ensures that a client does not remain indefinitely blocked (T.1). Intuitively, this
property provides at-least-once request processing guarantee to the end-user, and frees her from
the burden of having to retry requests. Termination also ensures that no database server remains
blocked forever waiting for the outcome of a result (T.2), i.e., no matter what happens to the
client. This non-blocking property is important because a database server that has voted yes for a
result might have locked some resources. These remain inaccessible until the result is committed
or aborted [3]. Agreement ensures the consistency of the result (A.1) and the databases (A.3). It
also guarantees at-most-once request processing (A.2). The first part of Validity (V.1) excludes
trivial solutions to the problem where the client invents a result, or delivers a result without having
issued any request. The second part (V.2) conveys the classical constraint of transactional systems:
no result can be committed if at least one database server "refuses" to do so. Basically, and
as we point out in Section 5, the e-Transaction specification adds to the traditional termination
properties of distributed databases, properties that bridge the gap between databases and clients
on one hand, and between at-least-once and exactly-once on the other hand.
4 An Exactly-Once Transaction Protocol
Our protocol consists of several parts. One is executed at the client, one is executed at the application
servers, and one at the database servers (Figure 1). The client interacts with the application
servers, which themselves interact with database servers. The complete algorithms are given in
Figure 2, Figure 3, Figure 4, Figure 5, and Figure 6. We describe the pseudo-code used in those
algorithms in Appendix 1, and give their correctness proofs in Appendix 2.
Client Protocol
The client part of the protocol is encapsulated within the implementation of the issue() primitive
(Figure 2). This primitive is invoked with a request as an input parameter and is supposed to
eventually return a result. Basically, the client keeps retransmitting the request to the application
servers, until it receives back a committed result. The client might need to go through several tries
(intermediate results) before it gets a committed result. To optimize the failure-free scenario, the
client does not initially send the request to all application servers unless it does not receive a result
after a back-off period (line 7 in Figure 2).
Application Server Protocol
Application servers execute what we call an asynchronous replication protocol (Figure 5 and Figure
6). In a "nice" run, where no process crashes or is suspected to have crashed, the protocol
goes as follows. There is a default primary application server that is supposed to initially receive
the client's request. The primary application server computes a result for the client's request, and
orchestrates a distributed atomic commitment protocol among the database servers to commit or
abort that result. Then the application server informs the client of the outcome of the result. The
outcome might be commit or abort, according to the votes of the databases (Figure 1 (a) and (b)).
Any application server that suspects the crash of the primary becomes itself a primary and tries
to terminate the result (Figure 6). If the result was already committed, the new primary finishes
the commitment of that result and sends back the decision to the client (Figure 1 (c)). Otherwise,
the new primary aborts the result, and informs the client about the abort decision (Figure 1 d).
Some form of synchronization is needed because (1) the result computation is non-deterministic
and (2) several primaries might be performing at the same time (we do not assume reliable failure
detection). We need to ensure that the application servers agree on the outcome of every result. We
factor out the synchronization complexity through a consensus abstraction, which we call write-once
registers (or simply wo-registers). A wo-register has two operations: read() and write(). Roughly
speaking, if several processes try to write a value in the register, only one value is written, and
once it is written, no other value can be written. A process can read that value by invoking the
operation read(). More precisely:
. Write() takes a parameter input and returns a parameter output. The returned parameter is
either input (the process has indeed written its value) or some other value already written
in the register.
. Read() returns a value written in the register or the initial value ⊥. If a value v was written
in the register and a process keeps invoking the read() operation, then, unless the process
crashes, the value returned is eventually the value v.
Intuitively, the semantics of a wo-register looks very much like that of a CD-ROM. In fact,
a wo-register is a simple extension of a so-called consensus object [19]. We simply assume here
the existence of wait-free wo-registers [19]. It is easy to see how one could obtain a wait-free
implementation of a wo-register from a consensus protocol executed among the application servers
(e.g., [4]): every application server would have a copy of the register. Basically, writing a value in
the wo-register comes down to proposing that value for the consensus protocol. To read a value, a
process simply returns the decision value received from the consensus protocol, if any, and returns
⊥ if no consensus has been triggered.
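As an illustration of these semantics, here is a minimal single-process Python sketch of a wo-register; the lock is only a stand-in for the consensus protocol that, in the actual setting, runs among the replicated application servers (e.g., along the lines of [4]), and ⊥ is represented by None; all names are ours:

import threading

class WORegister:
    """Write-once register sketch. write() 'proposes' a value; only the first
    proposal is ever adopted (playing the role of the consensus decision).
    read() returns the adopted value, or None (standing for the initial value)
    if nothing has been decided yet."""

    def __init__(self):
        self._lock = threading.Lock()   # trivial stand-in for consensus
        self._value = None
        self._written = False

    def write(self, value):
        with self._lock:
            if not self._written:
                self._value, self._written = value, True
            return self._value          # either our value or the earlier one

    def read(self):
        with self._lock:
            return self._value if self._written else None

reg = WORegister()
print(reg.write(("r1", "commit")))   # ('r1', 'commit') -- adopted
print(reg.write((None, "abort")))    # ('r1', 'commit') -- the earlier value wins
print(reg.read())                    # ('r1', 'commit')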
Database Server Protocol
Figure 3 illustrates the functionality of database servers. A database server is a pure server (not a
client of other servers): it waits for messages from application servers to either vote or decide on
results. The database server protocol has a parameter that indicates whether the protocol is called
initially or during recovery. The parameter is bound to the variable recovery, that is then used in
the body of the protocol to take special recovery actions (line 2 of Figure 3). During recovery, a
database server informs the application servers about its "coming back".
Correctness Assumptions
We prove the correctness of our protocol in Appendix 2. The proofs are based on the following
assumptions. We will discuss the practicality of these assumptions in Section 5.
Figure 1: Communication steps in various executions. (a) Failure-free run with commit; (b) failure-free run with abort; (c) fail-over with commit; (d) fail-over with abort.
Class ClientProtocol {
list of AppServer alist := theAppServers; /* list of all application servers */
AppServer a 1 := thePrimary; /* the default primary */
period := thePeriod; /* back-off period */
issue(Request request) {
AppServer a i ; /* an application server */
Decision decision; /* a pair (result,outcome) */
begin
while true do
send [Request,request, j] to a 1 ;
3 timeout := period; /* set the timeout period */
4 wait until (receive [Result,j, decision] from a i ) or expires(timeout);
5 if expired(timeout) then
6 send [Request,request, j] to alist;
7 wait until (receive [Result,j, decision] from a i );
9 return(decision.result); /* delivers the result and exits */
Figure
2: Client algorithm
Class DataServer {
list of AppServer alist := theAppServers; /* list of all application servers */
main(Bool recovery) {
Outcome outcome; /* outcome of a result: commit or abort */
AppServer a i ; /* an application server */
Integer j; /* a result identifier */
begin
1 if (recovery) then /* distinguishes recovery from the initial starting case */
send [Ready] to alist; /* recovery notification */
3 while (true) do
(receive [Prepare,j] from a i )
6 send [Vote,j,this.vote(j)] to a i ;
(receive [Decide,j, outcome] from a i );
9 send [AckDecide,j] to a i ;
terminate(Integer j, Outcome outcome) {.} /* commit or abort a result */
vote(Integer j) {.} /* determine a vote for a result */
Figure
3: Database server algorithm
Class AppServerProtocol {
Client c; /* the client */
list of AppServer alist := theAppServers; /* list of all application servers */
list of DataServer dlist := theDataServers; /* list of all database servers */
array of Decision WORegister regD; /* array of decision WORegisters */
array of AppServer WORegister regA; /* array of application server WORegisters */
main(array of Decision WORegister r A , AppServer WORegister r D ) {
begin
3 while (true) do
computation thread */
cleaning thread */
7 coend
terminate(Integer j, Decision decision) {
repeat
3 send [Decide,j, decision.outcome] to dlist;
4 wait until (for every d k # dlist:
5 (receive [AckDecide,j] or [Ready] from d k ));
6 until (received([AckDecide,j]) from every d k # dlist)
7 send [Result,j, decision] to c;
9 }
prepare(Integer {
send [Prepare,j] to dlist;
3 wait until (for every d k # dlist:
4 (receive [Vote,j, vote k ] or [Ready] from d k ));
5 if (for every d k # dlist: (received([Vote,j,yes]) from d k )) then
6 return(commit);
7 else return(abort);
clean() {.}
Figure
4: Application server algorithm
{
Request request; /* request from the client */
AppServer a i ; /* an application server */
Decision decision := (nil,abort); /* a pair (result,outcome) */
Integer j; /* a result identifier */
begin
while (true) do
(receive [Request,request, j] from c);
4 send [Result,j, decision] to c; /* the result is already committed */
5 else
6 a i := regA[j].write(this);
9 decision.outcome := this.prepare(j);
decision := regD[j].write(decision);
Figure
5: The computation thread
{
Decision decision := (nil,abort); /* a pair (result,outcome) */
AppServer a i ; /* an application server */
list of Integer clist; /* list of "cleaned" results */
Integer j; /* a result identifier */
begin
while (true) do
2 for every a i ∈ alist /* cleaning all results initiated by a i */
5 while (regA[j].read() #) do
6 if (j /
# clist) and
7 decision := regD[j].write(nil,abort);
9 add j into clist;
Figure
6: The cleaning thread
We assume that a majority of application servers are correct: they are always up. The failure
detector among application servers is supposed to be eventually perfect in the sense of [4]. In other
words, we assume that the following properties are satisfied: (completeness) if any application
server crashes at time t, then there is a time t # > t after which it is permanently suspected by
every application server; (accuracy) there is a time after which no correct application server is ever
suspected by any application server. We also assume that all database servers are good: (1) they
always recover after crashes, and eventually stop crashing, and (2) if an application server keeps
computing results, a result eventually commits. 4
We assume that clients, application servers, and database servers, are all connected through
reliable channels. The guarantees provided by the reliable channel abstraction are captured by the
following properties: (termination) if a process p_i sends message m to process p_j, then unless p_i
or p_j crashes, p_j eventually delivers m; (integrity) every process receives a message at most once,
and only if the message was previously sent by some process (messages are supposed to be
uniquely identified).
5 Concluding Remarks
On the specification of e-Transactions. Intuitively, the e-Transaction abstraction is very desir-
able. If a client issues a request "within" an e-Transaction, then, unless it crashes, the request
is executed exactly-once, and the client eventually delivers the corresponding result. If the client
crashes, the request is executed at-most-once and the database resources are eventually released.
As conveyed by our specification in Section 3, the properties underlying e-Transactions encompass
all players in a three-tier architecture: the client, the application servers, and the databases. Not
surprisingly, some of the properties are similar to those of non-blocking transaction termination [3].
In some sense, those properties ensure non-blocking at-most-once. Basically, the specification of
e-Transactions extends them to bridge the gap between at-most-once and exactly-once semantics.
On the asynchrony of the replication scheme. The heart of our e-Transaction protocol is the asynchronous
replication scheme performed among the application servers. Roughly speaking, with a
"patient" client and a reliable failure detector, our replication scheme tends to be similar to a primary
backup scheme [12]: there is only one active primary at a time. With an "impatient" client,
or an unreliable failure detector, we may easily end up in the situation where all application servers
try to concurrently commit or abort a result. In this case, like in an active replication scheme [11],
there is no single primary and all application servers have equal rights. One of the characteristics
of our replication protocol is precisely that it may vary, at run-time, between those two extreme
schemes.
On the practicality of our protocol. Many of the assumptions we made are "only" needed to
ensure the termination properties of our protocol (Appendix 2). These include the assumption of
a majority of correct application servers, the assumption of an eventually perfect failure detector
among application servers, the assumption that every database server is eventually always up,
4 The assumption that results eventually commit does not mean that there will eventually be a seat on a full flight.
It means that an application server will eventually stop trying to book a seat on a full flight, and instead compute a
result that can actually run to completion, for example a result that informs the user of the booking problem.
and the liveness properties of wo-registers and communication channels. In other words, if any
of these properties is violated, the protocol might block, but would not violate any agreement
or validity property of our specification (Appendix 2). In practice, these termination-related
assumptions need only hold during the processing of a request. For example, we only need to assume
that, for each request, a majority of application servers remains up, and every database server will
eventually stay up long enough to successfully commit the result of that request. 5 Furthermore, the
assumption of a majority of correct processes is only needed to keep the protocol simple: we do not
explicitly deal with application server recovery. Without the assumption of a majority of correct
processes, one might still ensure termination properties by making use of underlying building blocks
that explicitly handle recovery, as in [22, 23]. The assumption of reliable channels does not exclude
link failures, as long as we can assume that any link failure is eventually repaired. In practice, the
abstraction of reliable channels is implemented by retransmitting messages and tracking duplicates.
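For illustration, the sketch below shows that standard construction (retransmission at the sender, duplicate tracking at the receiver) in Python; the lossy link, the retry count, and all names are assumptions of ours and are not part of the protocol:

import uuid

class LossyLink:
    """Stand-in for an unreliable link that may drop messages."""
    def __init__(self, drop_first=1):
        self.drop_first = drop_first
        self.inbox = []
    def transmit(self, msg):
        if self.drop_first > 0:
            self.drop_first -= 1      # simulate a lost message
            return
        self.inbox.append(msg)

class ReliableChannel:
    """Reliable-channel sketch: the sender retransmits until the message gets
    through, the receiver discards duplicates by message identifier."""
    def __init__(self, link):
        self.link = link
        self.delivered_ids = set()
    def send(self, payload, retries=3):
        msg_id = str(uuid.uuid4())
        for _ in range(retries):          # a real channel keeps retrying
            self.link.transmit((msg_id, payload))
    def deliver_all(self):
        out = []
        for msg_id, payload in self.link.inbox:
            if msg_id not in self.delivered_ids:   # duplicate tracking
                self.delivered_ids.add(msg_id)
                out.append(payload)
        self.link.inbox.clear()
        return out

link = LossyLink(drop_first=1)
ch = ReliableChannel(link)
ch.send("Request r1")
print(ch.deliver_all())    # ['Request r1']: delivered once despite loss and resends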
Finally, to simplify the presentation of our protocol, we did not consider garbage collection
issues. For example, we did not address the issue of cleaning the wo-register arrays. To integrate
a garbage collector task, one needs to state that the at-most-once guarantee is only ensured if the
client does not retransmit requests after some known period of time. Being able to state this kind
of guarantee would require a timed model, e.g., along the lines of [24].
On the failure detection schemes. It is important to notice that our protocol makes use of three
failure detection schemes in our architecture, and this is actually not surprising given the nature of
three-tier systems. (1) Among application servers, we assume a failure detector that is eventually
perfect in the sense of [4]. As we pointed out, failure suspicions do not, however, lead to any incon-
sistency. (2) The application servers rely on a simple notification scheme to tell when a database
server has crashed and recovered. In practice, application servers would detect database crashes
because the database connection breaks when the database server crashes. Application servers
would receive an exception (or error status) when trying to manipulate the database. This can be
implemented without requiring the database servers to know the identity of the application servers.
(3) Clients use a simple timeout mechanism to re-submit requests. This design decision reflects
our expectation that clients can communicate with servers across the Internet, which basically gives
rise to unpredictable failure detection.
On the practicality of our implementation. Our current implementation was built using off-the-shelf
technologies: the Orbix 2.3 Object Request Broker [16] and the Oracle 8.0.3 database management
system [17]. Our prototype was, however, aimed exclusively at testing purposes. In terms of the
latency, as viewed by a client, our protocol introduces an overhead of about 16% over a baseline
protocol that does not offer any reliability guarantee (see Appendix 3). This overhead corresponds
to the steady-state, failure and suspicion free executions. These are the executions that are the
most likely to occur in practice, and for which protocols are usually optimized. Nevertheless, for a
complete evaluation of the practicality of our protocol, one obviously needs to consider the actual
5 Ensuring the recovery of every database server (within a reasonable time delay) is typically achieved by running
databases in clusters of machines [20, 21]. With a cluster, we can ensure that databases always recover within a
reasonable delay, but we must still assume that the system reaches a "steady state" where database servers stay up
long enough so that we can guarantee the progress of the request processing. In an asynchronous system however,
with no explicit notion of time, the notion of long enough is impossible to characterize, and is simply replaced with
the term always.
response-time of the protocol in the case of various failure alternatives. This should go through
the use of underlying consensus protocols that are also optimized in the case of failures and failure
suspicions, e.g., [25, 23].
--R
"How microsoft transaction server changes the com programming model,"
Object Management Group
Concurrency Control and Recovery in Database Systems.
"Unreliable failure detectors for reliable distributed systems,"
"Supporting nondeterministic execution in fault-tolerant sys- tems,"
"Reliable messages and connection establishment,"
"Implementing recoverable requests using queues,"
"E#cient transparent application recovery in client-server information systems,"
"Integrating the object transaction service with the web,"
"Corba fault-tolerance: why it does not add up,"
"Replication management using the state machine approach,"
"The primary-backup approach,"
"The delta-4 approach to dependability in open distributed computing systems,"
"Semi-passive replication,"
x/Open Company Ltd
Orbix 2.2 Programming Guide
Oracle8 Application Developer's Guide.
"Exactly-once-transactions,"
"Wait-free synchronization,"
Clusters for High-Availability: A Primer of HP-UX Solutions
"The design and architecture of the microsoft cluster service-a practical approach to high-availability and scalability,"
"Failure detection and consensus in the crash-recovery model,"
"Lazy consensus,"
"The timed asynchronous model,"
"A simple and fast asynchronous consensus protocol based on a weak failure detector,"
--TR
--CTR
Francesco Quaglia , Paolo Romano, Reliability in three-tier systems without application server coordination and persistent message queues, Proceedings of the 2005 ACM symposium on Applied computing, March 13-17, 2005, Santa Fe, New Mexico
Paolo Romano , Francesco Quaglia , Bruno Ciciani, A Lightweight and Scalable e-Transaction Protocol for Three-Tier Systems with Centralized Back-End Database, IEEE Transactions on Knowledge and Data Engineering, v.17 n.11, p.1578-1583, November 2005
Wenbing Zhao , Louise E. Moser , P. Michael Melliar-Smith, Unification of Transactions and Replication in Three-Tier Architectures Based on CORBA, IEEE Transactions on Dependable and Secure Computing, v.2 n.1, p.20-33, January 2005
Svend Frlund , Rachid Guerraoui, e-Transactions: End-to-End Reliability for Three-Tier Architectures, IEEE Transactions on Software Engineering, v.28 n.4, p.378-395, April 2002
Roberto Baldoni , Carlo Marchetti, Three-tier replication for FT-CORBA infrastructures, Software - Practice & Experience, v.33 n.8, p.767-797, 10 July
371884 | The role of commutativity in constraint propagation algorithms. | Constraint propagation algorithms form an important part of most of the constraint programming systems. We provide here a simple, yet very general framework that allows us to explain several constraint propagation algorithms in a systematic way. In this framework we proceed in two steps. First, we introduce a generic iteration algorithm on partial orderings and prove its correctness in an abstract setting. Then we instantiate this algorithm with specific partial orderings and functions to obtain specific constraint propagation algorithms. In particular, using the notions of commutativity and semi-commutativity, we show that the AC-3, PC-2, DAC, and DPC algorithms for achieving (directional) arc consistency and (directional) path consistency are instances of a single generic algorithm. The work reported here extends and simplifies that of Apt [1999a]. | Constraint propagation
Constraint programming in a nutshell consists of formulating and solving so-called
constraint satisfaction problems. One of the most important techniques developed in this
area is constraint propagation that aims at reducing the search space while maintaining
equivalence.
Constraint propagation is a very widely used concept. For instance on AltaVista,
http://www.altavista.com/on November 19, 1999 the query "constraint prop-
agation" yielded 2344 hits. In addition, in the literature several other names have been
used for the constraint propagation algorithms: consistency, local consistency, consistency
enforcing, Waltz, filtering or narrowing algorithms. So the total number of hits
may well be larger than 4413, the number of hits for the query "NP-completeness".
Over the last twenty years several constraint propagation algorithms were proposed
and many of them are built into the existing constraint programming systems.
These algorithms usually aim at reaching some form of "local consistency", a notion
that in a loose sense approximates the notion of "global consistency". In Apt [1] we
introduced a simple framework that allowed us to explain many of these algorithms
in a uniform way. In this framework the notion of chaotic iterations, so fair iterations
of functions, on Cartesian products of specific partial orderings played a crucial role.
In Monfroy and Réty [14] this framework was modified to study distributed chaotic
iterations. This resulted in a general framework for distributed constraint propagation
algorithms.
We stated in Apt [1] that "the attempts of finding general principles behind the
constraint propagation algorithms repeatedly reoccur in the literature on constraint satisfaction
problems spanning the last twenty years" and devoted three pages to survey
this work. Two references that are perhaps closest to our work are Benhamou [3] and
Telerman and Ushakov [17].
These developments led to an identification of a number of mathematical properties
that are of relevance for the considered functions, namely monotonicity, inflationarity
and idempotence (see, e.g., Saraswat, Rinard and Panangaden [16] and Benhamou and
Older [4]). Here we show that also the notions of commutativity and so-called semi-
commutativity are important.
As in Apt [1], to explain the constraint propagation algorithms, we proceed here
in two steps. First, we introduce a generic iteration algorithm on partial orderings and
prove its correctness in an abstract setting. Then we instantiate this algorithm with specific
partial orderings and functions. The partial orderings will be related to the considered
variable domains and the assumed constraints, while the functions will be the ones
that characterize considered notions of local consistency in terms of fixpoints.
This presentation allows us to clarify which properties of the considered functions
are responsible for specific properties of the corresponding algorithms. The resulting
analysis is simpler than that of Apt [1] because we concentrate here on constraint propagation
algorithms that always terminate. This allows us to dispense with the notion of
fairness. On the other hand, we can now prove stronger results by taking into account
the commutativity and semi-commutativity information.
This article is organized as follows. First, in Section 2, drawing on the approach of
Monfroy and Réty [14], we introduce a generic algorithm for the case when the partial
ordering is not further analyzed. Next, in Section 3, we refine it for the case when the
partial ordering is a Cartesian product of component partial orderings and in Section
4 explain how the introduced notions should be related to the constraint satisfaction
problems. These last two sections essentially follow Apt [1], but because we started
here with the generic iteration algorithms on arbitrary partial orders, we now obtain a
framework in which we can discuss the role of commutativity.
In the next four sections we instantiate the algorithm of Section 2 or some of its refinements
to obtain specific constraint propagation algorithms. In particular, in Section
5 we derive algorithms for arc consistency and hyper-arc consistency. These algorithms
can be improved by taking into account information on commutativity. This is done in
Section 6 and yields the well-known AC-3 algorithm. Next, in Section 7 we derive an
algorithm for path consistency and in Section 8 we improve it, again by using information
on commutativity. This yields the PC-2 algorithm.
In Section 9 we clarify under what assumptions the generic algorithm of Section
2 can be simplified to a simple for loop statement. Then we instantiate this simplified
algorithm to derive in Section 10 the DAC algorithm for directional arc consistency and
in Section 11 the DPC algorithm for directional path consistency. Finally, in Section 12
we briefly discuss possible future work.
We deal here only with the classical algorithms that establish (directional) arc consistency
and (directional) path consistency and that are more than twenty, respectively
ten, years old. However, several more "modern" constraint propagation algorithms can
also be explained in this framework. In particular, in Apt [1, page 203] we derived from
a generic algorithm a simple algorithm that achieves the notion of relational consistency
of Dechter and van Beek [8]. In turn, by mimicking the development of Sections 10 and
11, we can use the framework of Section 9 to derive the adaptive consistency algorithm
of Dechter and Pearl [7].
Dechter [6] showed that this algorithm can be formulated in a very general
framework of bucket elimination that in turn can be used to explain such well-known
algorithms as directional resolution, Fourier-Motzkin elimination, Gaussian elimina-
tion, and also various algorithms that deal with belief networks.
2 Generic Iteration Algorithms
Our presentation is completely general. Consequently, we delay the discussion of constraint
satisfaction problems till Section 4. In what follows we shall rely on the following
concepts.
Definition 1. Consider a partial ordering (D, ⊑) with the least element ⊥ and a finite
set of functions F := {f_1, . . ., f_k} on D.
- By an iteration of F we mean an infinite sequence of values d_0, d_1, . . . defined
inductively by
d_0 := ⊥, d_j := f_{i_j}(d_{j-1}),
where each i_j is an element of [1..k].
- We say that an increasing sequence d_0 ⊑ d_1 ⊑ d_2 . . . of elements from D eventually
stabilizes at d if for some j ≥ 0 we have d_i = d for i ≥ j. □
In what follows we shall consider iterations of functions that satisfy some specific
properties.
Definition 2. Consider a partial ordering (D, ⊑) and a function f on D.
- f is called inflationary if x ⊑ f(x) for all x.
- f is called monotonic if x ⊑ y implies f(x) ⊑ f(y) for all x, y. □
The following simple observation clarifies the role of monotonicity. The subsequent
result will clarify the role of inflationarity.
Lemma 1 (Stabilization). Consider a partial ordering (D, ⊑) with the least element
⊥ and a finite set of monotonic functions F on D.
Suppose that an iteration of F eventually stabilizes at a common fixpoint d of the
functions from F. Then d is the least common fixpoint of the functions from F.
Proof. Consider a common fixpoint e of the functions from F. We prove that d ⊑ e. Let
d_0, d_1, . . . be the iteration in question. For some j ≥ 0 we have d_i = d for i ≥ j.
It suffices to prove by induction on i that d_i ⊑ e. The claim obviously holds for
i = 0, since d_0 = ⊥. Suppose it holds for some i ≥ 0. We have d_{i+1} = f_j(d_i) for some j.
By the monotonicity of f_j and the induction hypothesis we get f_j(d_i) ⊑ f_j(e) = e,
since e is a fixpoint of f_j. Hence d_{i+1} ⊑ e. □
We now fix a partial ordering (D, ⊑) with the least element ⊥ and a set of functions
F := {f_1, . . ., f_k} on D. We are interested in computing the least common fixpoint of
the functions from F. To this end we study the following algorithm that is inspired by
a similar algorithm of Monfroy and Réty [14].
GENERIC ITERATION ALGORITHM (GI)
d := ⊥;
G := F;
while G ≠ ∅ do
choose g ∈ G;
G := G − {g};
G := G ∪ update(G, g, d);
d := g(d)
od
where for all G, g, d the set of functions update(G, g, d) from F is such that:
A. {f ∈ F − G | f(d) = d and f(g(d)) ≠ g(d)} ⊆ update(G, g, d),
B. g(d) = d implies that update(G, g, d) = ∅,
C. g(g(d)) ≠ g(d) implies that g ∈ update(G, g, d).
Intuitively, assumption A states that update(G; g; d) at least contains all the functions
from F \Gamma G for which d is a fixpoint but g(d) is not. So at each loop iteration such
functions are added to the set G. In turn, assumption B states that no functions are added
to G in case the value of d did not change. Note that even though after the assignment
G := G − {g} the membership g ∈ F − G holds, the function g can never satisfy the condition
of assumption A (indeed, g(d) = d implies g(g(d)) = g(d)). So assumption A does not provide any information when g is to be added back
to G. This information is provided in assumption C.
On the whole, the idea is to keep in G at least all functions f for which the current
value of d is not a fixpoint.
An obvious example of an update function that satisfies assumptions A, B and C is
update(G, g, d) := {f ∈ F − G | f(d) = d and f(g(d)) ≠ g(d)} ∪ C_g,
where C_g := {g} if g(g(d)) ≠ g(d) and C_g := ∅ otherwise.
However, this choice of the update function is computationally expensive because for
each function f in F − G we would have to compute the values f(g(d)) and f(d). In
practice, we are interested in some approximations of the above update function. We
shall deal with this matter in the next section.
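As an illustration, the scheme can also be rendered directly in code. The following Python sketch (our own toy instance; the functions and the domain are not taken from the paper) implements the GI loop with the above update function on subsets of {0, . . ., 4} ordered by inclusion, with ⊥ = ∅:

def gi(functions, bottom):
    """Generic Iteration (GI) sketch using the 'obvious' update function:
    re-add every f in F - G with f(d) = d and f(g(d)) != g(d), and re-add g
    itself whenever g(g(d)) != g(d). Assumes inflationary, monotonic functions
    on a finite partial ordering, so the loop terminates."""
    d = bottom
    G = set(functions)
    while G:
        g = G.pop()                       # choose g in G; G := G - {g}
        gd = g(d)
        added = {f for f in functions if f not in G and f is not g
                 and f(d) == d and f(gd) != gd}   # g itself never satisfies this
        if g(gd) != gd:
            added.add(g)
        G |= added                        # G := G union update(G, g, d)
        d = gd                            # d := g(d)
    return d

# Toy instance: subsets of {0,...,4} ordered by inclusion, bottom = empty set.
f1 = lambda x: x | {0}                      # inflationary and monotonic
f2 = lambda x: x | {1} if 0 in x else x
f3 = lambda x: x | {2} if 1 in x else x
print(sorted(gi([f1, f2, f3], frozenset())))   # [0, 1, 2]: the least common fixpoint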
We now prove correctness of this algorithm in the following sense.
Theorem 1 (GI).
(i) Every terminating execution of the GI algorithm computes in d a common fixpoint
of the functions from F .
(ii) Suppose that all functions in F are monotonic. Then every terminating execution
of the GI algorithm computes in d the least common fixpoint of the functions from
F .
(iii) Suppose that all functions in F are inflationary and that (D, ⊑) is finite. Then
every execution of the GI algorithm terminates.
Proof.
(i) Consider the predicate I defined by:
I := ∀f ∈ F − G. f(d) = d.
Note that I is established by the assignment G := F. Moreover, it is easy to check that
by virtue of assumptions A, B and C, I is preserved by each while loop iteration. Thus
I is an invariant of the while loop of the algorithm. (In fact, assumptions A, B and C
are so chosen that I becomes an invariant.) Hence upon its termination I ∧ G = ∅
holds, that is ∀f ∈ F. f(d) = d.
(ii) This is a direct consequence of (i) and the Stabilization Lemma 1.
(iii) Consider the lexicographic ordering of the strict partial orderings (D, ⊐) and (N, <),
defined on the elements of D × N by
(d_1, n_1) <_lex (d_2, n_2) iff d_1 ⊐ d_2 or (d_1 = d_2 and n_1 < n_2).
We use here the inverse ordering ⊐ defined by: d_1 ⊐ d_2 iff d_2 ⊑ d_1 and d_1 ≠ d_2.
Given a finite set G we denote by card G the number of its elements. By assumption
all functions in F are inflationary so, by virtue of assumption B, with each while loop
iteration of the algorithm the pair
(d, card G)
strictly decreases in this ordering <_lex. But by assumption (D, ⊑) is finite, so (D, ⊐)
is well-founded and consequently so is (D × N, <_lex). This implies termination. □
In particular, we obtain the following conclusion.
Corollary 1 (GI). Suppose that (D, ⊑) is a finite partial ordering with the least
element ⊥. Let F be a finite set of monotonic and inflationary functions on D. Then
every execution of the GI algorithm terminates and computes in d the least common
fixpoint of the functions from F . 2
In practice, we are not only interested that the update function is easy to compute
but also that it generates small sets of functions. Therefore we show how the function
update can be made smaller when some additional information about the functions in
F is available. This will yield specialized versions of the GI algorithm. First we need
the following simple concepts.
Definition 3. Consider two functions f, g on a set D.
- We say that f and g commute if f(g(x)) = g(f(x)) for all x.
- We call f idempotent if f(f(x)) = f(x) for all x. □
The following result holds.
Theorem 2 (Update).
(i) If update(G, g, d) satisfies assumptions A, B and C, then so does the function
update(G, g, d) − C_g,
where C_g := {g} if g is idempotent and C_g := ∅ otherwise.
(ii) Suppose that for each g the set of functions Comm(g) from F is such that
- g ∉ Comm(g),
- each element of Comm(g) commutes with g.
If update(G, g, d) satisfies assumptions A, B and C, then so does the function
update(G, g, d) − Comm(g).
Proof. It suffices to establish in each case assumptions A and C. Let
A := {f ∈ F − G | f(d) = d and f(g(d)) ≠ g(d)}.
(i) After introducing the GI algorithm we noted already that g ∉ A. So assumption A
implies A ⊆ update(G, g, d) − C_g.
For assumption C it suffices to note that g(g(d)) ≠ g(d) implies that g is not idem-
potent, i.e., that C_g = ∅.
(ii) Consider f ∈ A. Suppose that f ∈ Comm(g). Then f(g(d)) = g(f(d)) = g(d),
which is a contradiction. So f ∉ Comm(g). Consequently, assumption A implies
A ⊆ update(G, g, d) − Comm(g).
For assumption C it suffices to use the fact that g ∉ Comm(g). □
We conclude that given an instance of the GI algorithm that employs a specific
update function, we can obtain other instances of it by using update functions modified
as above. Note that both modifications are independent of each other and therefore can
be applied together.
In particular, when each function is idempotent and the function Comm satisfies the
assumptions of (ii), then the following holds: if update(G, g, d) satisfies assumptions A, B and C, then
so does the function update(G, g, d) − ({g} ∪ Comm(g)).
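In code, the combined modification simply subtracts {g} and Comm(g) from whatever update set one starts with. The sketch below is our own illustration (the coarse base update and the toy functions are not from the paper); it shows how supplying commutativity information prunes the set of functions that get re-added:

def coarse_update(functions, G, g, d):
    """A deliberately coarse update function (sketch): after applying g, put
    back every function not in G, plus g itself. It satisfies assumptions A, B
    and C, but re-adds more functions than necessary."""
    if g(d) == d:
        return set()                      # assumption B
    return {f for f in functions if f not in G} | {g}

def reduced_update(functions, G, g, d, comm, g_idempotent=True):
    """Combined modification from the Update Theorem: subtract {g} when g is
    idempotent (part (i)) and subtract Comm(g) (part (ii)); `comm` maps each
    function to a set of functions known to commute with it."""
    upd = coarse_update(functions, G, g, d)
    if g_idempotent:
        upd.discard(g)
    return upd - comm.get(g, set())

# Toy usage: idempotent functions on subsets of {0, 1}; f1 and f2 commute.
f1 = lambda x: x | {0}
f2 = lambda x: x | {1}
comm = {f1: {f2}, f2: {f1}}
d = frozenset()
print(len(coarse_update([f1, f2], set(), f1, d)))          # 2: f1 and f2 put back
print(len(reduced_update([f1, f2], set(), f1, d, comm)))   # 0: both pruned away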
3 Compound Domains
In the applications we study, the iterations are carried out on a partial ordering that is
a Cartesian product of partial orderings. So assume now that the partial ordering
(D, ⊑) is the Cartesian product of some partial orderings (D_1, ⊑_1), . . ., (D_n, ⊑_n),
each with the least element ⊥_i. So D = D_1 × · · · × D_n.
Further, we assume that each function from F depends on and affects only certain
components of D. To be more precise we introduce a simple notation and terminology.
Definition 4. Consider a sequence of partial orderings (D_1, ⊑_1), . . ., (D_n, ⊑_n).
- By a scheme (on n) we mean a growing sequence of different elements from [1..n].
- Given a scheme s := i_1, . . ., i_l on n we denote by (D_s, ⊑_s) the Cartesian product
of the partial orderings (D_{i_1}, ⊑_{i_1}), . . ., (D_{i_l}, ⊑_{i_l}).
- Given a function f on D_s we say that f is with scheme s and say that f depends
on i if i is an element of s.
- Given an n-tuple d := d_1, . . ., d_n from D and a scheme s := i_1, . . ., i_l on n we
denote by d[s] the tuple d_{i_1}, . . ., d_{i_l}. In particular, for j ∈ [1..n], d[j] is the j-th
element of d. □
Consider now a function f with scheme s. We extend it to a function f⁺ from D to
D as follows. Take d ∈ D. We set
f⁺(d)[s] := f(d[s]) and f⁺(d)[t] := d[t],
where t is the scheme obtained by removing from 1, . . ., n the elements of s. We call f⁺ the canonic extension of f to
the domain D.
So f⁺(d)[i] = d[i] for any i not in the scheme s of f.
Informally, we can summarize it by saying that f + does not change the components on
which it does not depend. This is what we meant above by stating that each considered
function affects only certain components of D.
We now say that two functions, f with scheme s and g with scheme t, commute if
the functions f⁺ and g⁺ commute.
Instead of defining iterations for the case of the functions with schemes, we rather
reduce the situation to the one studied in the previous section and consider, equivalently,
the iterations of the canonic extensions of these functions to the common domain D.
However, because of this specific form of the considered functions, we can use now a
simple definition of the update function. More precisely, we have the following observation
Note 1 (Update). Suppose that each function in F is of the form f⁺. Then the following
function update satisfies assumptions A, B and C:
update(G, g⁺, d) := {f⁺ ∈ F − G | f depends on some i in s such that d[i] ≠ g⁺(d)[i]},
where g is with scheme s.
Proof. To deal with assumption A take a function f⁺ ∈ F − G such that f⁺(d) = d. Then
f⁺(e) = e for every e that coincides with d on all components that are in the
scheme of f.
Suppose now additionally that f⁺(g⁺(d)) ≠ g⁺(d). By the above, g⁺(d) is not such
an e, i.e., g⁺(d) differs from d on some component i in the scheme of f. In other words,
f depends on some i such that d[i] ≠ g⁺(d)[i]. This i is then in the scheme of g and
consequently f⁺ ∈ update(G, g⁺, d).
The proof for assumption B is immediate.
Finally, to deal with assumption C it suffices to note that g⁺(g⁺(d)) ≠ g⁺(d)
implies g⁺(d) ≠ d, which in turn implies that g⁺ ∈ update(G, g⁺, d). □
This, together with the GI algorithm, yields the following algorithm in which we
introduced a variable d' to hold the value of g⁺(d), and used F_0 := {f | f⁺ ∈ F}, the set of
the functions with schemes, instead of their canonic extensions to D.
GENERIC ITERATION ALGORITHM FOR COMPOUND DOMAINS (CD)
d := (⊥_1, . . ., ⊥_n);
d' := d;
G := F_0;
while G ≠ ∅ do
choose g ∈ G; suppose g is with scheme s;
G := G − {g};
d'[s] := g(d[s]);
G := G ∪ {f ∈ F_0 | f depends on some i in s such that d[i] ≠ d'[i]};
d[s] := d'[s]
od
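For concreteness, here is a direct Python rendering of the CD loop on a small, invented example (two components holding subsets of {1, 2, 3}, ordered by the reversed subset ordering); the encoding of schemes as index tuples is our own choice:

def cd(n, bottoms, funcs):
    """CD algorithm sketch. `funcs` is a list of pairs (scheme, f), where
    `scheme` is a tuple of component indices and f maps the tuple of those
    components to a new tuple of the same length."""
    d = list(bottoms)                              # d := (bottom_1, ..., bottom_n)
    G = set(range(len(funcs)))
    while G:
        k = G.pop()                                # choose g in G; G := G - {g}
        scheme, f = funcs[k]
        new_vals = f(tuple(d[i] for i in scheme))  # d'[s] := g(d[s])
        changed = {i for i, v in zip(scheme, new_vals) if d[i] != v}
        # put back every function that depends on a changed component
        G |= {m for m, (sch, _) in enumerate(funcs)
              if any(i in changed for i in sch)}
        for i, v in zip(scheme, new_vals):         # d[s] := d'[s]
            d[i] = v
    return tuple(d)

# Toy instance: both components start at {1, 2, 3} and are repeatedly shrunk.
full = frozenset({1, 2, 3})
shrink0 = ((0,), lambda t: (t[0] & frozenset({1, 2}),))
link01  = ((0, 1), lambda t: (t[0], t[1] & t[0]))     # component 1 follows 0
print(cd(2, (full, full), [shrink0, link01]))
# (frozenset({1, 2}), frozenset({1, 2})): the least common fixpoint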
The following corollary to the GI Theorem 1 and the Update Note 1 summarizes
the correctness of this algorithm. It corresponds to Theorem 11 of Apt [1] where the
iteration algorithms were introduced immediately on compound domains.
Corollary 2 (CD). Suppose that (D, ⊑) is a finite partial ordering that is a Cartesian
product of n partial orderings, each with the least element ⊥_i, with i ∈ [1..n]. Let
F be a finite set of functions on D, each of the form f⁺.
Suppose that all functions in F are monotonic and inflationary. Then every execution
of the CD algorithm terminates and computes in d the least common fixpoint of the
functions from F . 2
In the subsequent presentation we shall deal with the following two modifications
of the CD algorithm:
- CDI algorithm. This is the version of the CD algorithm in which all the functions
are idempotent and the function update defined in the Update Theorem 2(i) is
used.
- CDC algorithm. This is the version of the CD algorithm in which all the functions are
idempotent and the combined effect of the functions update defined in the Update
Theorem 2 is used for some function Comm.
For both algorithms the counterparts of the CD Corollary 2 hold.
4 From Partial Orderings to Constraint Satisfaction Problems
We have been so far completely general in our discussion. Recall that our aim is to
derive various constraint propagation algorithms. To be able to apply the results of
the previous section we need to relate various abstract notions that we used there to
constraint satisfaction problems.
This is perhaps the right place to recall the definition and to fix the notation. Consider
a finite sequence of variables X := x_1, . . ., x_n with respective
domains D := D_1, . . ., D_n associated with them. So each variable x_i ranges over the
domain D_i. By a constraint C on X we mean a subset of D_1 × · · · × D_n.
By a constraint satisfaction problem, in short CSP, we mean a finite sequence of
variables X with respective domains D, together with a finite set C of constraints, each
on a subsequence of X. We write it as ⟨C ; x_1 ∈ D_1, . . ., x_n ∈ D_n⟩.
Consider now an element d := d_1, . . ., d_n of D_1 × · · · × D_n and a subsequence
Y := x_{i_1}, . . ., x_{i_l} of X. Then we denote by d[Y] the sequence d_{i_1}, . . ., d_{i_l}.
By a solution to ⟨C ; x_1 ∈ D_1, . . ., x_n ∈ D_n⟩ we mean an element d ∈ D_1 × · · · × D_n
such that for each constraint C ∈ C on a sequence of variables Y we have d[Y] ∈ C.
We call a CSP consistent if it has a solution. Two CSP's P 1 and P 2 with the
same sequence of variables are called equivalent if they have the same set of solutions.
This definition extends in an obvious way to the case of two CSP's with the same sets
of variables.
Let us return now to the framework of the previous section. It involved:
(i) Partial orderings with the least elements;
These will correspond to partial orderings on the CSP's. In each of them the original
CSP will be the least element and the partial ordering will be determined by the
local consistency notion we wish to achieve.
(ii) Monotonic and inflationary functions with schemes;
These will correspond to the functions that transform the variable domains or the
constraints. Each function will be associated with one or more constraints.
(iii) Common fixpoints;
These will correspond to the CSP's that satisfy the considered notion of local consistency
Let us be now more specific about items (i) and (ii).
Re: (i)
To deal with the local consistency notions considered in this paper we shall introduce
two specific partial orderings on the CSP's. In each of them the considered CSP's
will be defined on the same sequences of variables.
We begin by fixing for each set D a collection F(D) of the subsets of D that includes
D itself. So F is a function that, given a set D, yields a set of its subsets to which D
belongs.
When dealing with the notion of hyper-arc consistency F(D) will be simply the set
P(D) of all subsets of D but for specific domains only specific subsets of D will be cho-
sen. For example, to deal with the constraint propagation for the linear constraints
on integer interval domains we need to choose for F(D) the set of all subintervals of
the original interval D.
When dealing with the path consistency, for a constraint C the collection F(C)
will be also the set P(C) of all subsets of C. However, in general other choices may
be needed. For example, to deal with the cutting planes method, we need to limit our
attention to the sets of integer solutions to finite sets of linear inequalities with integer
coefficients (see Apt [1, pages 193-194]).
Next, given two CSP's, φ := ⟨C ; x_1 ∈ D_1, . . ., x_n ∈ D_n⟩ and ψ := ⟨C' ; x_1 ∈ D'_1, . . ., x_n ∈ D'_n⟩, we write φ ⊑_d ψ iff
- D'_i ⊆ D_i for i ∈ [1..n],
- the constraints in C' are the restrictions of the constraints in C to the domains
D'_1, . . ., D'_n.
Next, given two CSP's, φ := ⟨C_1, . . ., C_k ; DE⟩ and ψ := ⟨C'_1, . . ., C'_k ; DE⟩ with the same sequence DE of domain expressions, we write φ ⊑_c ψ iff C'_i ⊆ C_i for i ∈ [1..k].
In what follows we call v d the domain reduction ordering and v c the constraint
reduction ordering. To deal with the arc consistency, hyper-arc consistency and directional
arc consistency notions we shall use the domain reduction ordering, and to deal
with path consistency and directional path consistency notions we shall use the constraint
reduction ordering.
We consider each ordering with some fixed initial CSP P as the least element. In
other words, each domain reduction ordering is of the form
({φ | P ⊑_d φ}, ⊑_d)
and each constraint reduction ordering is of the form
({φ | P ⊑_c φ}, ⊑_c).
Re: (ii)
The domain reduction ordering and the constraint reduction ordering are not directly
amenable to the analysis given in Section 3. Therefore, we shall rather use equivalent
partial orderings defined on compound domains. To this end note that φ ⊑_d ψ holds iff
the domains of ψ are subsets of the corresponding domains of φ (the constraints of ψ being then
the restrictions of those of φ). This equivalence means that for P := ⟨C ; x_1 ∈ D_1, . . ., x_n ∈ D_n⟩ we can identify
the domain reduction ordering ({φ | P ⊑_d φ}, ⊑_d) with the Cartesian product of the
partial orderings (F(D_i), ⊇), where i ∈ [1..n].
Additionally, each CSP in this domain reduction ordering is uniquely determined by
its domains and by the initial P . Indeed, by the definition of this ordering the constraints
of such a CSP are restrictions of the constraints of P to the domains of this CSP.
Similarly, φ ⊑_c ψ holds iff each constraint of ψ is a subset of the corresponding
constraint of φ. This allows us to identify the constraint reduction ordering
({φ | P ⊑_c φ}, ⊑_c) with the Cartesian product of the partial orderings (F(C_i), ⊇),
where i ∈ [1..k] and C_1, . . ., C_k are the constraints of P.
Also, each CSP in this constraint reduction ordering is uniquely
determined by its constraints and by the initial P .
In what follows instead of the domain reduction ordering and the constraint reduction
ordering we shall use the corresponding Cartesian products of the partial orderings.
So in these compound orderings the sequences of the domains (respectively, of the con-
straints) are ordered componentwise by the reversed subset ordering '. Further, in each
component ordering (F(D); ') the set D is the least element.
The reason we use these compound orderings is that we can now employ functions
with schemes, as used in Section 3. Each such function f is defined on a sub-Cartesian
product of the constituent partial orderings. Its canonic extension f⁺, introduced in
Section 3, is then defined on the "whole" Cartesian product.
Suppose now that we are dealing with the domain reduction ordering with the least
(initial) CSP P and that f is a function with scheme s on the corresponding compound ordering.
Then the sequence of the domains (D_1, . . ., D_n) uniquely determines a CSP in
this ordering, and the same holds for (D'_1, . . ., D'_n) := f⁺(D_1, . . ., D_n). So f⁺, and a fortiori f, can be
viewed as a function on the CSP's that are elements of this domain reduction ordering.
In other words, f can be viewed as a function on CSP's.
The same considerations apply to the constraint reduction ordering. We shall use
these observations when arguing about the equivalence between the original and the
final CSP's for various constraint propagation algorithms.
The considered functions with schemes will be now used in presence of the componentwise
ordering ⊇. The following observation will be useful.
Consider a function f on some Cartesian product F(E_1) × · · · × F(E_m). Note
that f is inflationary w.r.t. the componentwise ordering ⊇ if for all (X_1, . . ., X_m) we have
Y_i ⊆ X_i for all i ∈ [1..m], where (Y_1, . . ., Y_m) := f(X_1, . . ., X_m).
Also, f is monotonic w.r.t. the componentwise ordering ⊇ if for all (X_1, . . ., X_m) and (X'_1, . . ., X'_m)
such that X_i ⊆ X'_i for all i ∈ [1..m], the
following holds: if (Y_1, . . ., Y_m) := f(X_1, . . ., X_m) and (Y'_1, . . ., Y'_m) := f(X'_1, . . ., X'_m), then Y_i ⊆ Y'_i for all i ∈ [1..m].
In other words, f is monotonic w.r.t. ⊇ iff it is monotonic w.r.t. ⊆. This reversal of
the set inclusion of course does not hold for the inflationarity notion.
5 A Hyper-arc Consistency Algorithm
We begin by considering the notion of hyper-arc consistency of Mohr and Masini [13]
(we use here the terminology of Marriott and Stuckey [11]). The better-known notion of
arc consistency of Mackworth [10] is obtained by restricting one's attention to binary
constraints. Let us recall the definition.
Definition 5.
- Consider a constraint C on the variables x_1, . . ., x_n with the respective domains
D_1, . . ., D_n, that is C ⊆ D_1 × · · · × D_n. We call C hyper-arc consistent if for every
i ∈ [1..n] and for every a ∈ D_i there exists d ∈ C such that a = d[i].
- We call a CSP hyper-arc consistent if all its constraints are hyper-arc consistent. □
Intuitively, a constraint C is hyper-arc consistent if for every involved domain each
element of it participates in a solution to C.
To employ the CDI algorithm of Section 3 we now make specific choices involving
the items (i), (ii) and (iii) of the previous section.
Re: (i) Partial orderings with the least elements.
As already mentioned in the previous section, for the function F we choose the
powerset function P , so for each domain D we put F(D) := P(D).
Given a CSP P with the sequence D_1, . . ., D_n of the domains we take the domain
reduction ordering with P as its least element. As already noted we can identify this
ordering with the Cartesian product of the partial orderings (P(D_i), ⊇), where i ∈
[1..n]. The elements of this compound ordering are thus sequences (X_1, . . ., X_n) of
respective subsets of the domains D_1, . . ., D_n ordered componentwise by the reversed
subset ordering ⊇.
Re: (ii) Monotonic and inflationary functions with schemes.
Given a constraint C on the variables y_1, . . ., y_k with respective domains E_1, . . ., E_k,
we abbreviate for each j ∈ [1..k] the set {d[j] | d ∈ C} to Π_j(C). Thus Π_j(C) consists
of all j-th coordinates of the elements of C. Consequently, Π_j(C) is a subset of
the domain E_j of the variable y_j.
We now introduce for each i ∈ [1..k] the following function π_i on P(E_1) × · · · × P(E_k):
π_i(X_1, . . ., X_k) := (X_1, . . ., X_{i-1}, X'_i, X_{i+1}, . . ., X_k),
where
X'_i := {d[i] | d ∈ X_1 × · · · × X_k and d ∈ C}.
That is, X'_i = Π_i(C ∩ (X_1 × · · · × X_k)). Each function π_i is associated
with a specific constraint C. Note that X'_i ⊆ X_i, so each function π_i boils down to a
projection on the i-th component.
Re: (iii) Common fixpoints.
Their use is clarified by the following lemma that also lists the relevant properties
of the functions - i (see Apt [1, pages 197 and 202]).
Lemma 2 (Hyper-arc Consistency). (i) A CSP ⟨C ; x_1 ∈ D_1, ..., x_n ∈ D_n⟩ is hyper-arc consistent iff (D_1, ..., D_n) is a common fixpoint of all functions π_i⁺ associated with the constraints from C.
(ii) Each projection function π_i associated with a constraint C is
- inflationary w.r.t. the componentwise ordering ⊇,
- monotonic w.r.t. the componentwise ordering ⊇.
By taking into account only the binary constraints we obtain an analogous characterization
of arc consistency. The functions - 1 and - 2 can then be defined more directly
as follows:
π_1(X, Y) := (X', Y) where X' := {a ∈ X | ∃ b ∈ Y, (a, b) ∈ C}, and
π_2(X, Y) := (X, Y') where Y' := {b ∈ Y | ∃ a ∈ X, (a, b) ∈ C}.
Fix now a CSP P. By instantiating the CDI algorithm with F_0 := {f | f is a π_i function associated with a constraint of P} and with each ⊥_i equal to D_i we get the HYPER-ARC algorithm that enjoys the following
properties.
Theorem 3 (HYPER-ARC Algorithm). Consider a CSP P := ⟨C ; x_1 ∈ D_1, ..., x_n ∈ D_n⟩ where each D_i is finite.
The HYPER-ARC algorithm always terminates. Let P' be the CSP determined by P and the sequence of the domains D'_1, ..., D'_n computed in d. Then
(i) P' is the ⊑_d-least CSP that is hyper-arc consistent,
(ii) P' is equivalent to P. □
Due to the definition of the v d ordering the item (i) can be rephrased as follows.
Consider all hyper-arc consistent CSPs that are of the form ⟨C' ; x_1 ∈ D'_1, ..., x_n ∈ D'_n⟩, where D'_i ⊆ D_i for i ∈ [1..n] and the constraints in C' are the restrictions of the constraints in C to the domains D'_1, ..., D'_n. Then among these CSPs P' has the largest domains.
Proof. The termination and (i) are immediate consequences of the counterpart of the
CD Corollary 2 for the CDI algorithm and of the Hyper-arc Consistency Lemma 2.
To prove (ii) note that the final CSP P 0 can be obtained by means of repeated
applications of the projection functions - i starting with the initial CSP P . (Conforming
to the discussion at the end of Section 4 we view here each such function as a function
on CSP's). As noted in Apt [1, pages 197 and 201]) each of these functions transforms
a CSP into an equivalent one. 2
6 An Improvement: the AC-3 Algorithm
In this section we show how we can exploit an information about the commutativity of
the - i functions. Recall that in Section 3 we modified the notion of commutativity for
the case of functions with schemes. We now need the following lemma.
Lemma 3 (Commutativity). Consider a CSP and two constraints of it, C on the variables y_1, ..., y_k and E on the variables z_1, ..., z_l.
(i) For i, j ∈ [1..k] the functions π_i and π_j of the constraint C commute.
(ii) If the variables y_i and z_j are identical then the functions π_i of C and π_j of E commute.
Proof.
(i) It suffices to notice that for each k-tuple X of subsets of the domains of the respective variables we have
(π_j ∘ π_i)(X) = (π_i ∘ π_j)(X) = (X_1, ..., X'_i, ..., X'_j, ..., X_k),
where X'_i := Π_i((X_1 × ··· × X_k) ∩ C) and X'_j := Π_j((X_1 × ··· × X_k) ∩ C), and where we assumed that i < j.
(ii) Let the considered CSP be of the form ⟨C ; x_1 ∈ D_1, ..., x_n ∈ D_n⟩. Assume that the common variable of y_1, ..., y_k and z_1, ..., z_l is identical to the variable x_h.
Further, let Sol(C, E) denote the set of d ∈ D_1 × ··· × D_n such that d[s] ∈ C and d[t] ∈ E, where s is the scheme of C and t is the scheme of E.
Finally, let f denote the π_i function of C and g the π_j function of E. It is easy to check that for each n-tuple X of subsets of D_1, ..., D_n, respectively, we have
f(g(X)) = g(f(X)) = (X_1, ..., X_{h-1}, X'_h, X_{h+1}, ..., X_n),
where X'_h := Π_h(Sol(C, E) ∩ (X_1 × ··· × X_n)). □
It is worthwhile to note that not all pairs of the π_i and π_j functions commute.
Example 1.
(i) First, we consider the case of two binary constraints on the same variables. Consider
two variables, x and y, with the corresponding domains D_x := {a, b}, D_y := {c, d} and two constraints on x, y: C_1 := {(a, c), (b, d)} and C_2 := {(a, d)}.
Next, consider the π_1 function of C_1 and the π_2 function of C_2. Then applying these functions in one order, namely π_2 ∘ π_1, to (D_x, D_y) yields D_x unchanged, whereas applying them in the other order, π_1 ∘ π_2, yields D_x equal to {b}.
(ii) Next, we show that the commutativity can also be violated due to sharing of a
single variable. As an example take the variables x, y, z with the corresponding domains D_x := {a, b}, D_y := {b}, D_z := {c, d}, and the constraints C_1 := {(a, b)} on x, y and C_2 := {(a, c), (b, d)} on x, z.
Consider now the π_1⁺ function of C_1 and the π_2⁺ function of C_2. Then applying these functions in one order, namely π_2⁺ ∘ π_1⁺, to (D_x, D_y, D_z) yields D_z equal to {c}, whereas applying them in the other order, π_1⁺ ∘ π_2⁺, yields D_z equal to {c, d}. □
Fix now a CSP. We derive a modification of the HYPER-ARC algorithm by instantiating
this time the CDC algorithm. As before we use the set of functions F_0 := {f | f is a π_i function associated with a constraint of P} and each ⊥_i equal to D_i. Additionally we employ the following function Comm, where π_i is associated with a constraint C:
Comm(π_i) := {π_j | i ≠ j and π_j is associated with the constraint C}
∪ {π_j | π_j is associated with a constraint E and the i-th variable of C and the j-th variable of E coincide}.
By virtue of the Commutativity Lemma 3 each set Comm(g) satisfies the assumptions
of the Update Theorem 2(ii).
By limiting oneself to the set of functions - 1 and - 2 associated with the binary
constraints, we obtain an analogous modification of the corresponding arc consistency
algorithm.
Using now the counterpart of the CD Corollary 2 for the CDC algorithm we conclude
that the above algorithm enjoys the same properties as the HYPER-ARC algorithm, that
is the counterpart of the HYPER-ARC Algorithm Theorem 3 holds.
Let us clarify now the difference between this algorithm and the HYPER-ARC algorithm
when both of them are limited to the binary constraints.
Assume that the considered CSP is of the form hC ; DEi. We reformulate the above
algorithm as follows. Given a binary relation R, we put
R^T := {(b, a) | (a, b) ∈ R}.
For F_0 we now choose the set of the π_1 functions of the constraints or relations from the set
S_0 := {C | C is a binary constraint from C} ∪ {C^T | C is a binary constraint from C}.
Finally, for each π_1 function g of some C ∈ S_0 on x, y we define
Comm(g) := {f | f is the π_1 function of C^T}
∪ {f | f is the π_1 function of some E ∈ S_0 on x, z where z ≢ y}.
Assume now that
for each pair of variables x; y at most one constraint exists on x; y. (1)
Consider now the corresponding instance of the CDC algorithm. By incorporating
into it the effect of the functions - 1 on the corresponding domains, we obtain the following
algorithm known as the AC-3 algorithm of Mackworth [10].
We assume here that DE := x_1 ∈ D_1, ..., x_n ∈ D_n.
AC-3 ALGORITHM
S_0 := {C | C is a binary constraint from C} ∪ {C^T | C is a binary constraint from C};
S := S_0;
while S ≠ ∅ do
  choose C ∈ S; suppose C is on x_i, x_j;
  D_i := {a ∈ D_i | ∃ b ∈ D_j, (a, b) ∈ C};
  if D_i changed then
    S := S ∪ {C' ∈ S_0 | C' is on the variables y, x_i where y ≢ x_j}
  fi;
  S := S − {C}
od
It is useful to mention that the corresponding reformulation of the HYPER-ARC
algorithm differs in the second assignment to S which is then
S := S ∪ {C' ∈ S_0 | C' is on the variables y, z where y is x_i or z is x_i}.
So we "capitalized" here on the commutativity of the corresponding projection
functions - 1 as follows. First, no constraint or relation on x i ; z for some z is added
to S. Here we exploited part (ii) of the Commutativity Lemma 3.
Second, no constraint or relation on x_j, x_i is added to S. Here we exploited part (i) of the Commutativity Lemma 3, because by assumption (1) C^T is the only constraint or relation on x_j, x_i and its π_1 function coincides with the π_2 function of C.
In case the assumption (1) about the considered CSP is dropped, the resulting algorithm
is somewhat less readable. However, once we use the following modified definition
of Comm(π_1), where π_1 is the π_1 function of some C ∈ S_0 on x, y:
Comm(π_1) := {f | f is the π_1 function of some E ∈ S_0 on x, z where z ≢ y},
we get an instance of the CDC algorithm which differs from the AC-3 algorithm in that the qualification "where y ≢ x_j" is removed from the definition of the second assignment to the set S.
7 A Path Consistency Algorithm
The notion of path consistency was introduced in Montanari [15]. It is defined for special
type of CSP's. For simplicity we ignore here unary constraints that are usually
present when studying path consistency.
Definition 6. We call a CSP P normalized if for each subsequence X of its variables
there exists at most one constraint on X in P .
Given a normalized CSP and a subsequence X of its variables we denote by CX the
unique constraint on the variables X if it exists and otherwise the "universal" relation
on X that equals the Cartesian product of the domains of the variables in X . 2
Every CSP is trivially equivalent to a normalized CSP. Indeed, for each subsequence
X of the variables of P such that a constraint on X exists, we just need to replace the
set of all constraints on X by its intersection. Note that the universal relations CX are
not constraints of the normalized CSP.
To simplify the notation, given two binary relations R and S we define their composition · by
R · S := {(a, c) | ∃ b, (a, b) ∈ R and (b, c) ∈ S}.
Note that if R is a constraint on the variables x, y and S a constraint on the variables y, z, then R · S is a constraint on the variables x, z.
Given a subsequence x; y of two variables of a normalized CSP we introduce a
"supplementary" relation C y;x defined by
Recall that the relation C T was introduced in the previous section. The supplementary
relations are not parts of the considered CSP as none of them is defined on a
subsequence of its variables, but they allow us a more compact presentation. We now
introduce the following notion.
Definition 7. We call a normalized CSP path consistent if for each subset {x, y, z} of its variables we have C_{x,z} ⊆ C_{x,y} · C_{y,z}. □
In other words, a normalized CSP is path consistent if for each subset {x, y, z} of its variables the following holds:
if (a, c) ∈ C_{x,z}, then there exists b such that (a, b) ∈ C_{x,y} and (b, c) ∈ C_{y,z}.
In the above definition we used the relations of the form C u;v for any subset fu; vg
of the considered sequence of variables. If u; v is not a subsequence of the original sequence
of variables, then C u;v is a supplementary relation that is not a constraint of the
original CSP. At the expense of some redundancy we can rewrite the above definition so
that only the constraints of the considered CSP and the universal relations are involved.
This is the contents of the following simple observation that will be useful later in this
section.
Note 2 (Alternative Path Consistency). A normalized CSP is path consistent iff for each subsequence x, y, z of its variables we have
C_{x,y} ⊆ C_{x,z} · (C_{y,z})^T,
C_{x,z} ⊆ C_{x,y} · C_{y,z},
C_{y,z} ⊆ (C_{x,y})^T · C_{x,z}. □
Fig. 1. Three relations on three variables
Figure 1 clarifies this observation. For instance, an indirect path from x to y via z
requires the reversal of the arc (y; z). This translates to the first formula.
Recall that for a subsequence x; z of the variables the relations C x;y ; C x;z and
C y;z denote either constraints of the considered normalized CSP or the universal binary
relations on the domains of the corresponding variables.
Given a subsequence x, y, z of the variables of P we now introduce three functions on P(C_{x,y}) × P(C_{x,z}) × P(C_{y,z}):
f^z_{x,y}(P, Q, R) := (P ∩ (Q · R^T), Q, R),
f^y_{x,z}(P, Q, R) := (P, Q ∩ (P · R), R),
f^x_{y,z}(P, Q, R) := (P, Q, R ∩ (P^T · Q)).
Finally, we introduce common fixpoints of the above defined functions. To this end
we need the following counterpart of the Hyper-arc Consistency Lemma 2.
Lemma 4 (Path Consistency).
(i) A normalized CSP ⟨C_1, ..., C_k ; DE⟩ is path consistent iff (C_1, ..., C_k) is a common fixpoint of all functions (f^z_{x,y})⁺, (f^y_{x,z})⁺ and (f^x_{y,z})⁺ associated with the subsequences x, y, z of its variables.
(ii) The functions f^z_{x,y}, f^y_{x,z} and f^x_{y,z} are
- inflationary w.r.t. the componentwise ordering ⊇,
- monotonic w.r.t. the componentwise ordering ⊇.
Proof. (i) is a direct consequence of the Alternative Path Consistency Note 2. The proof of (ii) is straightforward. These properties of the functions f^z_{x,y}, f^y_{x,z} and f^x_{y,z} were already mentioned in Apt [1, page 193]. □
We now instantiate the CDI algorithm with the set of functions
F_0 := {f | x, y, z is a subsequence of the variables of P and f ∈ {f^z_{x,y}, f^y_{x,z}, f^x_{y,z}}} and each ⊥_i equal to C_i.
Call the resulting algorithm the PATH algorithm. It enjoys the following properties.
Theorem 4 (PATH Algorithm). Consider a normalized CSP P := ⟨C_1, ..., C_k ; DE⟩. Assume that each constraint C_i is finite.
The PATH algorithm always terminates. Let P' := ⟨C'_1, ..., C'_k ; DE⟩, where the sequence of the constraints C'_1, ..., C'_k is computed in d. Then
(i) P' is the ⊑_c-least CSP that is path consistent,
(ii) P' is equivalent to P.
As in the case of the HYPER-ARC Algorithm Theorem 3 the item (i) can be rephrased as follows. Consider all path consistent CSPs that are of the form ⟨C''_1, ..., C''_k ; DE⟩, where C''_i ⊆ C_i for i ∈ [1..k]. Then among these CSPs P' has the largest constraints.
Proof. The proof is analogous to that of the HYPER-ARC Algorithm Theorem 3.
To prove (ii) we now note that the final CSP P 0 can be obtained by means of
repeated applications of the functions f^z_{x,y}, f^y_{x,z} and f^x_{y,z} starting with the initial CSP P.
(Conforming to the discussion at the end of Section 4 we view here each such function
as a function on CSP's). As noted in Apt [1, pages 193 and 195]) each of these functions
transforms a CSP into an equivalent one. 2
8 An Improvement: the PC-2 Algorithm
As in the case of the hyper-arc consistency we can improve the PATH algorithm by
taking into account the commutativity information.
Fix a normalized CSP P . We abbreviate the statement "x; y is a subsequence of the
variables of P " to x OE y. We now have the following lemma.
Lemma 5 (Commutativity). Suppose that x OE y and let z, u be some variables of P such that {u, z} ∩ {x, y} = ∅. Then the functions f^z_{x,y} and f^u_{x,y} commute.
In other words, two functions with the same pair of variables as a subscript commute
Proof. The following intuitive argument may help to understand the subsequent, more
formal justification. First, both considered functions have three arguments but share
exactly one argument and modify only this shared argument. Second, both functions
are defined in terms of the set-theoretic intersection operation """ applied to two, un-
changed, arguments. This yields commutativity since """ is commutative.
In the more formal argument note first that the "relative" positions of z and of u
w.r.t. x and y are not specified. There are in total three possibilities concerning z and
three possibilities concerning u. For instance, z can be "before" x , "between" x and y
or "after" y. So we have to consider in total nine cases.
In what follows we limit ourselves to an analysis of three representative cases. The
proof for the remaining six cases is completely analogous.
Case 1. y OE z and y OE u.
Fig. 2. Four variables connected by directed arcs
It helps to visualize these variables as in Figure 2. Informally, the functions f z
x;y
and f u
x;y correspond, respectively, to the upper and lower triangle in this figure. The
fact that these triangles share an edge corresponds to the fact that the functions f z
x;y and
f u
x;y share precisely one argument, the one from P(C x;y ).
Ignoring the arguments that do not correspond to the schemes of the functions f z
x;y
and f u
x;y we can assume that the functions (f z
are both defined on
Each of these functions changes only the first argument. In fact, for all elements
of, respectively, P(C x;y ); P(C x;z ); P(C y;z ); P(C x;u ) and P(C y;u ), we have
(f z
Case 2. x OE z OE y OE u.
The intuitive explanation is analogous as in Case 1. We confine ourselves to noting
that (f z
are now defined on
but each of them changes only the second argument. In fact, we have
(f z
Case 3. z OE x and y OE u.
In this case the functions (f z
are defined on
but each of them changes only the third argument. In fact, we have
(f z
now instantiate the CDC algorithm with the same set of functions F 0 as in Section
7. Additionally, we use the function Comm defined as follows, where x OE y and
where z 62 fx; yg:
Comm(f^z_{x,y}) := {f^u_{x,y} | u ∉ {x, y, z}}.
Thus for each function g the set Comm(g) contains precisely m − 3 elements, where m is the number of variables of the considered CSP. This quantifies the maximal
"gain" obtained by using the commutativity information: at each "update" stage of the
corresponding instance of the CDC algorithm we add up to m − 3 fewer elements than in
the case of the corresponding instance of the CDI algorithm considered in the previous
section.
By virtue of the Commutativity Lemma 5 each set Comm(g) satisfies the assumptions
of the Update Theorem 2(ii). We conclude that the above instance of the CDC
algorithm enjoys the same properties as the original PATH algorithm, that is the counterpart
of the PATH Algorithm Theorem 4 holds. To make this modification of the PATH
algorithm easier to understand we proceed as follows.
Each function of the form f u
x;y where x OE y and u 62 fx; yg can be identified with
the sequence x; u; y of the variables. (Note that the "relative" position of u w.r.t. x and
y is not fixed, so x; u; y does not have to be a subsequence of the variables of P.) This
allows us to identify the set of functions F 0 with the set
Next, assuming that x OE y, we introduce the following set of triples of different
variables of P :
xg.
Informally, V x;y is the subset of V 0 that consists of the triples that begin or end
with either x; y or x. This corresponds to the set of functions in one of the following
forms: f y
u;y and f y
u;x .
The above instance of the CDC algorithm then becomes the following PC-2 algorithm
of Mackworth [10]. Here initially V := V_0.
PC-2 ALGORITHM
while V ≠ ∅ do
  choose p ∈ V; suppose p = (x, u, y);
  apply f^u_{x,y} to its current domains;
  if C_{x,y} changed then V := V ∪ V_{x,y} fi;
  V := V − {p}
od
Here the phrase "apply f u
x;y to its current domains" can be made more precise if the
"relative" position of u w.r.t. x and y is known. Suppose for instance that u is "before"
x and y. Then f^u_{x,y} is defined on P(C_{u,x}) × P(C_{u,y}) × P(C_{x,y}) by
f^u_{x,y}(P, Q, R) := (P, Q, R ∩ (P^T · Q)),
so the above phrase "apply f^u_{x,y} to its current domains" can be replaced by the assignment
C_{x,y} := C_{x,y} ∩ ((C_{u,x})^T · C_{u,y}).
Analogously for the other two possibilities.
The difference between the PC-2 algorithm and the corresponding representation
of the PATH algorithm lies in the way the modification of the set V is carried out. In
the case of the PATH algorithm the second assignment to V is
9 Simple Iteration Algorithms
Let us return now to the framework of Section 2. We analyze here when the while loop
of the GENERIC ITERATION ALGORITHM GI can be replaced by a for loop. First, we
weaken the notion of commutativity as follows.
Definition 8. Consider a partial ordering (D; v ) and functions f and g on D. We say
that f semi-commutes with g (w.r.t. v ) if f(g(x)) v g(f(x)) for all x. 2
The following lemma provides an answer to the question just posed. Here and elsewhere
we omit brackets when writing repeated applications of functions to an argument.
Lemma 6 (Simple Iteration). Consider a partial ordering (D; v ) with the least element
⊥. Let F := f_1, ..., f_k be a finite sequence of monotonic, inflationary and idempotent functions on D. Suppose that f_i semi-commutes with f_j for i > j, that is,
f_i(f_j(x)) ⊑ f_j(f_i(x)) for all x. (2)
Then f_1(f_2(... f_k(⊥) ...)) is the least common fixpoint of the functions from F. □
Proof. We prove first that for i 2 [1::k] we have
Indeed, by the assumption (2) we have the following string of inclusions, where the last
one is due to the idempotence of the considered functions:
Additionally, by the inflationarity of the considered functions, we also have for
[1::k]
So f_1 f_2 ··· f_k(⊥) is a common fixpoint of the functions from F. This means that the iteration of F that starts with ⊥, f_k(⊥), f_{k−1} f_k(⊥), ..., f_1 f_2 ··· f_k(⊥) eventually stabilizes at f_1 f_2 ··· f_k(⊥). By the Stabilization Lemma 1 we get the desired conclusion. □
The above lemma provides us with a simple way of computing the least common
fixpoint of a set of finite functions that satisfy the assumptions of this lemma, in particular
condition (2). Namely, it suffices to order these functions in an appropriate way
and then to apply each of them just once, starting with the argument ?.
To this end we maintain the considered functions not in a set but in a list. Given a
non-empty list L we denote its head by head(L) and its tail by tail(L). Next, given a
sequence of elements a_1, ..., a_n with n ≥ 0, we denote by [a_1, ..., a_n] the list formed by them. If n = 0, this list is empty and is denoted by [ ].
The following algorithm is a counterpart of the GI algorithm. We assume in it that
condition (2) holds for the functions f_1, ..., f_k.
SIMPLE ITERATION ALGORITHM (SI)
d := ⊥;
L := [f_k, f_{k−1}, ..., f_1];
for i := 1 to k do
  g := head(L); L := tail(L);
  d := g(d)
od
The following immediate consequence of the Simple Iteration Lemma 6 is a counterpart
of the GI Corollary 1.
Corollary 3 (SI). Suppose that (D; v ) is a partial ordering with the least element
?. Let F := f be a finite sequence of monotonic, inflationary and idempotent
functions on D such that (2) holds. Then the SI algorithm terminates and computes in
d the least common fixpoint of the functions from F . 2
Note that in contrast to the GI Corollary 1 we do not require here that the partial ordering
is finite. Because at each iteration of the for loop exactly one element is removed
from the list L, at the end of this loop the list L is empty. Consequently, this algorithm
is a reformulation of the one in which the line
for to k do
is replaced by
while
So we can view the SI algorithm as a specialization of the GI algorithm of Section
2 in which the elements of the set of functions G (here represented by the list L) are
selected in a specific way and in which the update function always yields the empty
set.
In Section 3 we refined the GI algorithm for the case of compound domains. An
analogous refinement of the SI algorithm is straightforward and omitted. In the next
two sections we show how we can use this refinement of the SI algorithm to derive two
well-known constraint propagation algorithms.
Directional Arc Consistency Algorithm
We consider here the notion of directional arc consistency of Dechter and Pearl [7]. Let
us recall the definition.
Definition 9. Assume a linear ordering OE on the considered variables.
- Consider a binary constraint C on the variables x; y with the domains D x and D y .
We call C directionally arc consistent w.r.t. OE if
- for every a ∈ D_x there exists b ∈ D_y such that (a, b) ∈ C, provided x OE y,
- for every b ∈ D_y there exists a ∈ D_x such that (a, b) ∈ C, provided y OE x.
So out of these two conditions on C exactly one needs to be checked.
We call a CSP directionally arc consistent w.r.t. OE if all its binary constraints are
directionally arc consistent w.r.t. OE. 2
To derive an algorithm that achieves this local consistency notion we first characterize
it in terms of fixpoints. To this end, given a P and a linear ordering OE on its variables,
we rather reason in terms of the equivalent CSP P OE obtained from P by reordering its
variables along OE so that each constraint in P_OE is on a sequence of variables x_1, ..., x_k such that x_1 OE x_2 OE ··· OE x_k.
The following simple characterization holds.
Lemma 7 (Directional Arc Consistency). Consider a CSP P with a linear ordering OE on its variables. Let P_OE := ⟨C ; x_1 ∈ D_1, ..., x_n ∈ D_n⟩. Then P is directionally arc consistent w.r.t. OE iff (D_1, ..., D_n) is a common fixpoint of the functions π_1⁺ associated with the binary constraints from P_OE. □
We now instantiate in an appropriate way the SI algorithm for compound domains
with all the - 1 functions associated with the binary constraints from P OE . In this way
we obtain an algorithm that achieves for P directional arc consistency w.r.t. OE. First,
we adjust the definition of semi-commutativity to functions with different schemes. To
this end consider a sequence of partial orderings (D_1, ⊑_1), ..., (D_n, ⊑_n) and their Cartesian product (D, ⊑). Take two functions, f with scheme s and g with scheme t. We say that f semi-commutes with g (w.r.t. ⊑) if f⁺ semi-commutes with g⁺ w.r.t. ⊑, that is if
f⁺(g⁺(Q)) ⊑ g⁺(f⁺(Q)) for all Q ∈ D.
The following lemma is crucial.
Lemma 8 (Semi-commutativity). Consider a CSP and two binary constraints of it,
C 1 on u; z and C 2 on x; y, where y OE z.
Then the - 1 function of C 1 semi-commutes with the - 1 function of C 2 w.r.t. the
componentwise ordering '.
Proof. Denote by f u;z the - 1 function of C 1 and by f x;y the - 1 function of C 2 . The
following cases arise.
Case 1.
Then the functions f u;z and f x;y commute since their schemes are disjoint.
Case 2. fu; zg " fx; yg 6= ;.
Subcase 1.
Then the functions f u;z and f x;y commute by virtue of the Commutativity Lemma
3(ii).
Subcase 2. y.
Let the considered CSP be of the form hC We can
rephrase the claim as follows, where we denote now f u;z by f y;z : For all (X
To prove it note first that for some
. We now have
where
and
whereas
where
By the Hyper-arc Consistency Lemma 2(ii) each function - i is inflationary and
monotonic w.r.t. the componentwise ordering '. By the first property applied to f y;z
we have
so by the second property applied to f x;y we have X 0
This establishes the claim.
Subcase 3. z = x.
This subcase cannot arise since then the variable z precedes the variable y whereas
by assumption the converse is the case.
Subcase 4. z = y.
We can assume by Subcase 1 that u 6= x. Then the functions f u;z and f x;y commute
since each of them changes only its first component.
This concludes the proof. 2
Consider now a CSP P with a linear ordering OE on its variables and the corresponding
CSP P OE . To be able to apply the above lemma we order the - 1 functions of
the binary constraints of P OE in an appropriate way. Namely, given two
associated with a constraint on u; z and g associated with a constraint on x; y, we put f
before g if y OE z.
More precisely, let x_1, ..., x_n be the sequence of the variables of P_OE. So x_1 OE x_2 OE ··· OE x_n. For m ∈ [2..n], let the list L_m consist of the π_1 functions of those binary constraints of P_OE that are on x_j, x_m for some x_j. We order each list L_m arbitrarily.
Consider now the list L resulting from appending L_n, L_{n−1}, ..., L_2 in that order, so with the elements of L_n in front. Then by virtue of the Semi-commutativity Lemma 8 if
the function f precedes the function g in the list L, then f semi-commutes with g w.r.t.
the componentwise ordering '.
We instantiate now the refinement of the SI algorithm for the compound domains
by the above-defined list L and each ? i equal to the domain D i of the variable x i . We
assume that L has k elements. We obtain then the following algorithm.
DIRECTIONAL ARC CONSISTENCY ALGORITHM (DARC)
d := (⊥_1, ..., ⊥_n);
for i := 1 to k do
  g := head(L); L := tail(L);
  suppose g is with scheme s;
  d[s] := g(d[s])
od
This algorithm enjoys the following properties.
Theorem 5 (DARC Algorithm). Consider a CSP P with a linear ordering OE on its variables. Let P_OE := ⟨C ; x_1 ∈ D_1, ..., x_n ∈ D_n⟩.
The DARC algorithm always terminates. Let P' be the CSP determined by P_OE and the sequence of the domains D'_1, ..., D'_n computed in d. Then
(i) P' is the ⊑_d-least CSP in {P_1 | P ⊑_d P_1} that is directionally arc consistent w.r.t. OE,
(ii) P' is equivalent to P. □
The termination and (i) are immediate consequences of the counterpart of the SI
Corollary 3 for the SI algorithm refined for the compound domains and of the Directional
Arc Consistency Lemma 7.
The proof of (ii) is analogous to that of the HYPER-ARC Algorithm Theorem 3(ii).
Note that in contrast to the HYPER-ARC Algorithm Theorem 3 we do not need to
assume here that each domain is finite.
Assume now that for each pair of variables x; y of the original CSP P there exists
precisely one constraint on x, y. The same holds then for P_OE. Suppose that P_OE := ⟨C ; x_1 ∈ D_1, ..., x_n ∈ D_n⟩. Denote the unique constraint of P_OE on x_i, x_j by C_{i,j}.
The above DARC algorithm can then be rewritten as the following algorithm known as
the DAC algorithm of Dechter and Pearl [7]:
for j := n to 2 by −1 do
  for i := 1 to j − 1 do
    D_i := {a ∈ D_i | ∃ b ∈ D_j, (a, b) ∈ C_{i,j}}
  od
od
11 DPC: a Directional Path Consistency Algorithm
In this section we deal with the notion of directional path consistency defined in Dechter
and Pearl [7]. Let us recall the definition.
Definition 10. Assume a linear ordering OE on the considered variables. We call a normalized
CSP directionally path consistent w.r.t. OE if for each subset {x, y, z} of its variables such that x OE z and y OE z we have C_{x,y} ⊆ C_{x,z} · C_{z,y}. □
This definition relies on the supplementary relations because the ordering OE may
differ from the original ordering of the variables. For example, in the original ordering
z can precede x. In this case C z;x and not C x;z is a constraint of the CSP under
consideration.
But just as in the case of path consistency we can rewrite this definition using the
original constraints only. In fact, we have the following analogue of the Alternative Path
Consistency Note 2.
Note 3 (Alternative Directional Path Consistency). A normalized CSP is directionally
path consistent w.r.t. OE iff for each subsequence x, y, z of its variables we have C_{x,y} ⊆ C_{x,z} · (C_{y,z})^T. Thus out of the three inclusions of Note 2 precisely one needs to be checked.
As before we now characterize this local consistency notion in terms of fixpoints.
To this end, as in the previous section, given a normalized CSP P we rather consider
the equivalent CSP P OE . The variables of P OE are ordered according to OE and on each
pair of its variables there exists a unique constraint.
The following counterpart of the Directional Arc Consistency Lemma 7 is a direct
consequence of the Alternative Directional Path Consistency Note 3.
Lemma 9 (Directional Path Consistency). Consider a normalized CSP P with a linear
ordering OE on its variables. Let P_OE := ⟨C_1, ..., C_k ; DE⟩. Then P is directionally path consistent w.r.t. OE iff (C_1, ..., C_k) is a common fixpoint of all functions (f^z_{x,y})⁺ associated with the subsequences x, y, z of the variables of P_OE. □
To obtain an algorithm that achieves directional path consistency we now instantiate
in an appropriate way the SI algorithm. To this end we need the following lemma.
Consider a normalized CSP and two subsequences of its variables, x_1, y_1, z and x_2, y_2, u. Suppose that u OE z. Then the function f^z_{x1,y1} semi-commutes with the function f^u_{x2,y2} w.r.t. the componentwise ordering ⊇.
Proof. The following cases arise.
Case 1.
In this and other cases by an equality between two pairs of variables we mean that
both the first component variables, here x 1 and x 2 , and the second component variables,
here y 1 and y 2 , are identical.
In this case the functions f z
x1 ;y1 and f u
x2 ;y2 commute by virtue of the Commutativity
Lemma 5.
Case 2.
Ignoring the arguments that do not correspond to the schemes of the functions f z
and f u
x2 ;y2 we can assume that the functions (f z
are both defined
on
The following now holds for all elements respectively, P(C x1 ;y1 ),
(f z
Case 3.
Again ignoring the arguments that do not correspond to the schemes of the functions
f z
x1 ;y1 and f u
x2 ;y2 we can assume that the functions (f z
are both
defined on
The following now holds for all elements respectively, P(C x1 ;y1 ),
(f z
Case 4. u)g.
Then in fact
since by assumption of the lemma the variable z differs from each of the variables
u. Thus the functions f z
x;y and f u
x;y commute since their schemes are disjoint.
This concludes the proof. 2
Consider now a normalized CSP P with a linear ordering OE on its variables and the
corresponding CSP P OE . To be able to apply the above lemma we order in an appropriate
way the f^t_{r,s} functions, where the variables r, s, t are such that r OE s OE t. Namely, we put f^z_{x1,y1} before f^u_{x2,y2} if u OE z.
More precisely, let x_1, ..., x_n be the sequence of the variables of P_OE, that is x_1 OE x_2 OE ··· OE x_n. For m ∈ [3..n], let the list L_m consist of the functions f^{x_m}_{x_i,x_j} for some x_i and x_j. We order each list L_m arbitrarily and consider the list L resulting from appending L_n, L_{n−1}, ..., L_3 in that order. Then by virtue of the Semi-commutativity
Lemma 9 if the function f precedes the function g in the list L, then f semi-commutes
with g w.r.t. the componentwise ordering '.
We instantiate now the refinement of the SI algorithm for the compound domains
by the above-defined list L and each ? i equal to the constraint C i . We assume that
L has k elements. This yields the DIRECTIONAL PATH CONSISTENCY ALGORITHM
(DPATH) that, apart from of the different choice of the constituent partial orderings,
is identical to the DIRECTIONAL ARC CONSISTENCY ALGORITHM DARC of the previous
section. Consequently, the DPATH algorithm enjoys analogous properties as the
DARC algorithm. They are summarized in the following theorem.
Theorem 6 (DPATH Algorithm). Consider a CSP P with a linear ordering OE on its
variables. Let P OE := hC
The DPATH algorithm always terminates. Let P 0 := hC 0
where the
sequence of the constraints C 0
k is computed in d. Then
is the v c -least CSP in fP that is directionally path consistent
w.r.t. OE,
is equivalent to P . 2
As in the case of the DARC Algorithm Theorem 5 we do not need to assume here
that each domain is finite.
Assume now that x_1, ..., x_n is the sequence of the variables of P_OE. Denote the unique constraint of P_OE on x_i, x_j by C_{i,j}.
The above DPATH algorithm can then be rewritten as the following algorithm known
as the DPC algorithm of Dechter and Pearl [7]:
for m := n to 3 by −1 do
  for i := 1 to m − 2 do
    for j := i + 1 to m − 1 do
      C_{i,j} := C_{i,j} ∩ (C_{i,m} · (C_{j,m})^T)
    od
  od
od
Conclusions
In this article we introduced a general framework for constraint propagation. It allowed
us to present and explain various constraint propagation algorithms in a uniform way.
We noted already in Apt [1] that using such a single framework we can more easily automatically
derive, verify and compare these algorithms. In the meantime the work of Monfroy
and Réty [14] showed that this framework also allows us to parallelize constraint propagation
algorithms in a simple and uniform way. Additionally, as already noted to large
extent in Benhamou [3], such a general framework facilitates the combination of these
algorithms, a property often referred to as "solver cooperation".
By starting here the presentation with generic iteration algorithms on arbitrary partial
orders we clarified the role played in the constraint propagation algorithms by the
notions of commutativity and semi-commutativity. This in turn allowed us to provide
rigorous and uniform correctness proofs of the AC-3, PC-2, DAC and DPC algorithms.
In turn, by focusing on constraint propagation algorithms that always terminate we
could dispense with the notion of fairness considered in Apt [1].
The line of research presented here could be extended in a number of ways. First, it
would be interesting to find examples of existing constraint propagation algorithms that
could be improved by using the notions of commutativity and semi-commutativity.
Second, as already stated in Apt [1], it would be useful to explain in a similar way
other constraint propagation algorithms such as the AC-4 algorithm of Mohr and Henderson
[12], the AC-5 algorithm of Van Hentenryck, Deville and Teng [18], the PC-4
algorithm of Han and Lee [9], or the GAC-4 algorithm of Mohr and Masini [13]. The
complication is that these algorithms operate on some extension of the original CSP.
In fact, recently, Rosella Gennari (private communication) used the framework of this
paper to explain the AC-4 and AC-5 algorithms.
Finally, it would be useful to apply the approach of this paper to derive constraint
propagation algorithms for the semiring-based constraint satisfaction framework of
Bistarelli, Montanari and Rossi [5] that provides a unified model for several classes
of "nonstandard" constraints satisfaction problems.
Acknowledgements
Victor Dalmau and Rosella Gennari pointed out to us that in Apt [2] Assumptions A
and B on page 4 are not sufficient to establish Theorem 1. The added now Assumption
C was suggested to us by Rosella Gennari.
--R
The essence of constraint propagation.
the rough guide to constraint propagation.
Heterogeneous constraint solving.
Applying interval arithmetic to real
Bucket elimination: A unifying framework for structure-driven inference
Local and global relational consistency.
Comments on Mohr and Henderson's path consistency algorithm.
Consistency in networks of relations.
Programming with Constraints.
Artificial Intelligence
Good old discrete relaxation.
Chaotic iteration for distributed constraint propagation.
Networks of constraints: Fundamental properties and applications to picture processing.
Semantic foundations of concurrent constraint programming.
Data types in subdefinite models.
A generic arc-consistency algorithm and its specializations
--TR
Arc and path consistency revisited
Network-based heuristics for constraint-satisfaction problems
Comments on Mohr and Henderson's path consistency algorithm
An optimal k-consistency algorithm
The semantic foundations of concurrent constraint programming
A generic arc-consistency algorithm and its specializations
Local and global relational consistency
Semiring-based constraint satisfaction and optimization
Chaotic iteration for distributed constraint propagation
Using "weaker" functions for constraint propagation over real numbers
The essence of constraint propagation
Bucket elimination
A coordination-based chaotic iteration algorithm for constraint propagation
The Rough Guide to Constraint Propagation
Constraint Propagation for Soft Constraints
Arc Consistency Algorithms via Iterations of Subsumed Functions
Heterogeneous Constraint Solving
Data Types in Subdefinite Models
--CTR
Laurent Granvilliers, Frédéric Benhamou, Algorithm 852: RealPaver: an interval solver using constraint satisfaction techniques, ACM Transactions on Mathematical Software (TOMS), v.32 n.1, p.138-156, March 2006
Monfroy , Carlos Castro, Basic components for constraint solver cooperations, Proceedings of the ACM symposium on Applied computing, March 09-12, 2003, Melbourne, Florida
Zhendong Su , David Wagner, A class of polynomially solvable range constraints for interval analysis without widenings, Theoretical Computer Science, v.345 n.1, p.122-138, 21 November 2005
S. Bistarelli , R. Gennari , F. Rossi, General Properties and Termination Conditions for Soft Constraint Propagation, Constraints, v.8 n.1, p.79-97, January
Krzysztof R. Apt , Sebastian Brand, Schedulers for rule-based constraint programming, Proceedings of the ACM symposium on Applied computing, March 09-12, 2003, Melbourne, Florida
Sebastian Brand , Krzysztof R. Apt, Schedulers and redundancy for a class of constraint propagation rules, Theory and Practice of Logic Programming, v.5 n.4-5, p.441-465, July 2005
Antonio J. Fernndez , Patricia M. Hill, An interval constraint system for lattice domains, ACM Transactions on Programming Languages and Systems (TOPLAS), v.26 n.1, p.1-46, January 2004 | constraint propagation;generic algorithms;commutativity |
372075 | Total System Energy Minimization for Wireless Image Transmission. | In this paper, we focus on the total-system-energy minimization of a wireless image transmission system including both digital and analog components. Traditionally, digital power consumption has been ignored in system design, since transmit power has been the most significant component. However, as we move to an era of pico-cell environments and as more complex signal processing algorithms are being used at higher data rates, the digital power consumption of these systems becomes an issue. We present an energy-optimized image transmission system for indoor wireless applications which exploits the variabilities in the image data and the wireless multipath channel by employing dynamic algorithm transformations and joint source-channel coding. The variability in the image data is characterized by the rate-distortion curve, and the variability in the channel characteristics is characterized by the path-loss and impulse response of the channel. The system hardware configuration space is characterized by the error-correction capability of the channel encoder/decoder, number of powered-up fingers in the RAKE receiver, and transmit power of the power amplifier. An optimization algorithm is utilized to obtain energy-optimal configurations subject to end-to-end performance constraints. The proposed design is tested over QCIF images, IMT-2000 channels and 0.18m, 2.5 V CMOS technology parameters. Simulation results over various images, various distances, two different channels, and two different rates show that the average energy savings in utilizing a total-system-energy minimization over a fixed system (designed for the worst image, the worst channel and the maximum distance) are 53.6% and 67.3%, respectively, for short-range (under 20 m) and long-range (over 20 m) systems. | Introduction
There is widespread use of portable wireless technology in the form of cellular phones, wireless
networks, wireless surveillance systems and other wireless devices. In order to avoid frequent
recharging and possible down-time, the power consumption of these systems must be minimized.
Currently, most low-power wireless systems are designed by minimizing the transmit power, since
it is usually the most significant component of the total system power consumption. In addition
to hardware design techniques, powerful channel-coding algorithms provide coding gain and significantly
reduce transmit power. Complex source coding algorithms provide the reduction in data
rates necessary for efficient use of bandwidth. Traditionally, the power consumption due to these
complex signal processing algorithms has been considered negligible. However, as we move towards
an era of micro-cells and pico-cells, the power consumption of the baseband processing unit becomes
comparable to that of the power amplifier. Efforts such as Bluetooth [1] for short-range radio links
in portable devices such as mobile PCs and mobile phones, as well as the HomeRF [2] open industry
specification for RF digital communications in the home, are some examples of steps toward
pico-cell communications. Therefore, techniques which jointly minimize the power consumption in
the power amplifier and the baseband processing unit are desirable.
Increasing the complexity of the channel coding algorithm to increase coding gain and thereby lower the transmit power increases the baseband processing power. Similarly, at short distances, using a less
complex source coder which keeps some redundancy in the encoded stream may allow us to use
a less complex channel coder, thereby providing more energy for the power amplifier. In [3], Lan
and Tewfik found that, for low transmission power, a less efficient source coder, which consumes
less power and achieves less compression, combined with a channel coder that adds very little
redundancy, is energy-optimal. Therefore, the task of minimizing the baseband processing power
and minimizing the power amplifier power are inseparable.
The joint minimization of the total system power is subject to the system resource constraint
of bandwidth and the performance constraint of end-to-end image quality. For source image data
requiring a high rate to achieve a desired image quality, the channel coder can only introduce a
small amount of redundancy due to the bandwidth constraint, and more power may be drawn by
the power amplifier. There are operating conditions for which coding gain is more important than
transmit power, and other operating conditions for which leaving the redundancy in the source
consumes less power than adding redundancy in the channel coder. An adaptive technique which
chooses the optimal system configuration based on the input and channel condition is necessary to
achieve maximum performance in all regimes.
In this work, we focus on the design of a reconfigurable wireless image transmission system that
achieves low power by exploiting both the variabilities in the image data and the multipath wireless
channel. Reconfigurable digital signal processing (RDSP) [4] has been proposed as a low-power
technique that exploits variabilities in the environment. Energy savings are achieved by tailoring
the architecture to the environment. In [5]- [6], dynamic algorithm transforms (DAT) were proposed
as a systematic framework for designing low-power reconfigurable signal processing systems. DAT
requires the definition of: (1) input state-space, (2) configuration-space, (3) energy models and (4)
DSP algorithm performance models. The input state-space models the input variabilities, and the
configuration-space is the set of possible hardware configurations. Energy models and performance
models are employed to obtain estimates of the energy consumption and a performance metric
(such as distortion or bit-error rate).
In order to exploit the relationship between the configuration space, the input space and the
DSP algorithm performance metric, we employ recent advances in joint source-channel coding (JSCC). Past work on joint source-channel coding has shown that the tradeoff between data and redundancy can be exploited to design optimal realizable systems [8],
[9], [10]. The application of joint source-channel coding in heterogeneous, multi-media environments
leads to general matching techniques, which have been the focus of some current research [7], [11],
[12]. In [7], we developed a general matching technique which can be applied to a wide variety of
source coders and channel coders. These methods maximize the end-to-end quality of a transmitted
image subject to constraints on the transmit power and bandwidth. In this work, we combine these
methods with techniques for RDSP to jointly optimize the baseband processing power and the
transmit power of a reconfigurable system.
The proposed system can be motivated as follows. In Figure 1(a), we have plotted the rate-distortion
curves for different images encoded via a set partitioning in hierarchical trees (SPIHT) [13]
encoder, which is a state-of-the-art compression scheme. It can be seen that the encoder needs to
operate at different rates to obtain a specified distortion (in terms of peak signal-to-noise-ratio
PSNR) of 35dB. Similarly, channel variabilities are characterized by different values of E_b/N_0
(where E b is the energy per bit and N 0 is the noise power spectral density) in Figure 1(b). Each
curve in Figure 1(b) corresponds to a different number of correctable errors, t, for a Reed-Solomon
code. It can be seen that the channel encoder needs to operate at a different t (for different E_b/N_0) to meet a specified bit-error-rate (BER) constraint of 10^-3. Thus, if the image changes from
"coastguard" to "carphone" and the channel E b /N 0 changes from 5dB to 7dB, then to keep the
PSNR the same, one must adjust the source rate, the number of correctable errors, the transmit
power, and the number of RAKE fingers (in case of multipath channels). There are numerous
choices for these parameters that can meet the end-to-end distortion constraint. The optimal
parameters which minimize the total power consumption are obtained via joint source-channel
coding techniques and dynamic algorithm transforms developed in this paper.
In this paper, we develop a technique for optimizing power consumption in flexible systems with
performance constraints and time-varying inputs. We apply our energy minimization technique to
image transmission over an indoor wireless link to demonstrate the power savings due to our
approach. The optimization problem is formulated in terms of the system parameters and includes
the reconfigurable blocks. For the image transmission example, the source codec, channel codec,
RAKE receiver and power amplifier are the potentially reconfigurable blocks. Energy consumption
is minimized by solving an optimization problem which has energy per pixel as the objective
function and constraints on the desired distortion and maximum available bandwidth. The proposed
reconfigurable system is tested via the evaluation methodology in [14].
The rest of the paper is organized as follows. In the next section, we present an overview of
the system. In section 3, we present the optimization for determining the optimal configuration
parameters. In section 4, we present the key ingredients such as energy and performance models.
The simulation setup and results are presented in section 5.
We consider the indoor wireless communication system shown in Figure 2 in which two terminals
communicate with each other. Using DAT and JSCC methods, we formulate the problem of
efficient image transmission subject to an end-to-end performance requirement. We define (a) an
input state space S, (b) a configuration space C, (c) models for the DSP algorithms in order to
estimate distortion D, and (d) energy models for estimating energy consumption E .
We also make a few assumptions regarding the problem which make our technique more relevant.
We consider a system operating (1) over short distances of less than 100m between the terminals (so
that the processing power is comparable to the transmit power), (2) in slow/block fading channels
(so that the channel impulse response remains constant for the duration of an image), and (3) with a
low bit-rate feedback channel which correctly updates the channel condition and image variabilities
for each image.
In this section, we present S, C, and the energy optimization problem for a wireless image
transmission system. In section 4.2, models are presented for end-to-end system performance, and
energy models are presented for various system components.
2.1 Input State Space
The input state space S is a collection of all input states or scenarios of interest for which reconfiguration
is desired. The size of S depends upon the input variabilities and the granularity of the
reconfigurable hardware. For the problem of wireless image transmission, the inputs which can
affect the power consumption or the system performance are the channel response and the image
variability.
Based on our general approach to optimization of image transmission systems with rate and
energy constraints in [7], we characterize the image variability in terms of the operational rate-distortion
(R-D) curve (see Figure 1(a)). The R-D curve determines the (zero-error) bit-rate necessary
to achieve a particular mean-squared error distortion. Characterizing the image variability in
terms of the source R-D curve enables us to use a general JSCC formulation of the tradeoff between
data and redundancy.
The state vector s for the wireless image transmission system collects the operational R-D characterization of the image together with the multipath channel parameters. The h_i's are the complex gains in the multipath channel impulse response, and tau_i is the delay corresponding to the i-th path. In addition to multipath, there is typically also
multiuser interference which can be exploited by employing a reconfigurable multiuser receiver. In
[Figure 1 plots: (a) average distortion per pixel vs. source rate (bits per pixel) for the Akiyo, Carphone, and Coastguard images; (b) bit error rate vs. E_b/N_0.]
Figure 1: (a) Source variabilities (rate-distortion curves) and (b) channel variabilities (bit-error-rate curves).
[Figure 2 block diagram: two terminals (images in/out) linked by an indoor wireless multipath channel, with feedback of configurations.]
Figure 2: Indoor wireless image transmission system.
this paper, for the sake of simplicity, we exploit only the variabilities due to source and multipath
channels and consider the worst-case multiuser interference.
2.2 Configuration Space
We must define the set of possible adaptations of the system to the variable input. The configuration
space C is the set of hardware configurations for the terminal. The definition of configuration vector
and size of C depends on the DSP algorithms used in the system and the flexibility of the hardware
platform. The maximum benefit of the system optimization approach depends on the size of C.
For the image transmission problem, we consider the reconfigurable system shown in Figure 3.
In this block diagram, the blocks within dotted lines in Figure 3(a) are reconfigurable and the others
are fixed/hardwired. The analog and RF blocks (not shown) would include a CDMA modulator,
an RF modulator and demodulator, square-root raised cosine filters, an analog-to-digital converter,
a digital-to-analog converter and a low-noise amplifier. These blocks are not reconfigurable and are
assumed to have fixed energy consumption. The energy consumption due to these blocks is not
included in the calculations.
The source codec is implemented on a programmable processor and is assumed to have a fixed
energy consumption. All other digital blocks except the source codec are assumed to have an ASIC
implementation in 0.18-µm, 2.5-V standard-cell CMOS technology.
For the source codec we use the well-known SPIHT coder [13] which has good performance on
natural images. For transform-based source coders such as SPIHT, most of the energy consumption
is due to the frequency transform or subband decomposition, and energy consumption due to
quantization is small. Since the wavelet transform component of the SPIHT coder is fixed, fixed
energy consumption is a good assumption.
For the channel codec, we employ Reed-Solomon block codes to provide error correction for
bursty error channels. In section 4.1, we describe a reconfigurable architecture for the codec in
which the energy consumption is proportional to the redundancy added by the codec.
The power limitations of the terminal depend on whether it is a mobile or a base station. A
mobile utilizes battery power which has limited life, whereas a base has access to a stationary power
source with unlimited life. We assume that both mobiles and bases have a fixed maximum transmit
power. The configuration vectors for the mobile (m) and the base-station (b), respectively, are
defined as follows:
R_s is the source rate (in bits per pixel), t_enc is the maximum number of correctable symbols per block at the channel encoder, P_t is the transmit power (the output power delivered by the power amplifier), and c_rake is the configuration vector for the RAKE receiver (c_rake(i) = 0 implies that the i-th finger is powered down). The transmit parameters of the mobile are R_s, t_enc, and P_t. The receive parameters of the mobile are the number of protection symbols per block at the
channel decoder t dec , and c rake . The parameter c rake depends on the transmit power of the other
device and the channel condition, and t dec is same as t enc of the other device (either the base-station
or the other mobile). Further, M and B are the configuration-spaces (defined as set of all possible
configuration vectors) for the mobile and base-station, respectively.
Figure
3(b) shows the block diagram of the controller. The controller adapts the system to
the changes in the input by reconfiguring or changing the parameters of the various blocks. The
first block in the controller quantizes the input variabilities as determined by the source coder and
channel decoder. The energy-optimum configuration vectors are either obtained by an optimization
algorithm running in real-time or from a precalculated look-up table.
2.3 Energy Optimization Problem
We develop the energy optimization problem by expressing the total energy consumption and
end-to-end performance of the system in terms of the system configuration and the given input.
We first describe the general optimization problem for terminal to terminal communication, then
follow up with simplifications for di#erent scenarios. The most general problem is two-way mobile-
to-mobile communication, since the configuration vector for the base is a special case of the mobile
configuration vector where the transmit power and number of RAKE fingers are free variables.
2.3.1 General optimization scenario : mobile-to-mobile scenario
Consider two-way mobile-to-mobile communication. The state vectors for the system, s m1 and
s m2 , describe: (1) the rate-distortion points of the image transmitted by mobile 1 , and the channel
response from mobile 1 to mobile 2 , and (2) the rate-distortion points of the image transmitted
by mobile 2 , and the channel response from mobile 2 to mobile 1 , respectively. We determine the
configuration vectors, m1 opt (s m1 ) and m2 opt (s m2 ), which minimize the energy consumption per
pixel at each mobile while satisfying constraints on (1) distortion Dm1 per pixel for the image
transmitted by mobile 1 (distortion is defined as mean-squared error between the original image at
the transmitter input and the reconstructed image at the receiver output) and distortion Dm2 per
pixel for the image transmitted by mobile 2 , and (2) total rate Rm1,tot and Rm2,tot (total rate is
defined as the sum of source and channel bits per pixel) at mobile 1 and at mobile 2 , respectively.
Therefore, the energy optimization problem can be written as
Rm1,tot (m1) # R o1 ,
Rm2,tot (m2) # R o2 , (5)
where E(m1,m2) is the sum of the energy consumption at each mobile
The energy consumption at mobile 1 is given by
fixed are the energy consumption of the RS encoder, RS
decoder, RAKE receiver, power amplifier, and fixed components respectively. The RS encoder
power consumed in the mobile is for encoding the data to be transmitted. The RS decoder power
consumed at mobile 1 is for decoding the received data from mobile 2 . At present, we do not include
fixed due to the source encoder and the other hardwired blocks in the optimization problem.
Any flexibility in these fixed system components would increase the benefits of an optimization
approach. The optimization problem in (5) represents the core of DAT where energy is minimized
subject to a system-level performance constraint. The constraints themselves are satisfied via joint
source-channel coding algorithms.
Assuming that the two users are exchanging independent information, the energy optimization
above can be partitioned into two one-way communication problems, one for each direction. For
the image transmitted from mobile 1 to mobile 2 ,
Rm1,tot (m1) # R o1 . (7)
Mobile 1 receives channel information from mobile 2 and uses it to compute R s , t m1
enc
its own
configuration as well as c rake for mobile 2 . Mobile 2 combines the c rake information transmitted
by mobile 1 with R s , t m2
to obtain its configuration vector. An optimization algorithm for
determining the configuration vectors m1 opt and m2 opt is described in the next section.
A performance criterion which may be more desirable to the end user is the probability of
delivering a particular quality of service. The user may desire that the distortion is above some
level D which is close to zero. The optimization problem then becomes
Rm1,tot (m1) # R o1 ,
Rm2,tot (m2) # R o2 , (8)
This problem separates into an optimization problem for the image transmitted by mobile 1 and an
optimization problem for the image transmitted by mobile 2, where P(D > D_o) is the probability
that the rate corresponding to the desired maximum distortion D_o is not successfully decoded. Joint
source-channel coding does not play a significant role in this situation because the source coder
must operate at a specified rate R s (as determined by the rate-distortion points) and the channel
codec is energy-optimized. In this scenario, it is possible that for large values of P_o the
bandwidth is under-utilized.
2.3.2 Special Case: Mobile-to-Base-Station Scenario
Consider two-way communication between a mobile and a base station. The optimization problem
for the system is just a special case of (5) in which one of the mobiles, say mobile 2 , is a base-
station. As in the mobile-to-mobile scenario, the system optimization divides into two problems,
one for the base station, and one for the mobile. Because the power consumption at the base
station is not considered an issue here, these problems are special cases of the problem in (7)
with E_rsenc(t_enc), E_rsdec(t_enc), and E_rake(c_rake) at the base station all set to zero. Otherwise, the
optimization problem and its solution algorithm remain the same as in the general case. So,
for the image transmitted from the mobile over the uplink channel,

  minimize E_m(m)
  subject to D_m(m) ≤ D_o,
             R_m,tot(m) ≤ R_o.
2.3.3 Optimization Example
To illustrate the necessity of the optimization and to clarify our methodology, we provide the simple
example of one-way mobile-to-mobile communication with a configuration space of three choices.
Figure 4 shows the total energy consumed at distances of 10m and 50m for images "carphone" and
"coastguard" by the three different configurations m_1, m_2, and m_3 with the parameters listed in Table 1.
Note that R_s is not listed, since it is related to t_enc by the rate constraint R_tot ≤ R_o.
Table 1: Configuration space parameters.
Configuration m_1 supports a PSNR = 35dB for both images at 10m, but not at 50m. Configuration m_2
supports "coastguard" at 50m and 10m, but does not support "carphone" at either distance because the
rate R_tot is too small. Configuration m_3 supports both images at both distances, but is not
energy-efficient for short distances. Because of image variability and channel
variability, reconfiguration is required for energy efficiency. Optimal configuration parameters can
be determined either by brute-force search or with an optimization algorithm. When the sizes of
the configuration space and the input state space are small, a brute-force search is feasible; oth-
erwise, an optimization algorithm is required. In this paper, the configuration space has a size given by the
number of protection-symbol settings times the number of transmit-power (P_t) settings times the number of
c_rake settings, and the state space has a size of 9 × 504, corresponding to the number of rate-distortion
points and the length of the channel response.

Figure 3: A reconfigurable multimedia system: (a) the transceiver and (b) the controller.

Figure 4: Performance of three configurations defined in Table 1 (total energy and PSNR for "carphone" and "coastguard" at 10m and 50m).
3 Optimization Algorithm
All the energy optimization problems described are solved by a feasible directions algorithm [15].
At each step of the algorithm, the gradient ∇E of the objective function E(t_enc^m, P_t^m) is computed with
respect to t_enc^m and P_t^m, and a feasible direction d is chosen so that the objective function decreases
(i.e., ∇E^T d ≤ 0). The update at each step is

  [t_enc^m, P_t^m] <- [t_enc^m, P_t^m] + α d, (11)

where d is derived from ∇E. The derivatives in (11) are computed numerically as
dE/dt_enc ≈ (E(t_enc + Δt, P_t) - E(t_enc, P_t)) / Δt and
dE/dP_t ≈ (E(t_enc, P_t + P_t,min) - E(t_enc, P_t)) / P_t,min,
where P_t,min is the minimum transmit power. The parameters α > 0 and θ determine the stepsize and the angle
between the direction and the gradient, respectively. For the image transmission system studied here, we chose
Δt to be a unit step.
Taking a step in the feasible direction does not guarantee that the distortion constraint is met.
Assuming that we start at an initial state [t_enc^m, P_t^m] which satisfies the constraint, at each update,
the distortion is measured and compared to the constraint. If the distortion constraint is not met,
then the update is not taken and the stepsize α is decreased towards zero until the distortion
constraint is met. For the image transmission system studied here, we chose a simple stepsize reduction and
reset the parameter to its initial value for each new gradient computation. Many
variations of this simple feasible directions algorithm are possible.
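To make the procedure concrete, the following is a minimal Python sketch of one such variation; the function names (energy, distortion), the stepsize-halving rule, and the stopping thresholds are illustrative assumptions rather than the exact procedure used in the paper.

def feasible_directions(energy, distortion, t0, p0, d_max, p_min,
                        alpha0=1.0, max_iter=100):
    """Minimize energy(t, p) subject to distortion(t, p) <= d_max.

    energy, distortion: callables built from the models of Section 4.
    t0, p0: initial protection symbols and transmit power (assumed feasible).
    p_min: minimum transmit power, also used as the numerical step in p.
    """
    t, p = t0, p0
    for _ in range(max_iter):
        # Numerical gradient: unit step in t, p_min step in transmit power.
        dE_dt = energy(t + 1, p) - energy(t, p)
        dE_dp = (energy(t, p + p_min) - energy(t, p)) / p_min
        d = (-dE_dt, -dE_dp)          # descent direction
        alpha = alpha0
        while alpha > 1e-6:
            t_new = max(0, round(t + alpha * d[0]))
            p_new = max(p_min, p + alpha * d[1])
            if distortion(t_new, p_new) <= d_max:   # constraint still met?
                t, p = t_new, p_new                 # accept the update
                break
            alpha /= 2.0              # shrink the stepsize toward zero
        else:
            break                     # no feasible improving step remains
    return t, p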
To determine the optimal c rake , we apply the optimization algorithm for each of the possible
values and choose the best one. To apply the optimization algorithm to the P (fail) case, the
distortion constraint is replaced with the probability constraint

  P(R < R_Do) ≤ P_o,

where R is the number of consecutive bits decoded correctly, and R_Do is the rate corresponding to
distortion D_o determined by the rate-distortion points.
The energy and performance models employed in the estimation of E(m,b), E(m1,m2), D(m, s m ),
and D(b, s b ) are described next.
4 Energy and Performance Models
In this section, we present relationships of the energy consumption and distortion to the configuration
signals. The high-level estimates of the energy consumption are obtained via energy models
for the power-hungry blocks in the RS codec, RAKE receiver, and power amplifier. These models
are obtained via real-delay simulations [16] of the hardware blocks employing 0.18-μm, 2.5V
CMOS standard cells obtained from [17]. Similarly, performance models of the source coder and
the channel coder are employed to estimate the average distortion per pixel for the various states
and configurations.
4.1 Energy Models
Energy models are employed to relate the configuration vector to a high-level estimate of the energy
consumption of the algorithm. It is well known that the energy consumption of the digital circuits
is dependent upon the input signal statistics. However, in the case of Galois field multipliers and
adders, it is found that the energy consumption varies by less than 5%, when the correlation of
the input bit-stream is varied from 0.0 to 0.9. This is because when the correlated bit-stream
is converted to Galois field symbols, the correlations among successive bits are lost, thus making
Galois field symbols uncorrelated. Therefore, in the following, we present energy models which are
independent of the input signal statistics, but are functions of input precisions. The energy models
for the RS codec, the RAKE receiver, and the power amplifier are presented next.
1. RS Encoder/Decoder
The complete block diagram of the Galois field components and the RS encoder and decoder are
derived in the Appendix and can also be found in [18]. We summarize the results here to maintain
the flow of the discussion. The energy models for adder, multiplier and inverse blocks over Galois
field are derived by simulating these blocks via a real-delay gate-level simulator MED [16]
and are given as functions of m, where m is the number of bits per symbol. These models are employed to obtain
an estimate of the energy consumption of the RS encoder and decoder in different configurations. The energy
consumption of a bit-parallel RS encoder architecture, E_rsenc, is expressed in terms of these component models
in (16). Energy consumption of the RS decoder is due to the syndrome computation (SC) block,
Berlekamp-Massey (BM) block and Chien-Forney (CF) block, and its total is given in (17).
The energy consumption of the RS codec is obtained as a sum of E_rsenc and E_rsdec in (16) and (17) as follows:

  E_rscodec = E_rsenc + E_rsdec.
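The exact expressions behind (16) and (17) depend on the operation counts of the encoder and decoder architectures derived in the Appendix; the sketch below only illustrates how such an estimate composes per-operation Galois-field energies, and its operation counts are rough placeholder approximations.

def rs_codec_energy(n, k, e_add, e_mult, e_inv):
    """Rough RS(n, k) codec energy estimate from Galois-field operation counts.

    e_add, e_mult, e_inv: per-operation energies (functions of m in the paper).
    The operation counts below are simple illustrative approximations only.
    """
    t = (n - k) // 2                      # error-correction capability
    # Encoder: LFSR-style polynomial division, ~(n - k) mult/add per symbol.
    enc = k * (n - k) * (e_mult + e_add)
    # Decoder: syndrome computation + Berlekamp-Massey + Chien/Forney.
    syndrome = 2 * t * n * (e_mult + e_add)
    berlekamp = (2 * t) * (2 * t) * (e_mult + e_add) + 2 * t * e_inv
    chien_forney = n * 2 * t * (e_mult + e_add) + t * e_inv
    dec = syndrome + berlekamp + chien_forney
    return enc + dec                      # E_rscodec = E_rsenc + E_rsdec

For example, rs_codec_energy(255, 223, 1.0, 3.0, 10.0) gives a relative figure for an RS(255, 223) code with unit-normalized operation energies.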
2. RAKE Receiver
Energy consumption of the RAKE receiver in Figure 5(a) is dependent upon the powered-up
fingers (i.e., taps for which c rake,i = 1). Figure 5(b) shows the architecture of a finger in a RAKE
receiver. The received signal r[n] is correlated with the correspondingly delayed spreading sequence
and then multiplied by a complex gain h_i^*. The outputs from all the fingers are added together and
passed to the slicer to obtain the bits. If L is the number of fingers, then the energy consumption
of the RAKE receiver is given by

  E_rake(c_rake) = Σ_{i=1}^{L} c_rake,i E_finger,i,

where E_finger,i is the energy consumed by the i-th finger of the RAKE receiver. The adder and
multiplier energy models [5] are obtained via real-delay gate-level simulations of the blocks designed
for 0.18-μm, 2.5V CMOS technology from [17].
3. Power Amplifier
Energy consumption of the power amplifier is characterized by its power-added efficiency (PAE)
η (defined as the ratio of the output power P_t to the power drawn from the supply). Power
amplifiers are typically designed to maximize η at the maximum output power P_t,max, with η being
a decreasing function of P_t. The functional relationship between η and P_t (also called a
power-added-efficiency curve) is usually provided in vendor data-books. For the digitally-programmable
power amplifier in [19], the efficiency η(P_t) can be approximated in terms of η_max and the ratio
P_t/P_t,max, where η_max is the maximum efficiency and P_t,max is the maximum transmit power. If f_bit
is the bit-rate, then the power amplifier energy consumption per bit E_pa is given by

  E_pa = P_t / (η(P_t) f_bit).
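For concreteness, the following sketch computes E_pa from an assumed efficiency curve; the square-root roll-off used for η(P_t) is a stand-in for the vendor curve of [19], and the numerical values in the example are illustrative only.

import math

def pa_energy_per_bit(p_t, p_t_max, eta_max, f_bit):
    """Power-amplifier energy per transmitted bit for output power p_t (W).

    The square-root shape of eta(p_t) is an assumed stand-in for the
    measured power-added-efficiency curve of the device.
    """
    eta = eta_max * math.sqrt(p_t / p_t_max)
    supply_power = p_t / eta          # PAE: eta = P_t / P_supply
    return supply_power / f_bit       # joules per bit

# Illustrative numbers: 5 mW output, 100 mW-class amplifier, 50% peak
# efficiency, 760.32 kbits/s bit rate (Table 3).
print(pa_energy_per_bit(5e-3, 100e-3, 0.5, 760.32e3))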
4.2 Rate Models
Performance models are employed to relate the configuration vector to the performance metrics such
as distortion, bit error rate (BER) or signal-to-noise ratio (SNR). In the following, we present
performance models for the source coder, channel coder, RAKE receiver and power amplifier.
1. Source Coder
The performance metric for the image transmission system is the end-to-end average distortion
per pixel. The SPIHT source coder produces a progressive bitstream which improves the quality of
the decoded image as more bits are received correctly. When a bit error occurs in the transmission,
all future bits are lost due to an embedded property of the bitstream. So, the end-to-end average
distortion per pixel can be computed [7] as follows:

  D_av = Σ_{i=0}^{M-1} p(i) D(i) + (1 - p_e,c)^M D(M),

where M is the number of codewords transmitted per image, D(i) is the distortion value if only
the first i codewords are correctly received, p(i) is the probability of receiving the first i codewords
correctly and the (i+1)-th codeword in error, and D(M) is the residual distortion because of a finite
source rate R_s. The D(i) values are obtained from the image-specific operational rate-distortion
curve of the SPIHT coder. The p(i) values are obtained from the error probability of the channel
code. For block-error codes, the probability of receiving i consecutive blocks correctly is given by

  p(i) = (1 - p_e,c)^i p_e,c,

where p_e,c is the probability of error for the channel code.
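A short sketch of this computation, using the distortion model reconstructed above and assuming the operational rate-distortion values D(0), ..., D(M) are available as a list, is:

def expected_distortion(d_values, p_block_error):
    """End-to-end average distortion per pixel for an embedded bitstream.

    d_values: [D(0), D(1), ..., D(M)] from the operational R-D curve, where
              D(i) is the distortion if only the first i codewords decode.
    p_block_error: probability p_e,c that a channel codeword is in error.
    """
    M = len(d_values) - 1
    d_av = 0.0
    for i in range(M):
        p_i = (1.0 - p_block_error) ** i * p_block_error   # first error at i+1
        d_av += p_i * d_values[i]
    d_av += (1.0 - p_block_error) ** M * d_values[M]        # all M correct
    return d_av

# Example with a toy monotone R-D curve and a 1% codeword error rate.
print(expected_distortion([100.0, 60.0, 35.0, 20.0, 12.0], 0.01))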
2. Source Estimation: Operational Rate-Distortion
The variability in the input data is captured by operational rate-distortion points. There are
several ways of estimating these rate-distortion points. The straight-forward approach is to apply
the source encoder and decoder at various rates, calculate the distortion between the original and
decoded images, and interpolate between these (r, d) points. In [20], Lin et al. use a cubic spline
interpolation to get smooth rate-distortion curves for gradient-based optimization algorithms. Since
multiple (r, d) points must be measured by encoding and decoding at these rates, the processing
power required for the computation is significant.
We consider some simplifications and special cases. For transform-based source coders such as
SPIHT and JPEG, the error in the encoded image is primarily due to quantization, especially at
high rates. The transform-domain quantization error energy at various rates provides an estimate
of the operational rate-distortion points. In the SPIHT coder, wavelet transform coefficients are
encoded hierarchically according to bit-planes, starting from the most significant bit-plane. The
compressed bitstream has the embedded property of containing all the lower rates. Due to the
embedded nature of this compressed bitstream, shortening the compressed bitstream is the same
as compressing to the lower rate. In terms of estimating the (r, d) points, this implies that the
encoder need only be run at the maximum rate.
We can compute the error energy at the encoder progressively from lower rates to higher rates
as follows. Let {c_0, ..., c_{D-1}} be the set of original wavelet coefficients, where D is the number of
pixels in the image. Let n be the number of bits to encode the largest coefficient. At zero source
rate, the distortion energy

  E_0 = Σ_{i} c_i^2

is the total energy in the wavelet coefficients. When the most significant bit-plane is encoded, the new
distortion energy is

  E_1 = Σ_{i} (c_i - ĉ_i)^2, where the reconstructed coefficients are ĉ_i = I(c_i) sign(c_i) 2^{n-1}.

The indicator function I(c) just determines whether the particular coefficient c has a 1 in the
most-significant bit-plane. By expanding and collecting terms, we can write

  E_1 = E_0 - Σ_{i: I(c_i)=1} (2^n |c_i| - 2^{2(n-1)}) = E_0 - 2^n Σ_{i: I(c_i)=1} |c_i| + M 2^{2(n-1)},

where M is the number of coefficients with a 1 in the most-significant bit-plane.
For the coefficients not being updated, ĉ_i = 0 and the corresponding terms cancel, so the summation is over
relatively few values. The summation can easily be computed by shifting the coefficients being updated, masking
off the error, and accumulating (M does not have to be computed explicitly). For the next most significant
bit-plane, we update E_1 based on the number of coefficients with a 1 at this bit-plane and so on.
The introduction of this calculation does not require any additional memory, since only the original
coefficient values need to be accessible. To convert error energy to distortion, we simply divide by
the number of total image pixels. The rate at which a particular distortion occurs is known as a
consequence of the encoding (length of the output bu#er). We also point out the update above can
occur at finer steps than at each bit-plane. So, the complete rate-distortion curve can be found
using this technique in real-time to progressively update the error energy as more significant bits
are encoded.
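A compact, non-incremental sketch of the same computation (recomputing the truncation error at each bit-plane rather than updating it with only the newly significant coefficients, for readability) is:

def progressive_rd_energies(coeffs):
    """Residual error energy after each encoded bit-plane.

    coeffs: wavelet coefficients of the image (iterable of numbers).
    Returns [E_0, E_1, ...], where E_k is the error energy after the k most
    significant bit-planes are encoded.  Reconstruction keeps only the bits
    seen so far (no midpoint offset), matching the simplified model above.
    """
    mags = [abs(c) for c in coeffs]
    n = int(max(mags)).bit_length()          # bits of the largest coefficient
    energies = [sum(m * m for m in mags)]    # E_0: total coefficient energy
    for plane in range(n - 1, -1, -1):
        error = 0.0
        for m in mags:
            recon = (int(m) >> plane) << plane   # keep bit-planes >= plane
            error += (m - recon) ** 2
        energies.append(error)
    return energies

# Example: per-bit-plane error energies for a toy coefficient set.
print(progressive_rd_energies([31, 12, 7, 3, 1, 0]))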
3. Channel Coder
The performance metric for the channel coder is the probability of error p e,c . For a Reed-Solomon
code with codeword length of 2^m - 1 symbols and error correction capability of t symbols, we have [21]

  p_e,c = Σ_{i=t+1}^{2^m-1} C(2^m-1, i) p_e,s^i (1 - p_e,s)^(2^m-1-i),

where p_e,s is the symbol error probability and is computed as

  p_e,s = 1 - (1 - p_e,b)^m,

where p_e,b is the uncoded bit error probability computed as (for AWGN channels and a binary phase
shift keying signaling scheme) [22]

  p_e,b = Q[ sqrt(2 SNR_o) ],
where SNR o is the signal-to-noise ratio at the output of the RAKE receiver, and Q[x] is the
probability that a standard Gaussian random variable has value greater than x.
4. Power Amplifier and RAKE Receiver
The performance for the power amplifier and RAKE receiver is defined by SNR o and is computed
as

  SNR_o = (P_t / (PL(d) P_N)) Σ_{i=1}^{L} c_rake,i |h_i|^2, (30)

where P_t and P_N are the transmit power and noise power, respectively, PL(d) is the propagation
loss for distance d between the transmitter and receiver, L is the number of fingers in the RAKE
receiver, c_rake,i is the configuration signal for the i-th RAKE finger, and h_i is the channel coefficient for
the i-th path.
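Putting the receiver and channel-coder models together, a sketch of the error-probability chain from SNR_o to the codeword error rate p_e,c, using the standard Q-function and the binomial-sum form given above, is:

import math

def q_function(x):
    """Tail probability of a standard Gaussian, Q(x)."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def rs_codeword_error(snr_o, m, t):
    """Codeword error probability of an RS code over GF(2^m) with t-symbol correction."""
    n = 2 ** m - 1
    p_bit = q_function(math.sqrt(2.0 * snr_o))          # uncoded BPSK BER
    p_sym = 1.0 - (1.0 - p_bit) ** m                     # symbol error probability
    return sum(math.comb(n, i) * p_sym ** i * (1.0 - p_sym) ** (n - i)
               for i in range(t + 1, n + 1))             # more than t symbol errors

# Example: 8-bit symbols, 8-symbol correction, SNR_o of 4 (about 6 dB).
print(rs_codeword_error(4.0, 8, 8))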
5 Application to Indoor Office Channels
In this section, we simulate the proposed reconfigurable multimedia system in an indoor office
environment. Our goals are to determine the improvement obtained in a realistic setting using
our total system optimization. We compare with both a conventional worst-case fixed design and
a system employing power control. We find that significant power reduction is possible in the
short range. These results are understood better by studying more detailed behavior such as
comparing the power amplifier power consumption with the power consumption of the digital
blocks, power consumption of the individual blocks, and the variability of the optimal parameters
over the images and over distance. Section 5.1 presents the simulation setup and simulation results
follow in section 5.2.
5.1 Simulation Setup
The proposed receiver is simulated for an indoor office environment. The propagation models
for indoor office channels are obtained from the IMT-2000 evaluation methodology [14]. In these
models, the propagation effects are divided into two distinct types: (1) mean path loss and (2)
variation in the signal due to multipath effects. The mean path loss PL for the indoor office
environment is modeled as follows:

  PL(dB) = 37 + 30 log10(d) + 18.3 q^((q+2)/(q+1) - 0.46),

where d is the transmitter and receiver separation (in meters) and q is the number of floors in the path.
This represents a worst-case model from an interference point of view. We assume that distance d
can vary from 2m to 100m.
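A small helper for this path-loss model (as reconstructed here from the IMT-2000 indoor office test environment), combined with the SNR expression in (30), might look as follows; the example numbers are illustrative.

import math

def indoor_office_path_loss_db(d, q=0):
    """Mean path loss (dB) for separation d (meters) and q floors in the path."""
    floor_term = 18.3 * q ** ((q + 2) / (q + 1) - 0.46) if q > 0 else 0.0
    return 37.0 + 30.0 * math.log10(d) + floor_term

def snr_out(p_t, noise_power, d, finger_gains, c_rake):
    """Receiver output SNR per (30): linear path loss, powered-up fingers only."""
    pl_linear = 10.0 ** (indoor_office_path_loss_db(d) / 10.0)
    combining = sum(c * abs(h) ** 2 for c, h in zip(c_rake, finger_gains))
    return p_t / (pl_linear * noise_power) * combining

# Example: 5 mW transmit power, -100 dBm noise, 10 m link, two active fingers.
print(snr_out(5e-3, 1e-13, 10.0, [1.0, 0.5], [1, 1]))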
The multipath effect is modeled via a channel impulse response which is characterized by a
delay-line model given in (2). The mean values of the relative delays and average powers
are specified in Table 2 [14]. Channel models A and B are the low delay spread and median
delay spread cases, respectively. Each of these two channels can occur in an actual scenario with a
probability specified in [14].

Path | Channel A: relative delay (ns), average power (dB) | Channel B: relative delay (ns), average power (dB)

Table 2: Indoor office multipath channels [14].
We assume that we are transmitting a database of images with 176 × 144 pixels per frame in
quarter common intermediate format (QCIF) obtained from [23].
It was found that a PSNR of 35dB provided a desirable level of visual quality and a total rate
R_o of 3 bits per pixel (bpp) is sufficient to obtain this PSNR. Some other system parameters are
given in Table 3.
Rates: bit rate 760.32 kbits/sec (binary phase shift keying); chip rate 12.16 Mchips/sec (length-16 spreading sequences)
Power amplifier: maximum efficiency η_max; low-power and high-power devices with different maximum transmit powers P_t,max
RAKE receiver and channel codec: 0.18-μm, 2.5V CMOS technology
Low-noise amplifier: noise figure = 5 dB, B.W. = 12.16 MHz
Constraints: distortion D_o (in terms of PSNR); total rate R_o in bits per pixel (bpp)

Table 3: System parameters.
5.2 Simulation Results
The real novelty and impact of the work is shown by the difference in power consumption between
optimized and fixed systems. The energy savings are obtained by reconfiguring the system to
take advantage of image and channel variabilities. The contribution of the input variability to the
reconfiguration is shown in the variation of the parameters in the configuration vector.
The proposed mobile-to-mobile image transmission system is tested for distances from 2m to
100m, multipath channels A and B described in the previous subsection, QCIF images "akiyo",
"carphone", "claire", "coastguard", "container", "hall objects", "mother and daughter" and
"silent", total rate R o of 3 bpp, and PSNR of 35dB.
To show the overall impact of the optimization approach, we compare the optimized system
to two fixed systems for the end-to-end distortion criterion and the P (fail) criterion. Then, the
contribution of each component of the system is analyzed in detail to show the origin of the power-consumption
savings.
1. Comparison with Fixed Systems
Figure
6 shows a comparison between the long-range optimized system and the long-range
transmit-power-controlled system (averaged over both channels, all the images, and both PSNR
constraints) where both systems employ the high-power amplifier designed for long distances. A
significant point is that the fixed system is not a worst-case comparison. The fixed system utilizes
the feedback channel to account for the variation of transmit power over distance but does not
account for the variation in the digital block parameters (as in the optimized system). The average
total power savings of 15.6% between the two systems arises due to reconfiguration of the source
rate, RS coder, and RAKE fingers.
The power amplifier at the transmitter is a significant component of the power consumption in
the image transmission system. The efficiency of the power amplifier ranges from 2.2% at the lowest
transmit power to its highest efficiency of about 50% at the maximum transmit power, P_t,max.
When operating the system over distances from 2m to 100m, the significant variation in transmit
power from 10 μW to 5 mW lowers the efficiency of the entire system. A plausible alternative is to
use a low-power amplifier for short distances and a high-power amplifier for longer distances.
Figure
7 shows a comparison between the short-range optimized system and the short-range
transmit-power-controlled system. The average total power savings due to optimization is 49.4%.
Comparing Figure 7 with Figure 6, we see that the low-power transmission system consumes less
than one-third of the total energy consumed in the original system designed for longer distances.
For this particular system, the choice of the power amplifier plays the most significant role in the
efficiency of the transmission system.
Now, we compare the optimized system to a worst-case system which does not employ power
Figure 5: The RAKE receiver: (a) block diagram and (b) architecture of a finger.
Figure 6: Comparison between transmit-power-controlled system and optimized system over long range.
Figure 7: Comparison between transmit-power-controlled system and optimized system over short range.
control. Figure 8 shows a comparison of the PSNR (averaged over both channels and all the images)
in the optimized system designed for 30dB and 35dB and a fixed system designed for the worst-case
image, maximum distance, and worst-case channel. The fixed system has constant performance
of at least 0.7dB above the optimized system, which achieves the PSNR constraint nearly with
equality. Although the fixed system performs slightly better than the optimized system in terms
of PSNR, it consumes three times more energy.
2. Fraction of total energy consumed in each component
The fraction of energy consumption in each component of the transmission system is shown
in Figure 9. At short distances, the RAKE receiver and transmit power are the most significant,
since the BERs at these distances are low enough that additional channel coding is not required.
At a distance of 20m, the fraction of digital energy reaches a peak of 29%, where the RS coder is
able to lower BERs by introducing channel coding. At distances over 20m, the fraction of digital
energy decreases since more transmit energy is required to compensate for path loss. More complex
channel coding techniques such as convolutional coding for the inner system may be used to increase
the range of distances over which digital power consumption is significant and hence benefit from
the techniques presented here.
3. Variation of t and P t
Figures 10 and 11 show the variation of the optimal parameters for the reconfigured system
over distance and over the various images. The significant variation of the t and P t parameters at
large distances shows the necessity of an optimization algorithm for those parameters. The optimal
number of RAKE fingers is found to be either 1 for short distances or 2 for long distances. The
number of RAKE fingers makes a significant impact on the total power consumption since the
RAKE consumes significant power. In addition, the variation of the number of RAKE fingers also
affects the energy per bit and consequently the other parameters.
4. P (fail)-Optimized Systems
The energy consumption due to the various blocks is shown in Figure 12 for the constraints
averaged over all the images for R_o = 3 bpp and Channel A. The energy consumption of the RAKE
receiver is the same for all four constraints and constant over the short distances (less than
15m) and long distances (over 15m).
Figure 8: Comparison between fixed system and optimized system performance.
Figure 9: Fraction of energy consumed in each component.
Figure 10: Optimized system parameters: (a) protection symbols and (b) RAKE fingers.
Figure 11: Transmit power consumed in the optimized system.
Figure 12: Energy consumed in each component.
For the PSNR = 35dB constraint, the energy consumption in the RS codec is smaller than for the PSNR = 30dB constraint at distances over 45m. The higher
source rate required to achieve the larger PSNR lowers the channel coding rate. At both PSNRs,
the energy consumption of the system achieving the lower P(fail) probability is only slightly
larger than that of the system achieving the higher P(fail) probability, due to the threshold characteristic of
the RS coder. The BER of the RS coder as a function of the SNR has a fast transition from near-certain error to
negligible error with only a small change in the SNR.
6 Conclusion
Total-system-energy minimization of a wireless image transmission system is achieved by dynamically
reconfiguring the architecture to exploit the variabilities in the image data and the multipath
wireless channel. The optimal configuration parameters for the reconfigurable system are chosen to
meet performance constraints by trading off the energy consumption of the digital blocks and the
power amplifier. Application of the optimization techniques to the indoor office environment shows
that the fraction of digital energy consumption to the total energy consumption in the optimized
system can range from 29% at a distance of 20m to 14% at 100m. The reduction in total energy
consumption in the optimized system averaged (equiprobably) over channels A and B, all distances,
both PSNR constraints, and both rates, is 53.6% and 67.3% for a short-range system (under 20m)
and long-range system (over 20m), respectively, over a fixed system designed for the worst-case
image, distance, and channel. In comparison to a fixed system employing power control to change
transmit power, but with fixed digital power, the reduction in total energy consumption drops to
49.4% and 15.6%, respectively, for the short-range and long-range systems. The most significant
part of the energy consumption was due to the inefficiency of the power amplifier at short distances.
Using a power amplifier with a lower maximum transmit power P_t,max decreased
the energy consumption of the system by 60%. Future work on reconfigurable architectures for
source coders (especially video coders) and more complex channel coders such as rate-compatible punctured
convolutional (RCPC) coders will increase the power consumption of the digital blocks and increase
the benefits of a total-system-energy minimization approach.
--R
"Adaptive low power multimedia wireless communications,"
"Reconfigurable processing : The solution to low-power programmable DSP,"
"Dynamic algorithm transforms for low-power reconfigurable adaptive equalizers,"
"Dynamic algorithm transformations (DAT) : A systematic approach to low-power reconfigurable signal processing,"
"Joint source channel matching for a wireless communications link,"
"Joint source and noisy channel trellis encoding,"
"Quantizing for noisy channels,"
"Optimal block cosine transform image coding for noisy channels,"
"Combined source-channel coding for transmission of video over a slow-fading Rician channel,"
"Joint source-channel coding for progressive transmission of embedded source coders,"
"A new fast and e#cient image codec based on set partitioning in hierarchical trees,"
Association of Radio Industries and Businesses (ARIB)
MA: Athena Scientific
"Statistical estimation of the switching activity in digital circuits,"
"Low-power channel coding via dynamic reconfiguration,"
"A single-chip 900-MHz spread spectrum wireless transceiver in 1-m CMOS - part I: Architecture and transmitter design,"
"Cubic spline approximation of rate and distortion functions for MPEG video,"
Theory and Practice of Error Control Codes.
ftp://sotka.
"E#cient semisystolic architectures for finite-field arith- metic,"
--TR
Statistical estimation of the switching activity in digital circuits
Dynamic algorithm transformations (DAT): a systematic approach to low-power reconfigurable signal processing
Joint Source-Channel Coding for Progressive Transmission of Embedded Source Coders
Reconfigurable Processing
--CTR
Nilanjan Banerjee , Georgios Karakonstantis , Kaushik Roy, Process variation tolerant low power DCT architecture, Proceedings of the conference on Design, automation and test in Europe, April 16-20, 2007, Nice, France
Kostas E. Psannis , Yutaka Ishibashi, MPEG-4 interactive video streaming over wireless networks, Proceedings of the 9th WSEAS International Conference on Computers, p.1-7, July 14-16, 2005, Athens, Greece
Radu Marculescu , Massoud Pedram , Joerg Henkel, Distributed Multimedia System Design: A Holistic Perspective, Proceedings of the conference on Design, automation and test in Europe, p.21342, February 16-20, 2004
Huaming Wu , Alhussein A. Abouzeid, Error resilient image transport in wireless sensor networks, Computer Networks: The International Journal of Computer and Telecommunications Networking, v.50 n.15, p.2873-2887, October 2006 | energy minimization;low power;joint source-channel coding;indoor wireless transmission |
372085 | Exploring Hypermedia Processor Design Space. | Distributed hypermedia systems that support collaboration are important emerging tools for creation, discovery, management and delivery of information. These systems are becoming increasingly desired and practical as other areas of information technologies advance. A framework is developed for efficiently exploring the hypermedia design space while intelligently capitalizing on tradeoffs between performance and area. We focus on a category of processors that are programmable yet optimized to a hypermedia application.The key components of the framework presented in this paper are a retargetable instruction-level parallelism compiler, instruction level simulators, a set of complete media applications written in a high level language, and a media processor synthesis algorithm. The framework addresses the need for efficient use of silicon by exploiting the instruction-level parallelism found in media applications by compilers that target multiple-instruction-issue processors.Using the developed framework we conduct an extensive exploration of the design space for a hypermedia application. We find that there is enough instruction-level parallelism in the typical media and communication applications to achieve highly concurrent execution when throughput requirements are high. On the other hand, when throughput requirements are low, there is little value in multiple-instruction-issue processors. Increased area does not improve performance enough to justify the use of multiple-instruction-issue processors when throughput requirements are low.The framework introduced in this paper is valuable in making early architecture design decisions such as cache and issue width trade-off when area is constrained, and the number of branch units and instruction issue width. | Introduction
In the last decade multimedia services have found increase use in application systems such as education and
training, office and business, information and point of sales [26]. In the recent years, the wide spread use of the
World Wide Web has produced a fertile ground for multimedia services [4, 5].
Hypermedia represents a combination of hypertext and multimedia technologies [17, 20, 29]. The concept of
hypertext was proposed more than 50 years ago by V. Bush [6] but T. Nelson [37] is generally credited as the first
to use the term "hypertext" [18].
There are two very different models for hypermedia [24]. One model uses hypermedia to deliver rigidly constrained
embedded applications, e.g. stand-alone CD-ROM based applications. The other model is distributed
hypermedia as a wide-area system for information discovery and management, e.g. World Wide Web [5] and
Hyper-G [23].
The common model for distributed hypermedia information systems supports three distinct roles:
passive participant, information/service provider, and active participation [24]. The last role is an emerging model
for collaboration. From the hypermedia processor designer's standpoint, the roles hypermedia systems must support
present a unique challenge since most media tasks are computationally demanding and execute concurrently,
yet require reliable and predictable operation. For example, tasks such as video and audio encoding/decoding, text
processing, image processing, authentication and encryption/decryption should run in parallel to support hyper-media
systems for collaboration.
We present an approach to distributed hypermedia system design space exploration for applications that support
collaboration roles. We focus on a category of processors that are programmable yet optimized to run a hypermedia
application. The approach utilizes advances in compiler technology and architectural enhancements.
Advances in compiler technology for instruction-level parallelism (ILP) have significantly increased the ability
of a microprocessor to exploit the opportunities for parallel execution that exist in various programs written in
high-level languages. State-of-the-art ILP compiler technologies are in the process of migrating from research
labs to product groups [3, 12, 21, 33, 34]. At the same time, a number of new microprocessor architectures have
been introduced. These devices have hardware structures that are well matched to most ILP compilers. Architectural
enhancements found in commercial products include predicated instruction execution, VLIW execution
and split register files [8, 43]. Multi-gauge arithmetic (or variable-width SIMD) is found in the family of MPACT
architectures from Chromatic [22] and the designs from MicroUnity [19]. Most of the multimedia extensions of
programmable processors also adopt this architectural enhancement [28, 39].
The key components of the framework presented in this paper are a retargetable ILP compiler, instruction level
simulators, a set of complete media applications written in a high level language and a media processor synthesis
algorithm.
We discuss the related works and our contributions in Section 2. Section 3 presents the preliminary materials
including the area model, the media application set and the experiment platform such as tools and procedures of
measuring application characteristics using the tools. Section 4 formulates the search problem and establishes
its complexity. Based on the problem formulation, we lay out the overall approach to area efficient hypermedia
processor synthesis. The solution space exploration strategy and algorithm is described in Section 5. The tools and
algorithms are extensively studied through experimentation in Section 6. Finally, Section 7 draws conclusions.
2. Related Works and Our Contributions
There are many articles that summarize research efforts in hypermedia systems [17, 20, 29]. The most common
Internet-based hypermedia technologies, World Wide Web [5] and Hyper-G [23], represent distributed hypermedia
systems that are widely available.
The concept of distributed hypermedia systems that support collaboration is emerging as an important idea.
Distributed hypermedia systems are becoming increasingly desired and practical as other areas of information
technologies advance [24].
Since the early 90's, there have been a number of efforts related to the design of application-specific programmable
processors and application-specific instruction sets. A comprehensive survey of the work on computer-aided design
of application specific programmable processors can be found in the literature ([16], [38], and [35]). In particular,
a great deal of effort has been made in combining retargetable compilation technologies and design of instruction
sets [1], [32], [42], [31], [30]. Several research groups have published results on the topic of selecting and designing
instruction set and processor architecture for a particular application domains [44], [25]. Potkonjak and
Wolf introduced hard real-time multitask ASIC design methodology by combining techniques from hard real-time
scheduling and behavioral synthesis [40, 41].
Early work in the area of processor architecture synthesis tended to employ ad hoc methods on small code kernels,
in large part due to the lack of good retargetable compiler technology. Conte and Mangione-Smith [9] presented
one of the first efforts to consider large application codes written in a high-level language (SPEC). While they had
a similar goal to ours, i.e. evaluating performance efficiency by including hardware cost, their evaluation approach
was substantially different. Conte, Menezes, and Sathaye [10] further refined this approach to consider power
consumption. Both of these efforts were limited by available compiler technology, and used a single application
binary scheduled for a scalar machine for execution on superscalar implementations. Fisher, Faraboschi and Desoli
[13] studied the variability of applications-specific VLIW processors using a highly advanced and retargetable
compiler. However, their study considered small program kernels rather than complete applications. They also
focused on finding the best possible architecture for a specific application or workload, rather than understanding
the difference between best architectures across a set of applications.
Unfortunately, however, we know of no work addressing synthesis of distributed hypermedia processors that can
support collaboration. As quality requirements change, it is necessary to take into account not only timing
and synchronization requirements, but also throughput requirements when designing a hypermedia processor. The
benefits of advances in compiler technology and architectures also should be incorporated in designing hypermedia
systems as low-level programming is increasingly less practical.
We focus on a category of processors that are programmable yet optimized to a hypermedia application for varying
degrees of throughput. We use a state-of-the-art ILP compiler and simulation tools to evaluate performance of
media applications on a variety of multiple-instruction-issue processors. We develop an efficient optimal solution
algorithm for the synthesis of hypermedia processors. Although the synthesis problem is NP-complete, we find
that the optimal solution algorithm is practical for a typical hypermedia scenario.
The design space exploration experiment reported in this paper does not include dynamic resource allocation where
instances of media tasks arrive and resources are dynamically allocated for arriving tasks. Our main objective is
to study the viability of design space exploration for hypermedia processors using media applications written in a
high-level language.
3. Preliminaries
In this section we provide the definitions of terms and present existing foundations. After describing the target
architecture and the area model, the media workload is introduced. In the last subsection, we explain the experimental
platform including tools and procedures of measuring application characteristics using the tools.
3.1. Definitions and Assumptions
We use the terms task, individual application and media application interchangeably to refer to a task in a hypermedia
application. We assume that an integrated circuit can be physically partitioned to run different tasks in
parallel. Hence, each partition should be a complete processor. Processor, machine and machine configuration
are used interchangeably throughout the paper. Depending on the context they are used in, they refer either to a
single partition within a processor for a hypermedia application or to the entire set of processors for a hypermedia
application.
On the run time measurement platform, individual applications are divided into fixed run time units called quanta.
In our experiment, we use as one quantum length the time taken to encode or decode 4 MPEG-2 frames. For a
typical throughput of 15 frames per second, the quantum size is approximately 0.267 seconds. The performance
constraints used to drive our experiments are based on the number of processor cycles equivalent to at least one
quantum.
Figure 1. A hypermedia scenario: an encoding side (cjpeg, rawcaudio, pegwitenc, ...) and a decoding side (djpeg, rawdaudio, pegwitdec, ...) operating on video, image, and audio streams.
3.2. Target Architecture
The target architecture we use resembles a multiprocessor system with shared memory except that all the processors
for a given usage scenario are laid out on a single die. A media task is assigned to its dedicated processor.
More than one media application can be assigned to a processor if the given performance constraints are guaranteed
to be met, i.e. all media tasks on the processor must be finished within a given time limit. Shared memory
is used for data communication between tasks, with each processor maintaining its own cache. When multiple
media applications are assigned to a single processor, flushing and refilling of the cache is accurately incorporated
into the run time measurement platform.
As defined previously, we divide tasks into quanta. One of the benefits of using the notion of a quantum is that
it simplifies synchronization of several applications running on multiple processors. Because most hypermedia
applications have real-time performance characteristics, the user receives no benefit from performance that exceeds
the specified goals. Hence, the use of quanta equivalent to the longest time frame with which tasks should be
synchronized gives a convenient task assignment unit to allocate resources.
Figure 1 shows one hypermedia scenario that we will use to evaluate our framework. The scenario consists of
8 media applications, which are separated into two distinctive sets of media tasks: the encoder group and the
decoder group. The figure illustrates synchronization boundaries between media tasks. For example, during the
first quantum, video, audio or still images are encoded. At the next quantum, the encoded data sets are encrypted,
while new video, audio and still images are encoded in pipeline fashion.
3.3. Area Model
We use the Intel StrongArm SA-110 as the baseline architecture for our analysis [36]. The device is a single-issue
processor with the classic five stage pipeline. The SA-110 has an instruction issue unit, integer execution unit,
integer multiplier, memory management unit for data and instructions, cache structures for data and instructions,
Configuration | Issue | IALU | Branch | Mem | Cache | Total
(4, 4, 1, 4, 2, 2) | 5.0 | 10.0 | 1.25 | 20.0 | 4.55 | 43.28

Table 1. Processor configuration examples and their area estimates (mm^2). A configuration
consists of (issue width, number of ALUs, number of branch units, number of memory units, size of
instruction cache (KB), size of data cache (KB)).
and additional units such as phase locked loop (PLL). It is fabricated in a 0.35-μm three-metal CMOS process
with 0.35V thresholds and 2V nominal supply voltage.
We develop a simple area model based on the SA-110. The area of the chip is 49.92mm^2 (7.8mm × 6.4mm).
Approximately 25% of the die area is devoted to the core (12.48mm^2). The issue unit and branch unit occupy
approximately 5% of the die area (2.50mm^2). The integer ALU and load/store unit consume roughly 5% of the
die area (2.50mm^2). The DMMU and IMMU (data and instruction MMU) occupy roughly 10% (5mm^2) of the
area. The rest of the core area is used by other units such as the write buffers and bus interface controller. We
assume that the area of miscellaneous units is relatively stable in the sense that it does not change as we increase
the issue width or cache sizes. We further assume a VLIW issue unit area model which is generally of complexity
O(n).
The model is based on area models used in [13] and [2]. In particular, we use partitioned register files and multi-cluster
machines to simplify the model. Consequently, the areas of components such as datapath and register files,
that are generally of super-linear complexity, can be assumed to be of linear complexity. The chip area model is
linear. Cycle speed can remain constant across various machine configurations when multi-cluster machines are
used. The model may not be extremely accurate, but it may be good enough to demonstrate the framework without
going into the details of building the entire machine. The area of an arbitrarily configured VLIW machine is given
by
branch A branch nmemAmem +A misc
The terms n issue , A issue , nALU , AALU , n branch , A branch , nmem , Amem and A misc are the issue width, the baseline
issue unit area, the number of ALUs, the area of a single ALU, the number of branch units, the branch unit area,
the number of memory units, the area of single memory unit and miscellaneous area, respectively.
We did not include floating-point units in any machine configuration.
$define ISSUE 1
$define IALU 1
$define BRANCH 1
$define MODEL superscalar
# Enumerate resources
(Resources declaration
mem 0[0.$ISSUE$]
end)
Figure 2. An example High-Level Machine Description (HMDES)
This is because the applications we used have only integer operations. All functional units have multipliers. Cache area is calculated using the Cache Design Tools
[14]. We use the same cache parameters for all cache configurations except their size and the number of read/write
ports: 64 bytes per line and single bank. The external bus width is 64 bits and latency is 4 cycles. We assume that
the number of read/write ports is the same as the number of functional units.
A set of example area estimates for superscalar machines with different cache and core configurations is shown
in Table 1. In the rest of the paper we describe a machine configuration by a 6-tuple as shown in Table 1.
3.4. Media Applications
The set of media applications used in this experiment is composed of complete applications which are publicly
available and coded in a high-level language. We use 8 applications culled from available image processing,
communications, cryptography and DSP applications. Brief summaries of applications and data used are shown
in
Table
3. More detailed descriptions of the applications can be found in [27].
3.5. Experiment Platform
We use the IMPACT tool suite [7] to measure run times of media applications on various machine configura-
tions. The IMPACT C compiler is a retargetable compiler with code optimization phases especially developed for
multiple-instruction-issue processors. The target machine for the IMPACT C can be described using the high-level
machine description language (HMDES). A high-level machine description supplied by a user is compiled by the
IMPACT machine description language compiler. Figure 2 shows an example HMDES file.
IMPACT provides cycle-level simulation of both the processor architecture and implementation. The optimized
code is consumed by the Lsim simulator. At simulation time, Lsim takes cache structure information provided by
a user.
Figure 3 shows the flow of simulation using the IMPACT tools.
Figure 3. Performance measurement flow using IMPACT tools
Configuration | Area (mm^2) | A1 | A2 | A3
M3 = (4, 4, 1, 4, 2, 2) | 43.28 | ... | ... | ...

Table 2. Illustrative run time examples: a machine configuration consists of (issue width, number of ALUs,
number of branch units, number of memory units, size of instruction cache (KB), size of data cache (KB)).
Application | Instr.^a | Source | Description | Data file^b | Data Description
JPEG encoder | 13.9 | Independent JPEG Group | JPEG image encoding/decoding | 101,484 | PPM (bit map)
JPEG decoder | 3.8 | Independent JPEG Group | JPEG image encoding/decoding | 5,756 | JPEG compressed
MPEG encoder | 1,121.3 | MPEG Simulation Group | MPEG-2 movie encoding/decoding | 506,880 | YUV 4 frames
MPEG decoder | 175.5 | MPEG Simulation Group | MPEG-2 movie encoding/decoding | 34,906 | MPEG-2 (http://www.mpeg2.de/)
Pegwit encryption | 34.0 | George Barwood | encryption/decryption | 91,503 | plain ASCII
Pegwit decryption | 18.5 | George Barwood | encryption/decryption | 91,537 | Pegwit encrypted
ADPCM encoder | 6.8 | Jack Jansen | speech compression and decompression | 295,040 |
ADPCM decoder | 5.9 | Jack Jansen | speech compression and decompression | 73,760 | ADPCM encoded

Table 3. A brief description of applications and data used in the experiment
^a Dynamic instruction count measured using SpixTools on a SPARC5 (in millions)
^b in bytes
4. Problem Formulation
Informally the problem can be stated as follows. For a given set of media applications and their performance
constraints, synthesize an area optimal processor that guarantees the timing requirements of the applications.
Under the assumptions given in Section 3, run times of each media task on each architecture in consideration are
measured. The measured run times can be organized in a table as shown in Table 2.
One obvious, albeit sub-optimal, solution can be obtained directly from the table by selecting best processors for
each individual task. Subsequently, an individual task is assigned to a corresponding processor that guarantees run
time constraints. For example, from Table 2 we can choose processors M1 and M2 for media applications A1
and A3, respectively. This simple-minded solution can be good enough when we cannot find much parallelism
across all the tasks and the run times of applications on selected processors are similar, that is, if we cannot
reduce run times of applications by increasing the resources of a processor. If no significant parallelism is
present in the tasks, a multiple-instruction-issue processor is little better than a single-instruction-issue
processor. In other words, it is not possible to reduce the run times of an individual task and to assign more
than one task to a multiple-instruction-issue
processor, while satisfying timing requirements, just because it has more resources. This approach, however, will
result in grossly inefficient solutions if run times are not similar across tasks. Assuming that M 3 is not available
in the example, the above mentioned solution is the optimum solution.
Unfortunately, as alluded before, the solution for the problem is no longer trivial when there is a need to run
more than one task on a processor. This situation results when we can find enough parallelism across tasks
and performance requirements are such that we can justify the use of multiple-instruction-issue processors, thus
requiring the merging of more than one task onto a single processor. For example, from Table 2, we can choose
M1 and M3, since A2 and A3 can run on M3 in turn without violating timing requirements. The sum of the areas
of the two processors (57.29 mm^2) is smaller than that of the simple-minded solution (65.53 mm^2) discussed
above. In this
case, the problem size is much bigger than the simple-minded case. This is similar to the Bin Packing Problem
(BPP) [11]. In fact, we can transform the BPP to this problem in polynomial time as will be shown next in this
section. Under this scenario, we have 2^n - 1 subsets (the power set of the task set, excluding the empty set)
that should be considered to choose up to n processors, where n is the number of tasks.
We now define the problem in the more formal Garey-Johnson format [15].
Selection Problem
Instance: Given a set T of n media applications a_i (1 ≤ i ≤ n), a set C of m candidate machines c_j
(1 ≤ j ≤ m), the run times e_ij of the media applications a_i on the machines c_j, an area bound A_o, and a
timing constraint E_o.
Question: Is there a multisubset (a subset in which more than one instance of a processor can be included) M of
k processors, with each c_p in M drawn from C, and an assignment of the tasks to the selected processors such
that Σ_{j ∈ M} A_j ≤ A_o and max_{j ∈ M} Σ_{i ∈ t_j} e_ij ≤ E_o, where A_j is the area of the machine c_j and
t_j is the set of tasks that are assigned to the machine j?
Theorem. The Selection Problem is NP-complete.
Proof. The Bin Packing Problem can be mapped to a special case of the Selection Problem. For a given task set
and an integer k 2 feje objects to 2 (excluding the
empty set from the power set of T ) of T . The k bins in the BPP are mapped to the k processors.
Note, however, that the simulation time to measure run times of each task on each processor takes well over two
weeks depending on the experiment setup, it is well worth to try to devise an efficient algorithm to obtain the
optimum solutions. In the following section, we explain an efficient optimal algorithm for the problem.
5. System Synthesis
In this section we informally explore the processor selection space and describe the framework of the approach.
We elaborate an efficient algorithm for optimum solutions.
5.1. Global Design Flow
We collect run times (expressed as a number of cycles) of the media applications on 175 different machine configurations
(25 cache configurations for 7 processor configurations). First we build executables of the media
applications on seven different architectures. We did not modify the applications in any way to affect the compiler's ability
to find available ILP. In fact, we use the applications "out of the box" except that we eliminate graphical screen
outputs. All the compilation was done with advanced scheduling features designed for multiple-issue machines
turned on.
The processors considered are machines with a single branch unit and one of the one-, two-, four-, and eight-issue
units, machines with two branch units and one of the four- and eight-issue units, and machines with four branch
units and an eight-issue unit. The IMPACT compiler generates aggressively optimized code to increase achieved
ILP for each core. The optimized code is consumed by the Lsim simulator. We simulate the applications for a
number of different cache configurations. For each executable of a benchmark, we simulate 25 combinations of
instruction cache and data cache ranging from (512 bytes, 512 bytes) to (8 KB, 8 KB).
After all the simulations are completed, we run an efficient selection algorithm to select machine configuration
sets under various performance constraints. Figure 4 shows the global flow of the design process.
5.2. Synthesis Algorithm
In the following we develop a branch and bound based algorithm for the selection of area-minimal hypermedia
processor configurations. From the run times measured as illustrated in Table 2, the algorithm examines all
possible combinations of media task run times to find the best processor configurations.

Figure 4. Global design flow. E is the set of run times and P(A) is the power set of a given set of media tasks.
Before entering the branch and bound loop, the algorithm finds the area optimal processor for each element of the
power set of the media applications. This step eliminates all processors that are sub-optimal in terms of running
partial task sets the size of which ranges from 1 to n, n being the number of media tasks. If, for a subset of tasks,
we cannot find a processor that can satisfy a given timing constraint, then there is no feasible solution that
runs the particular subset of tasks on a processor. The algorithm is given in Figures 5 and 6.
The run time of the algorithm is dependent on two factors: the size of a given task set and the timing constraint.
It is apparent that the size of a given task set affects the run time as the combination of tasks is exponential with
respect to the number of tasks. As we relax the timing constraint, the number of feasible processors increases.
This increase results in limited pruning of possible combinations of tasks in early stages of the algorithm. In
our experiment, which is run on a SPARC4 machine and reported later, the run times of the algorithm range
from less than one second for the strictest timing constraint to approximately 50 seconds for the least strict timing
constraint.
6. Experimental Results
We evaluate the framework presented in this paper by conducting an experiment with a set of 8 media applications
shown in Figure 1. The range of the performance constraints we examined is from 5.7 × 10^6 to 4.47 × 10^7 cycles,
which is the maximum number of cycles allotted to finish processing a quantum worth of computation. In our
experiment, a quantum size is equal to 0.267 seconds, which implies that the speed constraint of the processors
ranges from 21.37 MHz to 167.6 MHz.
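The required clock rate follows directly from the cycle budget and the quantum length; a quick check in Python:

# Required clock = cycle budget per quantum / quantum length (4 frames / 15 fps).
quantum_s = 4 / 15                       # about 0.267 seconds
for cycles in (5.7e6, 44.7e6):
    print(f"{cycles:.3g} cycles -> {cycles / quantum_s / 1e6:.3f} MHz")
# Prints 21.375 MHz and 167.625 MHz, i.e. the 21.37-167.6 MHz range quoted above.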
In this experiment, for the tightest constraint (5.7 million cycles) six processor components are selected. As we
allow more cycle time to finish a quantum, we need fewer processors and less cache memory. Note that processor
configurations with the same numbers (e.g., the 1st processor under the constraint of 5.7 million cycles and the
1st processor under the constraint of 16.2 million cycles) are not necessarily the same in terms of the issue
unit width, the number of branch
units, and so forth.
Figure 7 and Table 4 show changes in the number of selected processors and overall area as the performance constraint varies.
A = the set of media applications;
M = the set of processor (machine) models;
C = the set of cache configurations;
E = Construct Run Time Table(A, M, C);
S = the power set of A, excluding the empty set;
T = the set of timing constraints;
for each timing constraint t in T
    Branch and Bound(S, t);

Construct Run Time Table(A, M, C)
    for each a in A
        for each m in M
            generate an executable;
            for each c in C
                record the run time of the executable with cache configuration c;
    return the table E of run times;

for each element s in S
    find the minimum area processor that can run all tasks in s in t;
    include the processor in the candidate set;
return the candidate set;

Figure 5. Synthesis of hypermedia processors
SET Branch and Bound(S, t)
    initialise best and best node;
    Generate New Nodes(S, t, stack, best, node);
    while( the stack is not empty )
        take a node out of the stack;
        if ( Cost( node ) < best )
            best = Cost( node );
            best node = node;
        else
            Generate New Nodes(S, t, stack, best, node);
    return best node;

Generate New Nodes(S, t, stack, best, node)
    for each s in S
        if ( extending node with s can still improve on best )
            insert s into the stack;

Figure 6. Synthesis of hypermedia processors (Continued)
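The following Python sketch gives one plausible reading of the selection procedure outlined in Figures 5 and 6: enumerate partitions of the task set, assign each group the cheapest processor/cache configuration that meets the timing constraint t, prune any partial partition that already exceeds the best total area found so far, and keep the minimum-area partition. The function names, the additive run-time model for tasks sharing a processor, and the pruning rule are assumptions of this sketch rather than the authors' code.

```python
from itertools import combinations

def min_area_config(task_group, configs, run_time, area, t):
    """Cheapest configuration that runs every task in `task_group` within t cycles.
    `run_time(config, task)` and `area(config)` stand in for the measured tables."""
    feasible = [c for c in configs
                if sum(run_time(c, task) for task in task_group) <= t]
    return min(feasible, key=area, default=None)

def branch_and_bound(tasks, configs, run_time, area, t):
    """Partition `tasks` into groups, one processor per group, minimising total area."""
    tasks = list(tasks)
    best = {"area": float("inf"), "partition": None}

    def recurse(remaining, chosen, area_so_far):
        if area_so_far >= best["area"]:          # prune: cannot improve on the incumbent
            return
        if not remaining:
            best["area"], best["partition"] = area_so_far, list(chosen)
            return
        first, rest = remaining[0], remaining[1:]
        # Enumerate every subset of `rest` that shares a processor with `first`.
        for r in range(len(rest) + 1):
            for extra in combinations(rest, r):
                group = (first,) + extra
                cfg = min_area_config(group, configs, run_time, area, t)
                if cfg is None:                  # no feasible processor for this group
                    continue
                leftover = [x for x in rest if x not in extra]
                recurse(leftover, chosen + [(group, cfg)], area_so_far + area(cfg))

    recurse(tasks, [], 0.0)
    return best
```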
Figure 7. Minimum area configurations for a range of cycle time constraints (total area versus performance
constraint in millions of cycles, broken down by the 1st through 6th processors).
Figure 8. Issue widths of minimum area configurations for a range of cycle time constraints (issue width versus
performance constraint in millions of cycles, per processor).
Figure 9. Branch widths of minimum area configurations for a range of cycle time constraints (number of branch
units versus performance constraint in millions of cycles, per processor).
constraint loosens. Looser timing results in a reduction of both the total number of processors used and the total
processor area. For a lower performance requirement, more applications can be fit onto a single small processor
rather than onto separate multi-issue processors. This phenomenon can also be seen in Figure 8, which shows
the issue width of the processors as the performance constraint varies. Single-issue processors are dominant in the
performance constraint range from 1.47 × 10^7 to 4.47 × 10^7 cycles. The reason is that single-issue
machines give more performance per unit area than wider-issue machines: the ILP available in the applications
does not compensate for the increase in area of wider issue units. Therefore, wide-issue machines only appear to
be viable choices between 5.7 × 10^6 and 1.17 × 10^7 cycles, where speed is more critical.
The number of branch units shows similar characteristics. Figure 9 shows that only the extreme case, where
the fewest cycles are available, would require 2 branch units on a single processor. The main reason for this result
is that larger numbers of branch units appear only in wide-issue machines, and for those machines the available
ILP of the applications does not deliver a commensurate speedup.
Figures 10 and 11 show the total size of the instruction cache (I-cache) and data cache (D-cache) as the performance
constraint varies. At first sight the sizes appear to vary randomly. However, the general trend is that the cache requirement
goes down as we loosen the constraint. Examining the results further reveals that all the bumps where the total
size goes from a local minimum to a local maximum occur exactly when the number of processors is reduced. The
processor area saved by reducing the issue width is transferred to a larger cache area so that the applications
can run faster and fit onto a smaller number of processors. Table 5 summarizes instruction and data cache sizes
Cycles   1st      2nd      3rd      4th      5th      6th      Sum
5.7      68.2657  61.8987  80.9638  25.0326  21.0357  19.0315  276.228
7.2      38.6433  31.4651  17.0273  68.2657  17.0273  -        172.4287
8.7      14.0115  23.0284  14.5148  14.5148  27.866   31.4651  125.4006
10.2     17.5236  14.0115  14.0115  38.6433  26.0623  -        110.2522
13.2     25.0326  18.0268  16.0226  17.0273  -        -        76.1093
19.2     16.0226  17.0273  18.0268  -        -        -        51.0767
22.2     14.5148  15.5194  15.018   -        -        -        45.0522
25.2     21.0357  19.0315  -        -        -        -        40.0672
26.7     21.0357  16.0226  -        -        -        -        37.0583
28.2     19.0315  16.0226  -        -        -        -        35.0541
34.2     14.5148  15.018   -        -        -        -        29.5328
37.2     14.0115  14.5148  -        -        -        -        28.5263
40.2     14.0115  14.0115  -        -        -        -        28.023
41.7     14.0115  14.0115  -        -        -        -        28.023
43.2     14.0115  14.0115  -        -        -        -        28.023
Table 4. Required area (mm^2) to meet performance constraints
Figure 10. I-cache size of minimum area configurations for a range of cycle time constraints (instruction cache
size in Kbytes versus performance constraint, per processor).
required to meet a variety of cycle time constraints.
Figures 12 and 13 show the required cache amounts for the encoder group and the decoder group (see Figure 1 for
the explanation of the terms), respectively. We see that the D-cache size is roughly the same for encoding and
decoding, since the working set and the temporal locality are small. This is not the case for the I-cache, since the
encoding applications are more computationally intensive and have bigger inner loops. For example, mpeg2enc
requires 50% more cycles to encode a given number of frames than mpeg2dec needs to decode them on the same
processor configuration. Thus, higher I-cache and area usage is seen for encoding applications than for decoding at
the same performance constraint. Figure 14 shows the area used for the encoding, decoding and combined sets. The
encoding set requires slightly more area than the decoding set. Since the combined set includes all the applications
and has more configuration combinations to choose from, its total area is less than the sum of the areas of the
separate encoding and decoding sets.
7. Conclusion
A distributed hypermedia system that supports collaboration is an emerging tool for creation, discovery, management
and delivery of information. Distributed hypermedia systems are becoming increasingly desirable and practical
as other areas of information technology advance.
The advances in compiler technology and architectural enhancements found in commercial DSPs motivated this
Figure 11. D-cache size of minimum area configurations for a range of cycle time constraints (data cache size in
Kbytes versus performance constraint, per processor).
Figure 12. Required cache amount of minimum area configurations for encoding tasks (instruction cache size in
Kbytes versus performance constraint in millions of cycles, for the encoding, decoding and combined application
sets).
Table 5. Cache variation: instruction (I) and data (D) cache sizes for the 1st through 6th processors and their
sum, at cycle constraints of 16.2, 28.2, 37.2, 41.7 and 43.2 million cycles.
Figure 13. Required cache amount of minimum area configurations for decoding tasks (data cache size in Kbytes
for the encoding, decoding and combined application sets).
work. Thus, the approach presented in this paper makes use of a state-of-the-art ILP compiler, simulators, the
notion of multiple-instruction-issue processors, and an efficient algorithm that combines processors optimized for an
individual task or a set of tasks so as to minimize the area of the resulting hypermedia processor. Although the selection
problem is shown to be NP-complete, we found that optimal solutions can be obtained in reasonable run time
for practically sized problems.
Using the developed framework we conduct an extensive exploration of the area-optimal system design space
for a hypermedia application. We found that there is enough ILP in the typical media and communication
applications to achieve highly concurrent execution when throughput requirements are high. On the other hand,
when throughput requirements are low, there is no need to use multiple-instruction-issue processors as they provide
no desirable benefits. This phenomenon is due to the fact that increased area does not produce enough performance
gains to justify the use of multiple-instruction-issue processors when throughput requirements are low.
The framework introduced in this paper is valuable for making early architectural design decisions, such as the
trade-off between cache size and issue width under an area constraint, and the choice of the number of branch
units and issue width.
Instruction Set Design and Optimizations for Address Computation in DSP Architectures | hypermedia processor;synthesis framework;workload characterization;instruction-level parallelism |
372816 | Rationalising the Renormalisation Method of Kanatani. | The renormalisation technique of Kanatani is intended to iteratively minimise a cost function of a certain form while avoiding systematic bias inherent in the common method of minimisation due to Sampson. Within the computer vision community, the technique has generally proven difficult to absorb. This work presents an alternative derivation of the technique, and places it in the context of other approaches. We first show that the minimiser of the cost function must satisfy a special variational equation. A Newton-like, fundamental numerical scheme is presented with the property that its theoretical limit coincides with the minimiser. Standard statistical techniques are then employed to derive afresh several renormalisation schemes. The fundamental scheme proves pivotal in the rationalising of the renormalisation and other schemes, and enables us to show that the renormalisation schemes do not have as their theoretical limit the desired minimiser. The various minimisation schemes are finally subjected to a comparative performance analysis under controlled conditions. | Introduction
Many problems in computer vision are readily formulated as the need to minimise a
cost function with respect to some unknown parameters. Such a cost function will
often involve (known) covariance matrices characterising uncertainty of the data and
will take the form of a sum of quotients of quadratic forms in the parameters. Finding
the values of the parameters that minimise such a cost function is often difficult.
One approach to minimising a cost function represented as a sum of fractional expressions
is attributed to Sampson. Here, an initial estimate is substituted into the denominators
of the cost function, and a minimiser is sought for the now scalar-weighted
numerators. This procedure is then repeated using the newly obtained estimate until
convergence is obtained. It emerges that this approach is biased. Noting this, Kenichi
Kanatani developed a renormalisation method whereby an attempt is made at each iteration
to undo the biasing effects. Many examples may be found in the literature of
problems benefiting from this approach.
In this work, we carefully analyse the renormalisation concept, and place it in the
context of other approaches. We first specify the general problem form and an associated
cost function to which renormalisation is applicable. We then show that the
cost function minimiser must satisfy a particular variational equation. Interestingly,
we observe that the renormalisation estimate is not a theoretical minimiser of the cost
function, and neither are estimates obtained via some other commonly used methods.
This is in contrast to a fundamental numerical scheme that we present.
New derivations are given for Kanatani's first-order and second-order renormalisation
schemes, and several variations on the theme are proposed. This serves as a
rationalising of renormalisation, making recourse to various statistical concepts. Experiments
are carried out on the benchmark problem of estimating ellipses from synthetic
data points and their covariances. The renormalisation schemes are shown to
perform better than more traditional methods in the face of data exhibiting noise that
is anisotropic and inhomogeneous. None of the methods outperforms the relatively
straightforward fundamental numerical scheme.
2 Problem Formulation
A wide class of computer vision problems may be couched in terms of an equation of
the form
θ^T u(x) = 0.                                                        (1)
Here θ is a length-l vector representing unknown parameters; x is a length-k
vector representing an element of the data (for example, the locations of a pair of
corresponding points); and u(x) = [u_1(x), …, u_l(x)]^T is a vector with the data transformed
in such a way that: (i) each component u_i(x) is a quadratic form in the compound
vector [x^T, 1]^T; (ii) one component of u(x) is equal to 1. An ancillary constraint may
also apply that does not involve the data, and this can be expressed as
φ(θ) = 0                                                             (2)
for some scalar-valued function φ. The estimation problem can now be stated as follows:
Given a collection x_1, …, x_n of image data, determine θ ≠ 0 satisfying (2)
such that (1) holds with x replaced by x_i for 1 ≤ i ≤ n. When n > l and noise is
present, the corresponding system of equations is overdetermined and as such may fail
to have a non-zero solution. In this situation, we are concerned with finding that best
fits the data in some sense. The form of this vision problem involving (known) co-variance
information was first studied in detail by Kanatani [12], and later by various
others (see, e.g., [4, 13,14,20, 21]).
Conic fitting is one problem of this kind [2, 23]. Two other conformant problems
are estimating the coefficients of the epipolar equation [6], and estimating the coefficients
of the differential epipolar equation [3, 22]. Each of these problems involves an
ancillary cubic constraint. The precise way in which these example problems accord
with our problem form is described in a companion work [4].
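For concreteness, in the conic-fitting instance the carrier vector can be taken as the monomials of degree at most two in the image point, so that θ^T u(x) = 0 describes a general conic. The sketch below is illustrative only; the component ordering is an assumption, and the precise correspondence for each problem is given in [4].

```python
import numpy as np

def conic_carriers(x):
    """Carrier vector u(x) for conic fitting: each component is (at most) quadratic
    in the image point x = (x1, x2), and the last component is the constant 1."""
    x1, x2 = x
    return np.array([x1 * x1, x1 * x2, x2 * x2, x1, x2, 1.0])

# With theta = (A, B, C, D, E, F), theta @ conic_carriers((x1, x2)) == 0 is the
# usual conic equation A*x1^2 + B*x1*x2 + C*x2^2 + D*x1 + E*x2 + F = 0.
```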
3 Cost Functions and Estimators
A vast class of techniques for solving our problem rests upon the use of cost functions
measuring the extent to which the data and candidate estimates fail to satisfy (1). If,
for simplicity, one sets aside the ancillary constraint, then, given a cost function
J = J(θ; x_1, …, x_n), the corresponding estimate θ̂ is defined by
θ̂ = arg min_{θ ≠ 0} J(θ; x_1, …, x_n).                               (3)
Since (1) does not change if θ is multiplied by a non-zero scalar, it is natural to demand
that θ̂ should satisfy (3) together with all the λθ̂, where λ is a non-zero scalar. This is
guaranteed if J is homogeneous of degree zero in θ:
J(λθ; x_1, …, x_n) = J(θ; x_1, …, x_n) for every non-zero scalar λ.
Henceforth only such homogeneous cost functions will be considered. The assignment
of θ̂ (uniquely defined up to a scalar factor) to the data will be termed the J-based
estimator of θ.
Once an estimate has been generated by minimising a specific cost function, the
ancillary constraint (if it applies) can further be accommodated via an adjustment pro-
cedure. One possibility is to use a general scheme delivering an 'optimal correction'
described in [12, Subsec. 9.5.2]. In what follows we shall confine our attention to the
estimation phase that precedes adjustment.
3.1 Algebraic Least Squares Estimator
A straightforward estimator is derived from the cost function
J_ALS(θ; x_1, …, x_n) = Σ_{i=1}^n (θ^T u(x_i))^2 = θ^T ( Σ_{i=1}^n A_i ) θ,
where A_i = u(x_i) u(x_i)^T for i = 1, …, n. Here each summand θ^T A_i θ
is the square of the algebraic distance |θ^T u(x_i)|. Accordingly, the J_ALS-based estimate
of θ is termed the algebraic least squares (ALS) estimate and is denoted θ̂_ALS. It
is uniquely determined, up to a scalar factor, by an eigenvector of Σ_{i=1}^n A_i associated
with the smallest eigenvalue [4].
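The ALS estimate can be computed directly by an eigendecomposition; a minimal sketch follows (the carrier function is passed in, for example the conic carriers above).

```python
import numpy as np

def als_estimate(data_points, carriers):
    """Algebraic least squares: minimise sum_i (theta^T u(x_i))^2 over unit-norm theta.
    The minimiser is an eigenvector of sum_i u(x_i) u(x_i)^T for the smallest eigenvalue."""
    U = np.array([carriers(x) for x in data_points])   # n x l matrix of carrier vectors
    S = U.T @ U                                        # sum of the rank-one A_i terms
    eigvals, eigvecs = np.linalg.eigh(S)
    return eigvecs[:, 0]                               # eigenvector of the smallest eigenvalue
```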
3.2 Approximated Maximum Likelihood Estimator
The ALS estimator treats all data as being equally valuable. When information about
the measurement errors is available, it is desirable that it be incorporated into the estimation
process. Here we present an estimator capable of informed weighting. It is
based on the principle of maximum likelihood and draws upon Kanatani's work on
geometric fitting [12, Chap. 7].
The measurement errors being generally unknowable, we regard the collective data
sample value taken on by an aggregate of vector-valued random
variables We assume that the distribution of exactly
specified but is an element of a collection fP j 2 Hg of candidate distributions,
with H the set of all (n
The candidate distributions are to be such that if a distribution P is in effect, then each
n) is a noise-driven, fluctuating quantity around x i .
We assume that the data come equipped with a collection
positive
definite k k covariance matrices. These matrices constitute repositories of prior
information about the uncertainty of the data. We put the x i
in use by assuming that,
for each 2 H, P is the unique distribution satisfying the following conditions:
for any the random vectors x i and x j (or equivalently,
the noises behind x i and x j ) are stochastically independent;
for each normal distribution
with mean value vector x i and covariance matrix x i
, that is:
Each distribution P will readily be described in terms of a probability density
function (PDF) (~ x
Resorting to the principle of maximum
likelihood, we give the greatest confidence to that choice of for which the
likelihood function 7! f(x attains a maximum. Using the explicit
form of the PDF's involved, one can show that the maximum likelihood estimate is the
parameter b ML at which the cost function
attains a minimum [4, 12]. Each term in the above summation represents the squared
Mahalanobis distance between x i and x i . Note that the value of b
ML remains unchanged
if the covariance matrices are multiplied by a common scalar.
The parameter naturally splits into two parts: 1
These parts encompass the principal parameters and nuisance
parameters, respectively. We are mostly interested in the 1 -part of b ML , which we
call the maximum likelihood estimate of and denote b ML . It turns out that b
ML
can be identified as the minimiser of a certain cost function which is directly derivable
from JML . This cost function does not lend itself to explicit calculation. However, a
tractable approximation [4] can be derived in the form of the function
J_AML(θ; x_1, …, x_n) = Σ_{i=1}^n (θ^T u(x_i) u(x_i)^T θ) / (θ^T ∂_x u(x_i) Λ_{x_i} ∂_x u(x_i)^T θ),
where ∂_x u denotes the l × k matrix of the partial derivatives of u. If, for any k-vector y and any k × k
matrix Λ, we let
A(y, Λ) = u(y) u(y)^T   and   B(y, Λ) = ∂_x u(y) Λ ∂_x u(y)^T,
and next, for each i = 1, …, n, set A_i = A(x_i, Λ_{x_i}) and B_i = B(x_i, Λ_{x_i}), then J_AML can be simply
written as
J_AML(θ; x_1, …, x_n) = Σ_{i=1}^n (θ^T A_i θ) / (θ^T B_i θ).
The J_AML-based estimate of θ will be called the approximated maximum likelihood
(AML) estimate and will be denoted θ̂_AML.
It should be observed that JAML can be derived without recourse to principles of
maximum likelihood by, for example, using a gradient weighted approach that also
incorporates covariances. Various terms may therefore be used to describe methods
that aim to minimise a cost function such as JAML , although some of the terms may
not be fully discriminating. Candidate labels include 'heteroscedastic regression' [13],
'weighted orthogonal regression' [1, 9], and `gradient weighted least squares' [24].
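The following sketch spells out one standard reading of J_AML for this problem class, with A_i = u(x_i)u(x_i)^T and B_i = ∂_x u(x_i) Λ_{x_i} ∂_x u(x_i)^T; treat the exact forms as assumptions of the sketch rather than a restatement of the text. The finite-difference Jacobian is an implementation convenience, not part of the formulation.

```python
import numpy as np

def numerical_jacobian(carriers, x, eps=1e-6):
    """Finite-difference approximation of the l x k matrix of partial derivatives of u at x."""
    x = np.asarray(x, dtype=float)
    u0 = carriers(x)
    J = np.zeros((u0.size, x.size))
    for j in range(x.size):
        dx = np.zeros_like(x)
        dx[j] = eps
        J[:, j] = (carriers(x + dx) - u0) / eps
    return J

def aml_terms(data_points, covariances, carriers):
    """Per-datum matrices A_i = u u^T and B_i = (du/dx) Lambda (du/dx)^T."""
    A, B = [], []
    for x, cov in zip(data_points, covariances):
        u = carriers(np.asarray(x, dtype=float))
        J = numerical_jacobian(carriers, x)
        A.append(np.outer(u, u))
        B.append(J @ cov @ J.T)
    return A, B

def j_aml(theta, A, B):
    """Approximated maximum likelihood cost: a sum of quotients of quadratic forms."""
    return sum((theta @ Ai @ theta) / (theta @ Bi @ theta) for Ai, Bi in zip(A, B))
```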
3.3 Variational Equation
Since θ̂_AML is a minimiser of J_AML, we have that, when θ = θ̂_AML, the following
equation holds:
[∂_θ J_AML(θ; x_1, …, x_n)]^T = 0.                                    (7)
Here ∂_θ J_AML denotes the row vector of the partial derivatives of J_AML with respect to
θ. We term this the variational equation. Direct computation shows that
[∂_θ J_AML(θ)]^T = 2 X_θ θ,
where X_θ is the symmetric matrix
X_θ = Σ_{i=1}^n [ A_i / (θ^T B_i θ) − ((θ^T A_i θ) / (θ^T B_i θ)^2) B_i ].    (8)
Thus (7) can be written as
X_θ θ = 0.                                                            (9)
This is a non-linear equation and is unlikely to admit solutions in closed form.
Obviously, not every solution of the variational equation is a point at which the
global minimum of JAML is attained. However, the solution set of the equation provides
a severely restricted family of candidates for the global minimiser. Within this
set, the minimiser is much easier to identify.
4 Numerical schemes
Closed-form solutions of the variational equation may be infeasible, so in practice
b AML has to be found numerically. Throughout we shall assume that b AML lies close
to b ALS . This assumption is to increase the chances that any candidate minimiser obtained
via a numerical method seeded with b ALS coincides with b AML .
4.1 Fundamental Numerical Scheme
A vector θ satisfies (9) if and only if it falls into the null space of the matrix X_θ.
Thus, if θ_{k-1} is a tentative guess, then an improved guess can be obtained by picking
a vector θ_k from that eigenspace of X_{θ_{k-1}} which most closely approximates the null
space of X_{θ_k}; this eigenspace is, of course, the one corresponding to the eigenvalue
closest to zero. It can be proved that as soon as the sequence of updates converges,
the limit is a solution of (9) [4]. The fundamental numerical scheme implementing the
above idea is presented in Figure 1. The algorithm can be regarded as a variant of the
Newton-Raphson method.
1. Set θ_0 = θ̂_ALS and k = 1.
2. Assuming that θ_{k-1} is known, compute the matrix X_{θ_{k-1}}.
3. Compute a normalised eigenvector of X_{θ_{k-1}} corresponding to
   the eigenvalue closest to zero and take this eigenvector for θ_k.
4. If θ_k is sufficiently close to θ_{k-1}, then terminate the procedure;
   otherwise increment k and return to Step 2.

Figure 1: Fundamental numerical scheme.
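A direct transcription of Figure 1 into Python might look as follows, given the per-datum matrices A_i and B_i (for instance as produced by the sketch in Section 3.2). The expression used for X_θ follows (8) above; it is a minimal sketch, not the authors' implementation.

```python
import numpy as np

def x_matrix(theta, A, B):
    """Symmetric matrix X_theta whose null vector characterises the AML minimiser."""
    X = np.zeros_like(A[0])
    for Ai, Bi in zip(A, B):
        denom = theta @ Bi @ theta
        X += Ai / denom - ((theta @ Ai @ theta) / denom ** 2) * Bi
    return X

def fns(theta0, A, B, max_iter=50, tol=1e-10):
    """Fundamental numerical scheme: iterate eigenvectors of X_theta closest to zero."""
    theta = theta0 / np.linalg.norm(theta0)
    for _ in range(max_iter):
        eigvals, eigvecs = np.linalg.eigh(x_matrix(theta, A, B))
        new_theta = eigvecs[:, np.argmin(np.abs(eigvals))]   # eigenvalue closest to zero
        if new_theta @ theta < 0:                            # resolve the sign ambiguity
            new_theta = -new_theta
        if np.linalg.norm(new_theta - theta) < tol:
            return new_theta
        theta = new_theta
    return theta
```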
4.2 Sampson's Scheme
Let
M_θ = Σ_{i=1}^n A_i / (θ^T B_i θ)                                     (10)
and let
J'_AML(θ; η) = Σ_{i=1}^n (θ^T A_i θ) / (η^T B_i η)
be the modification of J_AML(θ; x_1, …, x_n) in which the variable in the denominators
of all the contributing fractions is "frozen" at the value η. For simplicity, we
abbreviate J'_AML(θ; η) to J'_AML. Sampson was the first to propose a scheme aiming to minimise a function
involving fractional expressions, such as J_AML (although his cost functions did not
incorporate covariance matrices). Sampson's scheme (SMP) applied to J_AML takes
θ̂_ALS for an initial guess θ_0, and given θ_{k-1} generates an update θ_k by minimising
the cost function θ ↦ J'_AML(θ; θ_{k-1}). Assuming that the sequence {θ_k} converges,
the Sampson estimate is defined as θ̂_SMP = lim_{k→∞} θ_k. Note that each function
θ ↦ J'_AML(θ; θ_{k-1}) is quadratic in θ. Finding a minimiser of such a function is straightforward.
The minimiser θ_k is an eigenvector of M_{θ_{k-1}} corresponding to the smallest
eigenvalue; moreover, this eigenvalue is equal to J'_AML(θ_k; θ_{k-1}). The
scheme is summarised in Figure 2.
1. Set θ_0 = θ̂_ALS and k = 1.
2. Assuming that θ_{k-1} is known, compute the matrix M_{θ_{k-1}}.
3. Compute a normalised eigenvector of M_{θ_{k-1}} corresponding to
   the smallest (non-negative) eigenvalue and take this eigenvector for θ_k.
4. If θ_k is sufficiently close to θ_{k-1}, then terminate the procedure;
   otherwise increment k and return to Step 2.

Figure 2: Sampson's scheme.
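For comparison, Sampson's scheme differs only in that the update is the eigenvector of M_θ associated with its smallest eigenvalue; a sketch under the same assumptions as the previous one:

```python
import numpy as np

def m_matrix(theta, A, B):
    """M_theta: the A_i weighted by the current denominators theta^T B_i theta."""
    return sum(Ai / (theta @ Bi @ theta) for Ai, Bi in zip(A, B))

def sampson(theta0, A, B, max_iter=50, tol=1e-10):
    """Sampson's scheme: iterate eigenvectors of M_theta for the smallest eigenvalue."""
    theta = theta0 / np.linalg.norm(theta0)
    for _ in range(max_iter):
        eigvals, eigvecs = np.linalg.eigh(m_matrix(theta, A, B))
        new_theta = eigvecs[:, 0]                 # smallest (non-negative) eigenvalue
        if new_theta @ theta < 0:
            new_theta = -new_theta
        if np.linalg.norm(new_theta - theta) < tol:
            return new_theta
        theta = new_theta
    return theta
```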
A quick glance shows that this scheme differs from the fundamental numerical
scheme only in that it uses matrices of the form M_θ instead of matrices of the form
X_θ. These two types of matrix are related by the formula
X_θ = M_θ − Σ_{i=1}^n ((θ^T A_i θ) / (θ^T B_i θ)^2) B_i.
Letting k → ∞ in (11) and taking into account
the equality J'_AML(θ; θ) = J_AML(θ), we see that θ̂_SMP satisfies
[M_θ − J_AML(θ) I_l] θ = 0,                                           (12)
where I_l is the l × l identity matrix. We call this the Sampson equation. Note that it is
different from the variational equation (9) and that, as a result, θ̂_SMP is not a genuine
minimiser of J_AML.
5 Renormalisation
The matrices X_θ and M_θ − J_AML(θ) I_l underlying the variational equation (9)
and the Sampson equation (12) can be viewed as modified or normalised forms
of M_θ. As first realised by Kanatani [12], a different type of modification can be
proposed based on statistical considerations. The requirement is that the modified or
renormalised M be unbiased in some sense. Using the renormalised M , one can
formulate an equation analogous to both the variational and Sampson equations. This
equation can in turn be used to define an estimate of .
Here we rationalise the unbiasing procedure under the condition that noise in an
appropriate statistical model is small. In the next section, various schemes will be presented
for numerically computing the parameter estimate defined in this procedure. A
later section will be devoted to the derivation of an unbiasing procedure appropriate for
noise that is not necessarily small, and to the development of schemes for numerically
computing the parameter estimate defined in this more general procedure.
5.1 Unbiasing M
Regard the given data value taken on by the random vectors
introduced earlier. Suppose that
Form the following random version of M
with 'true' value
In view of (4), A(x i
0: On the other hand, since each rank-one matrix A(x i ) is non-negative definite, and
since also each B(x
) is non-negative definite 1 , M is non-negative definite. As
the A(x i ) are independent, M is generically positive definite, with E
0:
Thus on average T M does not attain its 'true' value of zero, and as such is biased.
The bias can be removed by forming the matrix
The terms E
can be calculated explicitly. There is a matrix-valued function
to be specified later, such that, for each
As
Thus B(x; ) is not positive definite, but merely non-negative definite.
The unbiased M can be written as
The random matrix Y is a raw model for obtaining a fully deterministic modification
of M . For each
Guided by (14), we take
for a modified M . Somewhat surprisingly, this choice turns out not to be satisfactory.
The problem is that while the A i do not change when the x i
are multiplied by a common
scalar, the D i do change. A properly designed algorithm employing a modified
should give as outcomes values that remain intact when all the x i
are multiplied
by a common scalar. This is especially important if we aim not only to estimate the
parameter, but also to evaluate the goodness of fit. Therefore further change to the
numerators of the fractions forming Y is necessary.
The dependence of D(x;) on is fairly complex. To gain an idea of what
needs to be changed, it is instructive to consider a simplified form of D(x;). A
first-order (in some sense) approximation to D(x;) is, as will be shown shortly, the
defined in (6). The dependence of B(x; ) on is simple: if is
multiplied by a scalar, then B(x;) is multiplied by the same scalar. This suggests
we introduce a compensating factor J com (; x com () in short, with the
property that if the x i
are multiplied by a scalar, then J com () is multiplied by the
inverse of this scalar. With the help of J com (), we can form, for each
renormalised numerator T A i J com () T B i and can next set
where M is given in (10) and N is defined by
The numerators in (16) are clearly scale invariant. Note that J com () plays a role
similar to that played by the factors ( T A i )=( the formula for X given
in (8). Indeed, if the x i
are multiplied by , then so are the B i , and consequently
the are multiplied by 1 . The main difference between J com ()
and the ( T A i )=( latter fractions change with the index i, while
J com () is common for all the numerators involved.
To find a proper expression for J com (), we take a look at X for inspiration. Note
that, on account of (8), T X 0: By analogy, we demand that T Y 0: This
equation together with (16) implies that
T N
=n
It is obvious that J com () thus defined has the property required of a compensating fac-
tor. Moreover, this form of J com () is in accordance with the unbiasedness paradigm.
Indeed, if we form the random version of J com
insofar as E
abbreviating J com (; x
to J com (),
and further
0:
We see that Y given by
is unbiased, which justifies the design of Y .
Since, in view of (19), J com () is equal to 1 in the mean, the difference between
and
is blurred on average. Thus the refined renormalisation based on (16) is close in spirit
to our original normalisation based on (15).
5.2 Renormalisation Equation
The renormalisation equation
is an analogue of the variational and Sampson equations alike. It is not naturally derived
from any specific cost function, and, as a result, it is not clear whether it has
any solution at all. A general belief is that in the close vicinity of b ALS there is a
solution and only one. This solution is termed the renormalisation estimate and is denoted
REN . Since the renormalisation equation is different from the variational and
distinct both from b AML and b SMP . It should be stressed
that the difference between b REN and b AML may be unimportant as both these estimates
can be regarded as first-order approximations to b ML and hence are likely to be
statistically equivalent.
In practice, b REN is represented as the limit of a sequence of successive approximations
to what b REN should be. If, under favourable conditions, the sequence is
convergent, then the limit is a genuine solution of (20). Various sequences can be taken
to calculate b REN in this way. The simplest choice results from mimicking the fundamental
numerical scheme as follows. Take b ALS to be an initial guess 0 . Suppose that
an update k 1 has already been generated. Form Y k 1
, compute an eigenvector of
corresponding to the eigenvalue closest to zero, and take this eigenvector for k .
If the sequence f k g converges, take the limit for b REN . As we shall see shortly, b REN
thus defined automatically satisfies (20). This recipe for calculating b REN constitutes
what we term the renormalisation algorithm.
6 First-Order Renormalisation Schemes
First-order renormalisation is based on the formula
as already pointed out in Subsection 5.1. To justify this formula, we retain the sequence
of independent random vectors as a model for our data
assume that distribution P for some
make a fundamental assumption to the effect that the noise driving each x i is small.
For simplicity, denote x i by x, and contract x i
to . Since the noise driving x is
small, we can replace u(x) by the first-order sum in the Taylor expansion about x,
next, taking into account that T 0, we can write
Hence
and further, in view of (5) and (6),
which, on account of (13), establishes (21).
With the formula (21) validated, we can safely use Y in the form given in (16)
(with J com given in (18)). The respective renormalisation estimate will be called the
first-order renormalisation estimate and will be denoted b REN1 .
6.1 The FORI Scheme
By introducing an appropriate stopping rule, the renormalisation algorithm can readily
be adapted to suit practical calculation. In the case of first-order renormalisation, the
resulting method will be termed the first-order renormalisation scheme, Version I, or
simply the FORI scheme. It is given in Figure (3).
1. Set θ_0 = θ̂_ALS and k = 1.
2. Assuming that θ_{k-1} is known, compute the matrix Y_{θ_{k-1}} using (16).
3. Compute a normalised eigenvector of Y_{θ_{k-1}} corresponding to the
   eigenvalue closest to zero and take this eigenvector for θ_k.
4. If θ_k is sufficiently close to θ_{k-1}, then terminate the procedure;
   otherwise increment k and return to Step 2.

Figure 3: First-order renormalisation scheme, Version I.
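The sketch below assumes N_θ = Σ_i B_i / (θ^T B_i θ), a choice consistent with the surrounding discussion but not confirmed by the text, so that J_com(θ) = θ^T M_θ θ / θ^T N_θ θ makes θ^T Y_θ θ vanish as demanded above. With that assumption, Version I of the first-order renormalisation scheme reads:

```python
import numpy as np

def renorm_first_order(theta0, A, B, max_iter=50, tol=1e-10):
    """FORI sketch: iterate eigenvectors (closest to zero) of
    Y_theta = M_theta - J_com(theta) * N_theta, under the assumed N_theta."""
    theta = theta0 / np.linalg.norm(theta0)
    for _ in range(max_iter):
        M = sum(Ai / (theta @ Bi @ theta) for Ai, Bi in zip(A, B))
        N = sum(Bi / (theta @ Bi @ theta) for Bi in B)       # assumed form of N_theta
        j_com = (theta @ M @ theta) / (theta @ N @ theta)    # enforces theta^T Y theta = 0
        Y = M - j_com * N
        eigvals, eigvecs = np.linalg.eigh(Y)
        new_theta = eigvecs[:, np.argmin(np.abs(eigvals))]
        if new_theta @ theta < 0:
            new_theta = -new_theta
        if np.linalg.norm(new_theta - theta) < tol:
            return new_theta
        theta = new_theta
    return theta
```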
6.2 The FORII Scheme
The FORI scheme can be slightly modified. The resulting first-order renormalisation
scheme, Version II, or the FORII scheme, is effectively one of two schemes proposed
by Kanatani [12, Chap. 12] (the other concerns second-order renormalisation).
Introduce a function
com
Abbreviating J 0
com
com (; ), let
com (; )B i
com (; )N :
It is immediately verified that
for each and each 6= 0. We also have that
com
and
As previously, take b ALS to be an initial guess 0 . Suppose that an update k 1 has
already been generated. Then
be a normalised eigenvector of Y k 1
cor-
responding to the smallest eigenvalue k . In view of (25), the update Y k
is straight-forwardly
generated from the updates M k
, and c k . It turns out that, under a
certain approximation, c k can be updated directly from c k 1 rather than by appealing
to the above formula.
We have
Substituting k for and k 1 for in (22), we obtain
The last two equations imply that
Now, assume that J 0
since both terms are close to
J com ( k ), this is a realistic assumption. Under this assumption we have c
and equation (26) becomes
This is a formula for successive updating of the c k . Defining consecutive Y k with
the help of M k
and c k as in (25), and proceeding as in the FORI scheme, we
obtain a sequence f k g. If it converges, we take the corresponding limit for b REN1 . It
can be shown that b REN1 thus defined satisfies the renormalisation equation. The first-order
renormalisation scheme, Version II, or the FORII scheme, based on the above
algorithm is summarised in Figure (4).
6.3 The FORIII Scheme
With the help of the function J 0
com , yet another defining sequence can be constructed.
Take b ALS for an initial guess 0 . Suppose that an update k 1 has already been
generated. Define k to be the minimiser of the function 7! J 0
com (; k 1
6=0
com (; k 1
Since
com
ALS and c
2. Assuming that k 1 and c k 1 are known, compute the matrix
3. Compute a normalised eigenvector of M k 1
cor-
responding to the smallest eigenvalue k and take this eigenvector
for k . Then define c k by (27).
4. If k is sufficiently close to k 1 , then terminate the procedure;
otherwise increment k and return to Step 2.
Figure
4: First-order renormalisation scheme, Version II.
where
T N
com (; )N
T N
it follows that k satisfies
Assuming that the sequence f k g converges, let b
clearly,
b REN1 satisfies
which is an equation equivalent to (20). Note that in this method b REN1 is defined as
the limit of a sequence of minimisers of cost functions. As such the algorithm is similar
to Sampson's algorithm, but the latter, of course, uses different cost functions.
The minimisers k can be directly calculated. To see this, rewrite (28) as
com
We see that k is an eigenvector and J 0
com (; k 1 ) is a corresponding eigenvalue of
the linear pencil P defined by
( a real
If is any eigenvector of P k 1 with eigenvector
necessarily, J 0
com
com
conclude that J 0
is an eigenvector of P k 1 corresponding
.
2. Assuming that k 1 is known, compute the matrices M k 1
and N k 1
3. Compute a normalised eigenvector of the eigenvalue problem
corresponding to the smallest eigenvalue and take this eigenvector for k .
4. If k is sufficiently close to k 1 , then terminate the procedure; otherwise
increment k and return to Step 2.
Figure
5: First-order renormalisation scheme, Version III.
to the smallest eigenvalue. This observation leads to the first-order renormalisation,
Version III, or the FORIII scheme, given in Figure 5. The matrices N k 1
are singular,
so the eigenvalue problem for P k 1 is degenerate. A way of reducing this problem
to a non-degenerate one, based on the special form of the matrices M and N , is
presented in [4].
7 Second-Order Renormalisation
Second-order renormalisation rests on knowledge of the exact form of D(x). Here
we first determine this form and next use it to evolve a second-order renormalisation
estimate and various schemes for calculating it.
7.1 Calculating D(x;)
Determining the form of D(x;) is tedious but straightforward. We commence by
introducing some notation.
be the vector of variables. Append to this vector a unital
component, yielding
be the vector of carriers. Given the special form of
u(x) as described in Section 2, each u
can be expressed as
where K
In what follows we adopt
Einstein's convention according to which the summation sign for repeated indices is
omitted. With this convention, equation (29) becomes
Let x be a random Gaussian k vector with mean x and covariance matrix
Clearly, y has
for the mean, and the defined by
for the covariance matrix.
Since
we have
Now
By a standard result about moments of the multivariate normal distribution,
In view of (31),
and so
Hence
be the l l matrices defined by
d
d 2;
With these matrices, (32) can be written as
Hence
which is the desired formula.
7.2 Redefining J com () and Y
We retain the framework of Subsection 5.1, but use the full expression for D(x;) instead
of the first-order approximation. We aim to modify, for each
T A i into a term similar to T A i T D i (recall that D
)),
remembering the need for suitable compensation for scale change. The main problem
now is that D(x;) does not change equivariantly with . Under the scale change
7! , the two components of D(x;) defined in (34) undergo two different
transformations:
a solution as follows. We introduce a compensating factor J com (; x
J com () in short, with the property that if the x i
are multiplied by , then J com () is
multiplied by 1 . We place this factor in front of D
its square in
front of D
), and form a modified numerator as follows:
This numerator is obviously invariant with respect to scale change. In analogy to (17),
we introduce
and, in analogy to (16), let
Demanding again that T Y we obtain the following quadratic equation for
J com ():
This equation has two solutions
com
As M is positive definite, and N 1; and N 2; are non-negative definite, we have
com () 0. Since the compensating factor used in the first-order
renormalisation is non-negative, we take, by analogy, J
com () to be a compensating
factor and denote it by J com (); thus
J com
Multiplying both numerator and denominator of J com by [( T N
we see that
J com
If T N 2; T M is small compared to ( T N then we may readily infer
that
J com ()
T N
This expression is very similar to formula (18) for J com (), which indicates that the
solution adopted is consistent with the first-order renormalisation.
Inserting J com () given in (38) into (37), we obtain a well-defined expression for
Y . We can now use it to define a renormalisation estimate using the renormalisation
equation (20). We call this estimate the second-order renormalisation estimate and
denote it b
REN2 .
7.3 The SORI Scheme
Mimicking the FORI scheme, we can readily advance a scheme for numerically finding
b REN2 . We call this the second-order renormalisation scheme, Version I, or the SORI
scheme. Its steps are given in Figure (6).
1. Set
2. Assuming that k 1 is known, compute the matrix Y k 1
using (37) and
(38).
3. Compute a normalised eigenvector of Y k 1
corresponding to the eigenvalue
closest to zero and take this eigenvector for k .
4. If k is sufficiently close to k 1 , then terminate the procedure; otherwise
increment k and return to Step 2.
Figure
renormalisation scheme, Version I.
7.4 The SORII Scheme
The SORI scheme can be modified in a similar way to that employed with the FORI
scheme. The resulting second-order renormalisation scheme, Version II, or the SORII
scheme, is effectively the second of the two schemes originally proposed by Kanatani.
Introduce
com
Abbreviating J 0
com
com (; ), let
com
It is immediately verified that equations (22) and (24) hold, as does J com
com (; );
the counterpart of (23).
Again, take b ALS to be an initial guess 0 . Suppose that an update k 1 has already
been generated. Note that
where
Let k be a normalised eigenvector of Y k 1
corresponding to the smallest eigenvalue
k . We intend to find an update c k appealing directly to c k 1 . To this end, observe that
Substituting k for and k 1 for in (22), taking into account (41), assuming
explained analogously when deriving the FORII
scheme), and taking into account that, by (43), J 0
Combining this equation with (44) yields
Taking into account that c 2
we can rewrite (45) as the quadratic constraint on c k 1 given by
Let
Equation (46) has two solutions
which are real when D k 1 0.
Suppose that D k 1 0. If c k were directly defined by (43), it would be non-
negative. It is therefore reasonable to insist that c k obtained by updating c k 1 also be
non-negative. This requirement can be met by setting c next by ensuring that
k. For this reason, we select 1 to be c k 1 , obtaining
To treat the case D k 1 < 0, we first multiply the numerator and denominator of the
fractional expression in (48) by
obtaining
Next we note that if k
k is small compared to k
and further
This formula is very similar to (27). We use it with the equality sign instead of the
approximation sign to generate c k in the case D k 1 < 0.
In this way, we arrive at the following update formula:
The SORII scheme can now be formulated as in Figure (7).
1. Set
2. Assuming that k 1 and c k 1 are known, compute the matrix Y k 1 ;k 1
by using (42).
3. Compute a normalised eigenvector of Y k 1 ;k 1
corresponding to the
smallest eigenvalue k and take this eigenvector for k . Then define c k by
using (49).
4. If k is sufficiently close to k 1 , then terminate the procedure; otherwise
increment k and return to Step 2.
Figure
7: Second-order renormalisation scheme, Version II.
7.5 The SORIII Scheme
The estimate b REN2 can be represented as a limit of a sequence of minimisers of
cost functions as follows. Take b ALS for an initial guess 0 . Suppose that an up-date
has already been generated. Define k to be the minimiser of the function
com (; k 1
6=0
com (; k 1
Assuming that the sequence f k g converges, take lim k!1 k for b REN2 . It can readily
be shown that b REN2 thus defined satisfies Y It also can be shown that each
com
com
Here k is an eigenvector and J 0
corresponding eigenvalue of the
defined by
( a real
In fact, k is an eigenvector of P k 1 corresponding to the smallest eigenvalue. This observation
leads to the second-order renormalisation scheme, Version III, or the SORIII
scheme, given in Figure (8).
1. Set
2. Assuming that k 1 is known, compute the matrices M k 1
and N 2;k 1
3. Compute a normalised eigenvector of the eigenvalue problem
corresponding to the smallest eigenvalue and take this eigenvector for k .
4. If k is sufficiently close to k 1 , then terminate the procedure; otherwise
increment k and return to Step 2.
Figure
8: Second-order renormalisation scheme, Version III.
The eigenvalue problem for a quadratic pencil can readily be reduced to the eigenvalue
problem for a linear pencil. Indeed, and satisfy
if and only if there exists 0 such that
I l
I l 0
in which case, necessarily, . The matrices N 1 and N 2 appropriate for the
SORIII scheme are non-negative definite, but not necessarily positive definite. A way
of reducing the problem (51) to a similar problem involving positive definite matrices
is given in [4]. This method takes advantage of the special form of the
matrices M , N 1; , and N 2; .
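For reference, a standard way of carrying out such a reduction is the companion linearisation of a quadratic pencil; the block arrangement shown below is the generic form and is not necessarily the one used in [4]:

```latex
\left( M_\theta - \lambda N_{1,\theta} - \lambda^2 N_{2,\theta} \right)\xi = 0
\quad\Longleftrightarrow\quad
\begin{bmatrix} M_\theta & 0 \\ 0 & I_l \end{bmatrix}
\begin{bmatrix} \xi \\ \lambda\xi \end{bmatrix}
= \lambda
\begin{bmatrix} N_{1,\theta} & N_{2,\theta} \\ I_l & 0 \end{bmatrix}
\begin{bmatrix} \xi \\ \lambda\xi \end{bmatrix}.
```

The first block row reproduces the quadratic pencil equation and the second merely defines the auxiliary variable λξ, so the quadratic eigenproblem of size l becomes a generalised linear eigenproblem of size 2l.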
8 Experimental Results
The previously derived algorithms were tested on the problem of conic fitting, which
constitutes a classical benchmark problem in the literature [2, 5, 7, 8, 10,11, 15-19, 23].
Specifically, the fitting algorithms were applied to contaminated data arising from a
portion of an ellipse.
Synthetic testing is employed here as this enables precise control of the nature of
the data and their associated uncertainties. Results obtained in real world testing, in
applications domains described earlier, are presented in subsequent work.
Our tests proceeded as follows. A randomly oriented ellipse was generated such
that the ratio of its major to minor axes was in the range [2; 3], and its major axis was
approximately 200 pixels in length. One third of the ellipse's boundary was chosen as
the base curve, and this included the point of maximum curvature of the ellipse. A set
of true points was then randomly selected from a distribution uniform along the length
of the base curve.
For each of the true points, a covariance matrix was randomly generated (using
a method described below) in accordance with some chosen average level of noise,
. The true points were then perturbed randomly in accordance with their associated
covariance matrices, yielding the data points. In general, the noise conformed to an
inhomogeneous and anisotropic distribution. Figure 9 shows a large ellipse, some selected
true points, a small ellipse for each of these points and the data points. Each
of the smaller ellipses represents a level set of the probability density function used
to generate the datum, and as such captures graphically the nature of the uncertainty
described by its covariance matrix.
Figure
9: True ellipse, data, and associated covariance ellipses
The following procedure was adopted for generating covariance matrices associated
with image points, prescribing (anisotropic and inhomogeneous) noise at a given
average level σ. The scale of a particular covariance matrix was first selected from
a uniform distribution in the range [0, 2σ]. (Similar results were obtained using other
distributions about σ.) Next, a skew parameter was generated from a uniform distribution
between 0 and 0.5. An intermediate covariance matrix Λ_int was then formed from the
scale and skew parameters. This matrix was then 'rotated' by a randomly selected angle
ϕ to generate the final covariance matrix Λ = O_ϕ Λ_int O_ϕ^T, with
O_ϕ = [ cos ϕ  −sin ϕ ; sin ϕ  cos ϕ ].
Let Tr A denote the trace of the matrix A. Since the trace is invariant under this rotation, it
is clear that the above procedure ensures that the expected value of Tr Λ is governed by the average noise level σ.
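A sketch of the covariance-generating procedure follows; the diagonal form of the intermediate matrix and the [0, 2π) angle range are assumptions of the sketch rather than details stated by the text.

```python
import numpy as np

def random_covariance(sigma, rng):
    """One anisotropic, inhomogeneous covariance matrix at average noise level sigma.
    The diagonal intermediate matrix and the angle range are assumptions."""
    scale = rng.uniform(0.0, 2.0 * sigma)        # 'scale' drawn around the average level
    skew = rng.uniform(0.0, 0.5)                 # 'skew' parameter
    intermediate = np.diag([scale, scale * skew])
    angle = rng.uniform(0.0, 2.0 * np.pi)        # random orientation of the principal axes
    c, s = np.cos(angle), np.sin(angle)
    O = np.array([[c, -s], [s, c]])
    return O @ intermediate @ O.T                # the trace is preserved by the rotation

rng = np.random.default_rng(0)
covariances = [random_covariance(sigma=3.0, rng=rng) for _ in range(10)]
```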
With the data points and their associated covariances prepared, each method under
test was then challenged to determine the coefficients of the best fitting conic. (Note,
therefore, that it was not assumed that the conic was an ellipse.) The methods were
supplied with the data points, and if a specific method was able to utilise uncertainty
information, it was also supplied with the data points' covariance matrices. Then estimates
were generated, and for each of these a measure of the error was computed
using a recipe given below. Testing was repeated many times using newly generated
data points (with the covariance matrices and true data points remaining intact). The
average errors were then displayed for each method.
The error measure employed was as follows. Assume that a particular method
has estimated an ellipse. The error in this estimate was declared to be the sum of the
shortest (Euclidean) distances of each true point from the estimated ellipse. Note that
this measure takes advantage of the fact that the underlying true points are known. Were
these unknown, an alternative measure might be the sum of the Mahalanobis distances
from the data points to the estimated ellipses.
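A sketch of the error measure: for each true point, the shortest Euclidean distance to the estimated ellipse is approximated by densely sampling the ellipse boundary. The centre/axes/orientation representation of the estimated ellipse and the sampling density are implementation assumptions; an exact point-to-ellipse distance routine could be substituted.

```python
import numpy as np

def ellipse_points(center, axes, angle, num=2000):
    """Densely sample an ellipse given its centre, semi-axes and orientation."""
    t = np.linspace(0.0, 2.0 * np.pi, num, endpoint=False)
    c, s = np.cos(angle), np.sin(angle)
    x = axes[0] * np.cos(t)
    y = axes[1] * np.sin(t)
    return np.column_stack([center[0] + c * x - s * y,
                            center[1] + s * x + c * y])

def fit_error(true_points, center, axes, angle):
    """Sum over the true points of the (approximate) shortest distance to the ellipse."""
    boundary = ellipse_points(center, axes, angle)
    total = 0.0
    for p in np.asarray(true_points, dtype=float):
        total += np.min(np.linalg.norm(boundary - p, axis=1))
    return total
```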
The methods tested were as follows:
ALS: the algebraic least squares scheme,
SMP: Sampson's scheme,
FORI: first-order renormalisation scheme, Version I,
FORII: first-order renormalisation scheme, Version II,
FORIII: first-order renormalisation scheme, Version III,
SORI: second-order renormalisation scheme, Version I,
SORII: second-order renormalisation scheme, Version II,
SORIII: second-order renormalisation scheme, Version III,
FNS: the fundamental numerical scheme.
Table 1 shows the average error obtained when each method was applied to 500
sets of data points, with σ varying from 1 to 10 pixels in steps of 1. Each set of data
points was obtained by perturbing the true points in accordance with their covariance
matrices. Figure 10 shows the tabular
data in graphical form. The algebraic least squares method performs worst while some
of the renormalisation schemes and the fundamental numerical scheme perform best.
The SMP method is systematically deficient, generating average errors up to 22%
greater than the best methods. The SORI and SORII schemes are similarly deficient;
however, they are best seen as incremental developments leading to SORIII. Finally,
the FORI, FORII, FORIII and SORIII schemes are seen to trail FNS only very slightly.
NOISE LEVEL ALS    SMP    FOR I  FOR II FOR III SOR I  SOR II SOR III FNS
1.0         2.710  1.093  1.072  1.072  1.071   1.093  1.093  1.071   1.075
2.0         5.579  2.078  1.990  1.987  1.987   2.078  2.078  1.987   1.976
3.0         8.340  3.169  3.077  3.067  3.067   3.169  3.169  3.067   3.049
5.0         15.091 5.662  5.129  5.092  5.092   5.655  5.661  5.092   5.054
8.0         26.036 9.294  8.115  8.037  8.037   9.254  9.288  8.037   7.966
9.0         31.906 10.791 9.036  8.948  8.948   10.748 10.776 8.950   8.827
Table 1: Error results obtained for all methods
Figure 10: Error results against average noise level depicted graphically (average error in pixels versus
average noise level in pixels). ALS refers to the algebraic least squares method (with some errors out of
range); see Table 1 for the tabulated results.
9 Conclusion
The statistical approach to parameter estimation problems of Kenichi Kanatani occupies
an important place within the computer vision literature. However, a critical component
of this work, the so-called renormalisation method, concerned with minimising
particular cost functions, has proven difficult for the vision community to absorb. Our
major aim in this paper has been to clarify a number of issues relating to this renormalisation
method.
For a relatively general problem form, encompassing many vision problems, we
first derived a practical cost function for which claims of optimality may be advanced.
We then showed that a Sampson-like method of minimisation generates estimates which
are statistically biased. Renormalisation was rationalised as an approach to undoing
this bias, and we generated several novel variations on the theme.
Pivotal in the establishing of a framework for comparing selected iterative minimisation
schemes was the devising of what we called the fundamental numerical scheme.
It emerges that this scheme is not only considerably simpler to derive and implement
than its renormalisation-based counterparts, but it also exhibits marginally superior
performance.
Acknowledgements
The authors are grateful for the insightful comments of Marino Ivancic, Kenichi Kanatani,
Garry Newsam, Naoya Ohta and Robyn Owens. In addition, the authors would like to
thank two anonymous referees for providing suggestions that led to improvements in
the presentation of the paper. This work was in part funded by the Australian Research
Council and the Cooperative Research Centre for Sensor Signal and Information Processing
--R
Statistical Analysis of Measurement Error Models and Applications (Arcata
Fitting conic sections to scattered data
Determining the egomotion of an uncalibrated camera from instantaneous optical flow
Image and Vision Computing 10
Direct least square fitting of ellipses
A buyer's guide to conic fitting
Measurement
Statistical bias of conic fitting and renormalisation
Heteroscedastic regression in computer vision: problems with bilinear constraint
The role of total least squares in motion analysis
Fitting ellipses and predicting confidence envelopes using a bias corrected kalman filter
A note on the least square fitting of ellipses
Nonparametric segmentation of curves into various representations
Fitting conics sections to 'very scattered' data: An iterative refinement of the Bookstein algorithm
Estimation of planar curves
A new approach to geometric fitting
a tutorial with application to conic fitting
--TR
Measurement error models
Fitting ellipses and predicting confidence envelopes using a bias corrected Kalman filter
Estimation of Planar Curves, Surfaces, and Nonplanar Space Curves Defined by Implicit Equations with Applications to Edge and Range Image Segmentation
Ellipse detection and matching with uncertainty
Three-dimensional computer vision
A note on the least squares fitting of ellipses
A buyer''s guide to conic fitting
The Development and Comparison of Robust Methods for Estimating the Fundamental Matrix
On the Optimization Criteria Used in Two-View Motion Analysis
Direct Least Square Fitting of Ellipses
Heteroscedastic Regression in Computer Vision
Statistical Optimization for Geometric Computation
Statistical Bias of Conic Fitting and Renormalization
Nonparametric Segmentation of Curves into Various Representations
The Role of Total Least Squares in Motion Analysis
Optimal Estimation of Matching Constraints
Motion analysis with a camera with unknown, and possibly varying intrinsic parameters
--CTR
Wojciech Chojnacki , Michael J. Brooks, On the Consistency of the Normalized Eight-Point Algorithm, Journal of Mathematical Imaging and Vision, v.28 n.1, p.19-27, May 2007
N. Chernov, On the Convergence of Fitting Algorithms in Computer Vision, Journal of Mathematical Imaging and Vision, v.27 n.3, p.231-239, April 2007
N. Chernov , C. Lesort , N. Simnyi, On the complexity of curve fitting algorithms, Journal of Complexity, v.20 n.4, p.484-492, August 2004
Wojciech Chojnacki , Michael J. Brooks , Anton van den Hengel , Darren Gawley, On the Fitting of Surfaces to Data with Covariances, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.22 n.11, p.1294-1303, November 2000
N. Chernov , C. Lesort, Least Squares Fitting of Circles, Journal of Mathematical Imaging and Vision, v.23 n.3, p.239-252, November 2005
Wojciech Chojnacki , Michael J. Brooks , Anton van den Hengel , Darren Gawley, Revisiting Hartley's Normalized Eight-Point Algorithm, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.25 n.9, p.1172-1177, September
Wojciech Chojnacki , Michael J. Brooks , Anton van den Hengel , Darren Gawley, From FNS to HEIV: A Link between Two Vision Parameter Estimation Methods, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.26 n.2, p.264-268, January 2004 | maximum likelihood;covariance matrix;surface fitting;conic fitting;renormalisation;statistical methods;fundamental matrix estimation |
373294 | A model of OASIS role-based access control and its support for active security. | OASIS is a role-based access control architecture for achieving secure interoperation of services in an open, distributed environment. Services define roles and implement formally specified policy for role activation and service use; users must present the required credentials, in the specified context, in order to activate a role or invoke a service. Roles are activated for the duration of a session only. In addition, a role is deactivated immediately if any of the conditions of the membership rule associated with its activation becomes false.OASIS does not use role delegation but instead defines the notion of appointment, whereby a user in some role may issue an \actright{} to some other user. The role activation conditions of services may include \actright{}s, prerequisite roles and environmental constraints.We motivate our approach and formalise OASIS. First, a basic model is presented followed by an extended model which includes parameterisation. | INTRODUCTION
The growing interest in role-based access control (RBAC)
as an effective means of replacing traditional discretionary
and mandatory access control has led to the development of
several models over the past few years [20, 8, 16]. Besides
the basic structure of subject, role and privilege, they all
include a notion known as role hierarchy. In its original
interpretation, a senior role in a hierarchy is treated as an
instance of all its junior roles. This means a senior role is
granted all privileges given to junior ones.
Although role hierarchy has become part of the accepted
model, we question whether it is appropriate in real-world
applications. In general a senior role will not need all the
privileges of its junior roles to carry out its work, and this
leads to violation of the principle of least privilege [18]. One
of the main reasons for using RBAC is that it provides a
natural way to model constraints such as separation of duties.
Role hierarchies complicate the specification and enforcement
of these constraints. For example, one kind of
separation of duties constraint is that a subject cannot have
a pair of conflicting privileges at any time. Privileges are
assigned to roles, and the interpretation of this constraint is
that a subject cannot act in a pair of conflicting roles simultaneously.
Suppose that this pair of roles is in a hierarchy,
each being subsidiary to the same senior role. It would then
be impossible to exercise the senior role without breaking
either the constraint or the hierarchy.
There are various proposals to work around these prob-
lems. Sandhu [19] proposes separating the purposes of a
hierarchy into usage and activation. In an activation hierarchy
a user may choose to activate any role that is below
her assigned role in the hierarchy, therefore avoiding having
two con
icting roles active simultaneously. Ferraiolo et
al. [8] choose to override the inheritance relationship whenever
there is tension between hierarchy and con
ict. Moett
[14] suggests employing an ordering other than the organisational
hierarchy to dene a role hierarchy, or to use subsidiary
roles (also known as private roles [20]) outside the
organisational hierarchy. Such solutions are awkward since
they require an organisation to adapt its security policies
in order to avoid the potential problems of role hierarchies.
Moett and Lupu [15] examine some possible uses of role hierarchies
and identify three potential interpretations for role
hierarchies, namely isa hierarchy, activity hierarchy, and supervision
hierarchy. We believe that the use of role hierarchies
arises mainly through the in
uence of object-oriented
modelling and we are not convinced of their utility in practice
Goh questions the concept of role hierarchies from the
point of view of subsidiarity in [10]. He argues that useful
role hierarchies are uncommon in a real organisation where
tasks are assigned to appropriate roles independently of the
authority structure. We believe this view is a step forward
towards a more practical model for role based access control.
In this paper, we present a role based access control model
in which the fundamental role-role relationship, role activation
dependency, is dynamic. Each role activation is governed
by a set of rules, which are specied in logic. Roles
are parameterised, which helps to make policy expression
more scalable. Our model oers several advantages over existing
ones while retaining most of their desirable features.
The advantages include: (1) each role is named by a specic
issuing service, so that it is easy to dene roles and establish
policies for each service independently. (2) role activation is
controlled by rules which need to change only if the underlying
security policies change, so it is possible to deploy policy
separately for each administrative domain. (3) the use
of parameters makes it easy to tailor the model to specic
applications, since state may be read from the environment
when each role is activated. In this way active security based
on tasks or work
ow can be implemented naturally. (4) our
logic-based approach supports formal reasoning about policy
The remainder of this paper is organised as follows. Section
introduces our model informally and relates it to the
literature. Section 3 discusses appointment which replaces
delegation in the model. Section 4 provides a formal description
of the model. We present a simplied model in detail,
and indicate how it can be extended to include parameter-
isation. Section 5 describes a scenario to demonstrate one
application of the model. Section 6 concludes this paper
with a summary.
2. THE OASIS ACCESS CONTROL MODEL
OASIS stands for Open Architecture for Secure Interworking Services. It is designed to facilitate access control in distributed systems. OASIS embodies an open, decentralised approach; for example, roles are defined by services and services may interoperate, recognising one another's roles, according to service-level agreements. We do not envisage a single centralised role administrator. Previous work on OASIS has focused mainly on the architectural issues [11, 3]; [12] discussed engineering issues in large-scale OASIS implementations. It is important to note that OASIS is integrated with an event infrastructure [3]; this allows services protected by OASIS to communicate asynchronously, so that one service can be notified immediately of a change of state at another. This paper addresses the formalisation of the OASIS model, but first we motivate the essential concepts on which it depends.
Central to the OASIS model is the idea of credential-based role activation. The credentials that a user possesses, together with side conditions which depend on the state of the environment, will authorise him or her to activate a number of roles. At any given time, a user will activate a subset of these potential roles in order to carry out some specific task, thus embodying the principle of least privilege in an organisation [18]. The ability to activate and deactivate roles is vital to the support for active security [23], where the context is taken into consideration when an access is requested. The concept of role activation in OASIS is similar to the concept of session in [20], except that a user cannot deactivate at will. Activation of any role in OASIS is explicitly controlled by a role activation rule, and this rule may require that specified preconditions continue to hold while the role remains active: the role membership rule. When a role is activated at a service an event channel is created in association with each membership condition. An event is triggered immediately any such condition becomes false, causing the role to be deactivated. For example, such a trigger may be generated when a timer expires or when a database is updated to remove a user from membership of a group.
A role activation rule specifies the conditions that a user must meet in order to activate a role. The intuition behind this is that roles are usually given to a person provided that he or she has met certain conditions, e.g. being qualified as a physician, being employed by a company, being assigned to a task, being on shift, etc. We model these conditions in three categories, namely: prerequisite roles, appointments, and constraints.
A prerequisite role in the condition for a target role means that a user must have already activated the prerequisite role before he or she can activate the target role. This is a session-based notion. The basis of the concept of prerequisite roles is competency and appropriateness, as pointed out by Sandhu et al. [20].
Appointment occurs when a member of some role grants a credential(1) that will allow some user to activate another role. The context may be an assignment of jobs or tasks. It may also be the passing of an examination, becoming professionally qualified, or becoming employed. An activation rule for a role requiring an appointment will require that the associated credential is presented when activating that role. Note that the appointing role is in no sense delegating; a clerk in a hospital registry will not be medically qualified.
Security policies in real life often involve constraints such as separation of duties. Several types of constraint have been identified and discussed in the literature [20, 13, 21, 17]. In our model, constraints may be associated with role activation rules, see definition 4 in section 4.1; in future work we plan to specify role constraints at the organisational level, for example, "an account clerk cannot simultaneously be a billing clerk". We describe a possible implementation of role constraints in the discussion of negated prerequisite roles following definition 6.
The use of roles allows access control policy to be specified in terms of the privileges of categories of users. This has two advantages: first, there is no need to change policy as staff come and go; second, details of individuals need only be taken into account during role activation. On the other hand, we feel that in many applications it is insufficient to base access control decisions solely on roles and their assigned privileges. This is especially true when context information such as time needs to be considered. Extensions have been proposed to the basic RBAC models in order to support workflow systems [6], team-based systems [22, 24], and, more generally, in the content-based access control model of Giuri and Iglio [9] and the generalised model of Covington et al. [7]. Most of these proposals are specific to their application domains.
(1) For clarity we introduce a new term, appointment certificate, for the credential associated with an appointment. We had earlier referred to auxiliary credential certificates, see [3].
In OASIS, we have extended the role model with parameters, based on first-order logic. Parameters may be included in the rules that cover both role activation and access to an object or service. Parameters may be bound to such items as the time of a role activation, the userid of a file owner, or an attribute of the object that is being accessed. The values that instantiate parameters are therefore context-dependent. Our model is similar to some extent to Giuri and Iglio's model based on role templates [9]. The main difference is that our model uses formal first-order predicate logic as its foundation.
In section 4.1 we present the details of a simplified model using propositional logic. This version omits parameters, but it is identical to the full model in all other essential features. In section 4.2 we outline the extensions necessary to include parameters. We adopt a formal approach using logic because adding parameters to privileges and roles adds a layer of complexity to the model. A logic-based approach helps to reduce errors in security policies by allowing static checks to be performed, for example, for completeness, consistency and reducibility. It also allows formal reasoning about security policies to discover potential errors or conflicts. Furthermore, the use of logic enables our model to be integrated with policies specified in pseudo-natural language. Preliminary, proof-of-concept work in this area can be found in [2].
3. APPOINTMENT
Role delegation is an extension to the conventional delegation of privileges, in which one user grants privileges to another through roles. Recent and significant work in this area has been done by Barka and Sandhu. In [5], a role-based delegation model called RBDM0 is introduced. In RBDM0, if a role is delegated then all the associated privileges are granted. Delegation is limited to a single step. Later work by the same authors [4] identifies cases of role-based delegation that are useful in practice, and in particular extends the model to include cascading delegation.
We introduce the notion of appointment to replace delegation. A user in a role acting as an appointer grants another user, the appointee, a credential which may be used to activate one or more roles. Roles activated on the basis of an appointment are usually associated with some tasks or responsibilities, and encapsulate the privileges granted by the appointer.
3.1 Appointment vs. Delegation
The appointment model offers several advantages over the traditional delegation model. First, privilege propagation is controlled and well-defined. In [4], an attribute totality is introduced to indicate whether the whole set of privileges assigned to a role are to be delegated, or only some subset. The latter case is known as partial delegation. Partial delegation breaks the formal semantics of role-based access control since a partially delegated role is in fact a new role sharing an overloaded name with its delegating role. In the appointment model, only those roles required to complete a job function may be activated by the appointee, and conditions may be specified to restrict the context of activation. The appointment model thus goes some way towards embodying the principle of least privilege [18]. Appointment in itself confers no privileges. Any privileges derive solely from roles activated on the basis of an appointment, and are limited to the current session. Privileges are independent of the appointer role.
Second, cascading delegation becomes irrelevant since appointees are typically in a different role from their appointer. The level of delegation attribute in [4] therefore becomes redundant.
Third, in the appointment model, an appointer can give access to privileges that he or she does not possess, albeit in a controlled fashion. There is no reason why an appointer should be able to satisfy the conditions for activating a role that is to be granted to the appointee.
In general, delegation can be viewed as a special case of appointment, in which a user in some role may appoint another user to activate that same role. This mechanism could be part of the emergency procedure of a service when a role holder is called away or is taken ill.
3.2 Taxonomy
Barka and Sandhu describe a classification framework for role-based delegation models in [4]. While some of these attributes are useful and indeed essential when designing a delegation model, we present a different taxonomy specific to role-based appointment and dynamic role activation. Our taxonomy involves three types of users: appointer, appointee and revoker. The appointer is someone with the credentials for activating the appointing role, the appointee specifies who is to be allocated an appointment, and the revoker specifies who can revoke an appointment.
Appointer. Appointment implies the granting of privileges. It is therefore essential to restrict which users may grant each type of appointment. For each specific type of appointment some role is identified to be the appointer. So long as a user is acting in the appointer capacity, i.e. the user has activated the appointer role, the user can make appointments of the given type. For most organisational policies, it is sufficient to control who may appoint through an appointer role. If it is necessary to restrict those granting appointments to a specific set of users, this can be enforced by side conditions when the appointer role is activated.
Appointee. An appointment is granted so that the appointee can obtain privileges by activating some role. At the least the activation rules will require that the user is known to the system, for example through holding a credential as an authenticated user. In some circumstances that may be the only requirement. An example is that a doctor on duty in Accident and Emergency (A&E) may appoint any member of the hospital staff who is on duty to order a blood test for a patient, rather than being constrained to appoint only a nurse in the A&E team.
If on the other hand the appointment is to be long-lived, the regulations that govern the issue of appointment certificates can specify checks that should be made on all appointees. An example is when a new doctor is registered as an employee of a hospital. The administrator making an appointment (and/or issuing a smartcard) must check the new doctor's academic and professional qualifications. Checks may include a mixture of clerical and computational procedures. Provided that the checks are satisfied the appointer will apply for an appointment certificate to transfer to the appointee. A revocation credential specific to the appointment is issued to and may be retained by the appointer, see below.
It is possible to restrict the use of appointment certificates independently of their issue. In particular, an appointment certificate may require that the user presenting it is already active in one or more roles, see definition 3. Such a condition is appropriate when an appointment should only be activated by staff who are already on duty. The appointment certificate may also be subject to predicates which can include environmental constraints, see definition 11 in section 4.2.
Revoker. In OASIS an appointment can be revoked in three possible ways: by its appointer only; by anyone in the appointer role; or by rule-based system revocation. The first two cases make use of the revocation credential returned at the time of appointment.
In the first case, an appointment can only be revoked by its appointer. This is common in real-life organisations; for example, the lead doctor in a care team might assign tasks to staff on that team by means of appointment. He then becomes responsible for monitoring performance. Revoking the appointment of any member who performs badly is up to the lead doctor himself. The revocation credential is valid only when presented by the user who made the appointment. This is called appointer-only revocation.
As pointed out in [5], dependence on a particular user to revoke may have undesirable consequences; for example, if an appointer is on leave it may be impossible to take immediate action to limit damage. A solution is to allow anyone who can activate the appointer role to make the revocation. This is called appointer-role revocation. This is helpful only if more than one user can activate the appointer role. The principle is to limit the spread of damage by increasing the number of people who can stop a misbehaving party.
The third possibility for revocation is system-managed revocation. In this case, an appointment is revoked automatically if certain conditions are met. There are many circumstances in which the revocation of an appointment can be better handled by the system than by a human. Continuing the A&E scenario, the lead doctor may appoint a nurse to order a blood test and wish that appointment to be revoked as soon as the order has been successfully made.
Three possible types of system-managed revocation are based on time, tasks and sessions. For time-based revocation, an appointment is associated with an expiry time. The appointment certificate is revoked automatically at the expiry time. This is appropriate if the policy is to review long-lived credentials at regular intervals. In other applications, such as an appointment to complete a specific job, it may be difficult to apply in practice. An alternative approach is to bind an appointment to its assigned responsibilities, expressed as tasks. The system monitors the progress of these tasks and once they have been completed successfully, the appointment is automatically revoked. This approach, which requires substantial support from a task model, is suitable in a workflow environment.
The third type of system-managed revocation is based on sessions. This can have two interpretations, the session of the appointer or the session of the appointee. In the former case, an appointment is valid so long as the appointer role is still activated. It will be revoked automatically when the appointer leaves the appointer role. This type of revocation ensures tight monitoring of the appointee. An appointment can also be for the duration of the current session of the appointee. For example, a junior doctor may be appointed to stand in for a consultant who is called away to an emergency. When the junior's shift is over he logs off and the session and appointment end. In practice the membership rules for a role entered by an appointee will often require that some other role remain active, and use of the appointment is therefore limited to the associated session.
When designing practical systems which deploy the OASIS model we have used appointment for long-standing persistent credential allocation. Examples are credentials that depend on academic or professional qualifications, or on holding a particular job in an organisation. Such credentials are used, among others, to activate roles in order to carry out tasks for the duration of a session.
4. FORMAL MODEL
We present two models in this section. The first is based on propositional logic to formalise role activation conditions, see section 4.1. It covers most of the ideas introduced in the previous sections, including appointment. We show in particular how to express the membership rules associated with active roles, and explain how we enforce these rules using event channels.
OASIS roles and appointment certificates include parameters. Role activation rules can match parameters to ensure, for example, that logged-in users can only invoke mutator methods on objects that they own. In section 4.2 we outline the extensions required to handle parameterisation. The extended model is based on first-order predicate calculus, which allows the use of variables in expressions. Our models are not designed to be application-specific. Instead, they are capable of expressing a variety of security policies.
4.1 Basic Model
The model is built on top of six basic sets, described as follows:
  U: set of all users
  S: set of all services
  N: set of all role names
  E: set of all environmental constraints
  O: set of all objects
  A: set of all access modes for objects
In addition to these sets, which are fundamental, two other sets are central to the interpretation of the basic model:
  R: set of all roles
  P: set of all privileges
A user is a human being interacting with a computer system. An element in U can be any convenient representation that uniquely identifies a user in a system. The computer system is composed of a collection of services S, which may be managed independently. A role is a named job function or title within an organisation that is associated with some service; a role is specific to a service, and is defined below. Services confer privileges on their role members, and may also recognise the roles of other services.
Definition 1. A role r ∈ R is a pair (s, n) ∈ S × N, where s ∈ S is a service and n ∈ N is the name of a role defined by s.
The name of a role is unique within the scope of its defining service. When describing our model, we blur the distinction between roles and role names where this will not lead to confusion.
An environmental constraint e ∈ E is a proposition that is evaluated at the time of role activation. The value may depend on factors such as the time of day, the identity of the computer on which the current process is running or a condition such as group membership which requires access to a local database. In this paper we do not discuss the details of environmental constraints. We therefore consider each environmental constraint as an atomic proposition.
The conditions of some role activation rule must be satisfied when a role is activated. We may require in addition that some subset of these conditions, the membership rule, remains true throughout the session. If an environmental constraint e appears as a membership condition then its implementation must be active; when the role is activated each membership condition is evaluated, and in addition a trigger is set to notify the service should the condition become false. We discuss this requirement in more detail below.
A privilege is a right to perform some operation on a particular object. It is defined formally as follows.
Definition 2. A privilege p ∈ P is a pair (o, a) ∈ O × A, where o ∈ O is an object and a ∈ A is an access mode for the object o.
The set of objects and their corresponding access modes are service dependent. For example, in relational database applications, objects may represent rows and their associated access modes include read- or update-attributes. In object-oriented systems, including distributed object systems, objects are represented naturally while access modes are the methods for each object. In general, we treat privileges as an abstract unit if the context permits.
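Definitions 1 and 2 map naturally onto simple data types. The following Python sketch is purely illustrative; the type names and example values are our own and are not part of any OASIS implementation.

from dataclasses import dataclass

# Definition 1: a role is a pair (service, name); the name is unique within its defining service.
@dataclass(frozen=True)
class Role:
    service: str   # s in S
    name: str      # n in N

# Definition 2: a privilege is a pair (object, access mode).
@dataclass(frozen=True)
class Privilege:
    obj: str       # o in O, e.g. a database row or a file
    access: str    # a in A, e.g. "read", "update"

# Example: a file service defines a role "writer" and one of its privileges.
writer = Role(service="file_service", name="writer")
append_log = Privilege(obj="/var/log/audit", access="append")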
The underlying idea of RBAC is to associate privileges with roles, and roles with users. These associations are described as relations in our model. Before describing these relations, we need to define the relationship between roles. As explained earlier, role hierarchy does not have any place in our model. Instead we control the acquisition of privileges through role activation governed by rules. Roles can only be activated during a session, and being active in one role may be a precondition for activating another; an example is a log-in credential that ensures that the user has been authenticated. In order to activate certain roles a user must hold an appointment; the corresponding condition in a role activation rule is an appointment certificate. Its formal definition follows.
Definition 3. An appointment certificate ω ∈ Ω is an instance of an appointment. It may include a set of prerequisite roles, described by the function δ : Ω → 2^R, where Ω is the set of all appointment certificates in the system.
An appointment certificate held by a user is valid only if the user is active in all of its prerequisite roles. This allows an appointer to ensure that an appointment certificate can only be used when the preconditions for activating all of those roles have been met.
A role activation rule specifies the conditions for a user to activate a role. It can be formally defined as follows.
Definition 4. A role activation rule, or activation rule for short, is defined as a sequent (x1, x2, ..., xn ⊢ r), where each xi for 1 ≤ i ≤ n is a variable in the universe X = R ∪ Ω ∪ E, and r ∈ R. We say that each xi for 1 ≤ i ≤ n is an activating condition for the role r.
For an activation rule (x1, x2, ..., xn ⊢ r), a user must satisfy all conditions x1, x2, ..., xn in order to activate the role r. Satisfaction interprets each variable within the current context to give a Boolean value, see definition 6. There may be more than one activation rule associated with a particular role r.
An example of a role activation rule is given below, in which δ(ω1) = {r3}:
  r1, ω1 ⊢ r4
According to this rule a user who is active in role r1 and holds the appointment certificate ω1 can activate the role r4, provided that the conditions for the appointment certificate to be valid are satisfied. In this case the sole condition is that the user be active also in the prerequisite role r3.
form of Boolean logic. Any Boolean expression without
negation over the universe X can be translated into one or
more activation rules by rewriting it into disjunctive normal
form (DNF), and taking each implicant as an activation con-
dition. For example, an expression in the same universe as
the above example is shown below in Boolean logic syntax.
This is translated to DNF,
and this can be written in sequent notation shown below.
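As a concrete illustration of this translation, the following Python sketch (our own illustration, not OASIS code) represents an activation rule as a set of condition names plus a target role, and expands a small negation-free policy expression into one rule per DNF implicant.

from dataclasses import dataclass
from itertools import product

# A policy expression is ("atom", name) | ("and", e1, e2) | ("or", e1, e2),
# where an atom names a role, appointment certificate or environmental constraint.
Expr = tuple

@dataclass(frozen=True)
class ActivationRule:
    conditions: frozenset  # names of the activating conditions x1..xn
    target: str            # the role r

def dnf(e: Expr):
    """Return a list of implicants, each a frozenset of atom names."""
    kind = e[0]
    if kind == "atom":
        return [frozenset([e[1]])]
    if kind == "or":
        return dnf(e[1]) + dnf(e[2])
    if kind == "and":
        return [a | b for a, b in product(dnf(e[1]), dnf(e[2]))]
    raise ValueError(kind)

def to_rules(e: Expr, target: str):
    """One activation rule per DNF implicant."""
    return [ActivationRule(imp, target) for imp in dnf(e)]

# (r1 or r2) and w1 => r4  gives the two sequents  r1,w1 |- r4  and  r2,w1 |- r4.
expr = ("and", ("or", ("atom", "r1"), ("atom", "r2")), ("atom", "w1"))
print(to_rules(expr, "r4"))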
The set of all activation rules specified in a system is denoted by Φ. We summarise the symbols representing additional sets of objects in our model here:
  Ω: set of all appointment certificates
  Φ: set of all role activation rules
  Ψ: set of all membership rules
We now consider a special type of role activation rule, called initial rules.
Definition 5. A role activation rule (x1, x2, ..., xn ⊢ r) in which every variable xi for 1 ≤ i ≤ n is in Ω ∪ E is initial. The role r is said to be an initial role.
Initial rules provide a means to allow users to start a session by acquiring initial roles. A particular case is that of rules with no antecedent conditions, ⊢ r. The activation of such an initial role depends on system policies and typically requires system-dependent mechanisms, for example, password authentication or challenge-response authentication. In general activation of an initial role may require an appointment certificate and be subject to environmental constraints. The set of all initial roles is denoted IR ⊆ R.
We restrict explicit association between users and roles to initial roles. In order to activate any other role a user must satisfy the preconditions of some activation rule, including possession of one or more prerequisite roles. These preconditions can include appointments and environmental constraints as well as role membership.
Note that during a session a user accumulates privileges by activating a succession of roles. Starting from a set of initial roles, which become active following authentication, a number of roles may be entered according to specified rules. An acyclic directed graph structure is therefore established that exhibits the run-time dependency of each role on its preconditions. Superficially the structure is similar to a static role hierarchy, but there are important differences. First, the dependency structure is dynamic; there may be several activation rules for the same role. Second, any privileges acquired by entering a role in this way will usually not be shared with any prerequisite role; it is more likely that the new role is more specific, and has been activated on the basis of appointment, or perhaps following database look-up. Parameter values play an essential part in determining which particular users can acquire the privileges associated with more specific roles.
The activation of a non-initial role requires a user to satisfy each of the conditions of some activation rule for that role. We define what is required for elements of each of the sets R, Ω and E to satisfy a precondition for role activation.
Definition 6. The interpretation function for a role activation rule is a truth assignment with type I : X → {true, false}. An interpretation function I with respect to a user u ∈ U is denoted Iu, and is defined below:
  Iu(x) = true if x ∈ R and u is active in role x;
  Iu(x) = true if x ∈ Ω, u possesses the appointment certificate x and u is active in all the prerequisite roles r ∈ δ(x);
  Iu(x) = true if x ∈ E and the evaluation of x yields true;
  Iu(x) = false otherwise.
Note that the definition of activation rules does not include negation (¬). We briefly consider the effect of allowing negation of each of the three types of role activation condition. First, environmental constraints e ∈ E are atomic by definition. Any discussion of negation must take place in the context of an explicit environmental sublanguage, such as temporal expressions that test the time of day.
Second, appointment certificates ω ∈ Ω record the fact of an appointment. It is only when a role is activated on the basis of an appointment certificate that any privileges are bestowed on the user. Negation should be associated with the roles activated rather than with the appointment certificate itself. In any case, appointment certificates can be anonymous and therefore transferable from user to user; the advantage of such a scheme is that a single revocation credential covers all the users of such an appointment certificate. It is not in general possible to tell whether a user has obtained such an appointment.
Negating a role r ∈ R makes perfectly good sense. Indeed, allowing a negated role among the conditions for role activation has a natural interpretation under Iu, namely that the user u is not currently active in role r. This is a possible implementation of a separation of duties constraint. But if a user must not activate two roles simultaneously, then the activation rules for each role should indicate that this user must not be active in the other. A more appropriate way of specifying the requirement would be to declare an explicit separation of duties constraint. We are actively investigating this issue.
Given the definition of the interpretation function, we can then define role activation formally.
Definition 7. (Role activation) A user u ∈ U can activate a role r ∈ R if and only if there exists an activation rule (x1, x2, ..., xn ⊢ r) ∈ Φ such that Iu ⊨ xi for all 1 ≤ i ≤ n, where Iu is the interpretation function for u at the time when the activation request is made.
Note that the definition of the interpretation function implies that its evaluation with respect to a user changes with the context. When a user requests an activation of a role, the interpretation function is immediately evaluated in the user's context and the decision is made.
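A small sketch may make definitions 6 and 7 more concrete. The session layout, condition categories and helper names below are our own illustrative choices rather than the OASIS API; the sketch simply evaluates an activation request against the rules of definition 7.

def interpret(condition, session):
    """Definition 6: truth assignment I_u for one activating condition."""
    kind, name = condition                    # ("role" | "cert" | "env", identifier)
    if kind == "role":
        return name in session["active_roles"]
    if kind == "cert":
        cert = session["certs"].get(name)
        return cert is not None and all(
            r in session["active_roles"] for r in cert["prereq_roles"])
    if kind == "env":
        return session["env"][name]()         # atomic proposition, evaluated now
    return False

def can_activate(target_role, rules, session):
    """Definition 7: some rule for target_role has all of its conditions satisfied."""
    return any(rule["target"] == target_role and
               all(interpret(c, session) for c in rule["conditions"])
               for rule in rules)

# Example: r1, w1 |- r4 with delta(w1) = {r3}.
rules = [{"target": "r4", "conditions": [("role", "r1"), ("cert", "w1")]}]
session = {"active_roles": {"r1", "r3"},
           "certs": {"w1": {"prereq_roles": {"r3"}}},
           "env": {}}
print(can_activate("r4", rules, session))     # True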
The opposite of role activation is role deactivation. Often continuing activation of a role will be valid only if some subset of the activation conditions continues to hold. These are called the membership conditions. The membership rule associated with a role activation rule specifies those conditions that must remain true in order for a user to remain active in that role.
When a role is activated at a service s ∈ S each of the conditions of the activation rule is verified. For roles associated with s itself this is straightforward. Roles and appointment certificates of other services must be validated by the issuer. In the case that xi is a membership condition, s establishes an event channel on the trigger ¬xi so that the issuer can notify s should the condition become false. OASIS depends on asynchronous notification to support role deactivation, see for example [3].
We have not defined explicit sublanguages for environmental constraints in this paper. However, it is worth considering two examples. First, let us suppose that a particular role may be held only between 1600 and 1800 hours on any day. We can include this requirement as part of the activation rules through a constraint in E; at activation we check the time of day, say 1723, and set a timer exception for 1800 hours. In this instance the evaluation is independent of the user u.
Second, suppose that user u requests a privilege that is restricted to members of group g. For this example we require active database support. At activation, we check the database for the required group membership. At the same time, we set a trigger for the negation of the condition. If the group manager updates the database to exclude user u then the trigger fires and deactivation takes place. This example shows how constraints in E may be user specific. The first prototype implementation of OASIS included a simple associative tuple store with triggers.
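The two examples above can be sketched as follows; the callback-based trigger interface is our own simplification of the event infrastructure described in [3], not its actual API.

import threading

class EventChannel:
    """Notify the role-issuing service when a membership condition becomes false."""
    def __init__(self, on_false):
        self.on_false = on_false      # callback: deactivate the dependent role
    def fire(self):
        self.on_false()

# Example 1: time-of-day constraint. At activation (say 17:23) the condition holds;
# a timer is set so that the channel fires at 18:00 and the role is deactivated.
def set_timer_trigger(seconds_until_1800, channel):
    threading.Timer(seconds_until_1800, channel.fire).start()

# Example 2: group-membership constraint backed by a tiny associative store.
class TupleStore:
    def __init__(self):
        self.groups = {}              # group -> set of users
        self.triggers = []            # (group, user, channel)
    def is_member(self, group, user):
        return user in self.groups.get(group, set())
    def watch_membership(self, group, user, channel):
        self.triggers.append((group, user, channel))
    def remove(self, group, user):
        self.groups.get(group, set()).discard(user)
        for g, u, ch in self.triggers:
            if g == group and u == user:
                ch.fire()             # the membership condition is now false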
Definition 8. The membership rule associated with the activation rule (x1, x2, ..., xn ⊢ r) for the role r ∈ R is the sequent (x1, x2, ..., xm ⊢ r), where 1 ≤ m ≤ n and x1, ..., xm are the membership conditions.
If a user is active in the role r through the activating rule (x1, x2, ..., xn ⊢ r), r shall be immediately deactivated if the associated membership rule (x1, x2, ..., xm ⊢ r) can no longer be satisfied. We denote the set of all membership rules in a system as Ψ. The formal definition of role deactivation is given below.
Definition 9. (Role deactivation) A role r ∈ R held by a user u ∈ U is deactivated if Iu ⊭ xi, where xi is some membership condition in the rule (x1, x2, ..., xm ⊢ r) ∈ Ψ corresponding to the rule in Φ that activated r, and Iu is the interpretation function for u.
Continued satisfaction of the membership rule associated with the rule used for activating a role r is required for the user to remain active in r. Note that the deactivation of a role r may trigger the deactivation of another role r' whose membership depends on the membership of r. This is referred to as cascading deactivation. Its implementation, which relies on triggers and an event infrastructure, is discussed in [3, 11].
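A sketch of cascading deactivation, using the same illustrative session representation as the earlier sketches (again not the OASIS implementation):

def deactivate(role, session, dependents):
    """Deactivate `role` and, recursively, every active role whose membership
    rule names `role` as a condition (cascading deactivation)."""
    if role not in session["active_roles"]:
        return
    session["active_roles"].discard(role)
    for dependent in dependents.get(role, ()):   # roles with `role` in their membership rule
        deactivate(dependent, session, dependents)

# Example: screening_nurse depends on nurse; logging out deactivates nurse,
# which cascades to screening_nurse.
session = {"active_roles": {"nurse", "screening_nurse"}}
dependents = {"nurse": ["screening_nurse"]}
deactivate("nurse", session, dependents)
print(session["active_roles"])                   # set()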
Note that the membership rule associated with an active role r is specific to the rule under which r was activated. Consider as an example the rules obtained by translating the Boolean expression introduced after definition 4, which specifies that a user who is active either in role r1 or in role r2 and who holds an appointment certificate ω1 may activate role r4. The corresponding activation rules are as follows:
  r1, ω1 ⊢ r4
  r2, ω1 ⊢ r4
In each case the membership rule will include the relevant prerequisite role, so as to enforce cascading deactivation at the end of the session. If revocation of the appointment is to take immediate effect, then the appointment certificate must also be a membership condition.
We can now define the association of roles with privileges. This is expressed as a relation as follows:
  RP ⊆ R × P, the role-privilege relation.
RP describes the role-privilege relationship. It is a many-to-many relation specified by the security administrators of an organisation to express security policies. We distinguish two sets of privileges for a role by the terms direct and effective. Our definitions are different from those given in [17], where direct and effective privileges are defined with role hierarchy in mind.
The direct privilege set of a role r ∈ R is the set of privileges assigned to r directly, i.e. {p ∈ P | (r, p) ∈ RP}.
The effective privilege set of a role r is the set of privileges that a user who is active in r must necessarily hold. This includes the effective privileges of all roles specified as membership conditions when r was activated, including the prerequisite roles of any appointment certificates. Each of these roles must still be active, or r would have been subject to cascading deactivation. The effective privilege set is dynamic, and depends on the specific activation history. The following definition ascends the activation tree recursively.
Definition 10. Suppose a user u ∈ U is active in some role r whose current membership rule is (x1, x2, ..., xm ⊢ r). The effective privilege set EP(r) of r is defined as the union of the direct privilege set of r with, for each membership condition xi: EP(xi) if xi is a prerequisite role, and, if xi is an appointment certificate, the union of EP(ρ) for all prerequisite roles ρ ∈ δ(xi).
The effective privilege set for role r defines privileges that a user must continue to hold while remaining active in that role.
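A recursive sketch of definition 10 follows; the data layout and names are our own illustration under the session conventions used in the earlier sketches.

def effective_privileges(role, direct, membership, cert_prereqs):
    """EP(r): the direct privileges of r plus, recursively, the effective privileges
    of every prerequisite role in r's membership rule, including those reached
    through the prerequisite roles of appointment certificates."""
    eff = set(direct.get(role, ()))
    for kind, name in membership.get(role, ()):
        if kind == "role":
            eff |= effective_privileges(name, direct, membership, cert_prereqs)
        elif kind == "cert":
            for prereq in cert_prereqs.get(name, ()):
                eff |= effective_privileges(prereq, direct, membership, cert_prereqs)
    return eff

# r4 was activated under the rule r1, w1 |- r4 with delta(w1) = {r3}.
direct = {"r4": {"p4"}, "r1": {"p1"}, "r3": {"p3"}}
membership = {"r4": [("role", "r1"), ("cert", "w1")]}
cert_prereqs = {"w1": {"r3"}}
print(effective_privileges("r4", direct, membership, cert_prereqs))  # the set {p1, p3, p4}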
In some RBAC models it is possible to compute the maximum privileges that a user may assume. OASIS defines security policies on a service by service basis for multiple management domains in a distributed world. For example, a nationwide system for electronic health records will comprise many interoperating domains such as hospitals, primary care practices, clinics, research institutes etc. Services within a given domain express their policy for role activation and service use. Membership of a role of one service may be required as a credential for entering another. Such dependencies are specified in service level agreements. It is likely that policy will be administered at domain level, and will derive from local and national administrative and legal sources, depending on the application. Service level agreements will also be made across domains. Appointments may be made at several administrative levels. Some appointment certificates will apply to many domains, for example those representing academic and professional qualifications. Others will be dynamic and local, for example temporary substitution for a colleague who is called away while on duty.
Should it be required, it is possible to compute the maximum privileges that a user may obtain based on statically known appointments. This assumes that all constraints will be satisfied at the time roles are activated. In practice, dynamic environmental conditions may prevent some roles from being activated in any specific session. In addition unforeseen appointments might be made dynamically within sessions.
Previously we introduced appointment certificates to represent appointments. Services which support appointment will define their own roles and policies to manage it, and will issue and validate the appointment certificates. At each appointment an appointment certificate is returned to the appointer, who subsequently transfers it to the appointee. The latter can then use the appointment certificate during role activation, either at the issuing service or at some other. The role activation rules may specify a number of prerequisite roles in addition to one or more appointment certificates. In this way we can, for example, implement the two-signature, countersign approval system commonly found in business by requiring two appointment certificates. In addition the appointer, when applying for the appointment certificate, may specify a set of roles in which the appointee must be active in order for it to be valid, see definition 3.
OASIS supports rapid and selective revocation, which is managed by invalidating the appointment certificate issued on an appointment. Whenever a credential is invalidated any roles that depend on it are deactivated and their associated credentials invalidated, see [3] for implementation details. This can lead to cascading deactivation of a tree of roles in which a user is active. Cascading deactivation also takes place when a session ends after a user logs off; logged-in-user is likely to be an initial role. This basic model is sufficient to support system-managed revocation. We do not attempt to formalise the details.
4.2 Extended Model
There is an increasing interest in the research community in just-in-time, active security where policies must adapt to their environment. Prominent examples include the workflow authorisation model [1] and task-based authorisation controls (TBAC) [23]. Essentially, a major drawback of traditional RBAC models that limits their usefulness is their inability to take into account fine-grained information from the execution context. In the introduction we discussed ad-hoc extensions that have been suggested to meet specific needs. As an alternative we propose a generic framework which can be tailored to each application domain with minimal effort.
The problem of implementing an access control system for the Electronic Health Records (EHRs) of the United Kingdom National Health Service (NHS) is one of the case studies that has informed the design of OASIS [2]. In this application it is vital that the user requesting access can be identified, since under the UK Patients' Charter the patient has the right to exclude named individuals. Traditional functional roles are not adequate for this purpose, since such an exclusion is specific both to the patient and to the potential reader. Individual identity must therefore be established by a credential which is presented on access; in the OASIS model the obvious choice is a role membership certificate, asserting the NHS identifier of the user in some way. Rules for role activation refer to individual roles, so it is not possible to name a separate role for each potential user; such an implementation would be neither manageable nor scalable. Instead we set up generic initial roles such as logged-in-user and smart-card to correspond to the modes of authentication supported by the system. In order to identify the individual we extend the role membership certificate (RMC) by a parameter, userid and NHS-id respectively. Such parameters are among the fields protected when the RMC is generated, see for example [3].
In the basic model described in the last section access control decisions are made on the basis of propositions that are evaluated in the current context. These propositions relate to roles and appointments, and the policy governing the acquisition of privileges is expressed in terms of them. Role activation rules can take account of the execution environment by evaluating propositions relating to such factors as the current time of day or an entry in an administration database. We extend this model to allow parameterisation of roles, appointment certificates, privileges and environmental constraints. In order to accommodate these extensions we have enhanced the specification to define role activation rules and membership rules in terms of predicates rather than propositions.
The details of these extensions are intuitive. RMCs contain a number of protected fields, which, together with a nonce key identifying the session, form input to the one-way function that generates the signature. In the formal model a parameterised role consists of a role r together with a k-tuple that identifies the parameter values. In expressing the conditions for role entry a proposition identifying r as a prerequisite role is replaced by a k-ary predicate. Within a rule the arguments of each predicate may be either variables or constants. Variables may also occur in the parameterised role r' that is being activated. Evaluation of an activation rule proceeds by unification over the variables that are specified. Values in the RMCs that establish membership of prerequisite roles set the corresponding variables appearing in activation conditions. Activation succeeds only if all conditions can be met, and subject to a consistent assignment of values to variables. In this case the parameters of the new role r' are set from the variable values established during unification.
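The following sketch shows one way this matching of parameters by unification might look, using simple equality matching over string parameters as in the first prototype described below; the data layout and the convention that single-letter names are variables are our own assumptions.

def unify(pattern, values, bindings):
    """Match a condition's parameter pattern against credential values.
    Single-letter names are variables; anything else must match exactly."""
    bindings = dict(bindings)
    for pat, val in zip(pattern, values):
        if len(pat) == 1:                      # variable: bind, or check consistency
            if bindings.setdefault(pat, val) != val:
                return None
        elif pat != val:                       # constant: must be equal
            return None
    return bindings

def activate(rule, credentials):
    """Try every condition of a parameterised activation rule; on success return
    the target role with its parameters instantiated from the bindings."""
    bindings = {}
    for cond_name, pattern in rule["conditions"]:
        values = credentials.get(cond_name)
        if values is None:
            return None
        bindings = unify(pattern, values, bindings)
        if bindings is None:
            return None
    target, target_params = rule["target"]
    return target, tuple(bindings[p] for p in target_params)

# treating_doctor(x, y) requires doctor(x) and the certificate w3(x, y).
rule = {"conditions": [("doctor", ("x",)), ("w3", ("x", "y"))],
        "target": ("treating_doctor", ("x", "y"))}
creds = {"doctor": ("dr_jones",), "w3": ("dr_jones", "patient42")}
print(activate(rule, creds))   # ('treating_doctor', ('dr_jones', 'patient42'))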
Environmental constraints are in detail application specific, but a common and useful form is the ability to check information in a database during role activation. Provided the database itself can be identified then such a constraint can be viewed as a predicate assertion. For a relational database the natural interpretation is of the occurrence of a tuple in a relation named by the constraint. In a deductive database the predicate assertion specifies a query directly, and unification over the variables involved has a direct counterpart during query evaluation.
Parameterised appointment certificates are similar to parameterised roles. If an appointment certificate with k parameters appears as a precondition for role activation then the activation rule includes a k-ary predicate. Variable values are matched against other occurrences of the same named variable within the rule.
A parameterised appointment certificate is valid only if its holder is active in all of its prerequisite roles. In addition, parameters of credentials (RMCs) associated with these roles may be required to match parameters of the appointment certificate. An appointment certificate may also be subject to one or more environmental constraints. The relationship between a parameterised appointment certificate and its prerequisite roles is specified in a validity rule, defined as follows.
Definition 11. A validity rule for a parameterised appointment certificate ω is defined as a sequent (x1, x2, ..., xn ⊢ ω), in which each xi is either a parameterised role or an environmental constraint.
Note that only parameterised roles or environmental constraints are allowed in the premises part of a validity rule. During role activation any appointment certificates are validated before the activation rule is evaluated. Unification of variables in the validity rule may constrain undefined parameters of the appointment certificate; the values set form part of the context during role activation.
The roles held by a user determine that user's privileges. A privilege can typically be considered as a specific access right at a service. Roles are parameterised, and parameter values can be propagated to privileges at request time. For example, the privileges corresponding to a role writer(userid) activated at a file service may be restricted to files that are owned by the user named userid. We do not go into details here.
The first prototype implementation of OASIS used simple parameter matching at role activation time, essentially setting parameters when they were first encountered and denying role activation if there was a later conflict. Parameters were strings, and the only value comparison supported was equality. Database look-up was supported by an associative tuple store.
If an environmental constraint appears in a membership rule then the service which evaluates it must have an active implementation. One reason for using such a naive database was that it was easy to set up event channels for it; if a query during activation represented a membership condition then an event channel was established, and the role activation service could be notified if the condition became false subsequently. We have just started to experiment with the POSTGRES object-relational database management system (DBMS), which allows agents external to the DBMS to set triggers in order to receive notification of database update. Initial experiments are encouraging, but a lot of work remains to be done. If this approach is successful then it would be natural to regard parameter k-tuples as instances of classes, and to enforce type checking of individual parameters. That would be an obvious improvement.
5. EXAMPLES
Suppose that privacy legislation has been passed whereby someone who has paid for medical insurance may take certain genetic tests anonymously. The insurance company's membership database contains the members' data; the genetic clinic has no access to this and the insurance company may not know the results of the genetic test, or even that it has taken place. The clinic, for accounting purposes, must ensure that the test is authorised under the scheme. A member of the scheme is issued a computer-readable membership card containing an appointment certificate and the expiry date. In the simplest scheme the membership card is authenticated at the clinic, the member enters the unparameterised role paid up patient and the test is carried out:
  ω ⊢ paid up patient
In the presentation above paid up patient is an unparameterised initial role with no explicit preconditions. Checking the expiry date on the membership card is part of the system authentication process. A more likely scenario is that the activation rule for paid up patient comprises the appointment certificate and an environmental constraint requiring that the date of the (start of the) treatment is before the expiry date of the insurance scheme membership. The appointment certificate is validated at the issuing service (a trusted third party) before role activation can proceed. In this case the appointment certificate becomes the membership rule for the role paid up patient; if the appointment certificate is found to be fraudulent, treatment is terminated.
It is easy to express the constraint on the expiry date using parameters. Suppose that the expiry date is a protected field of the appointment certificate, and the environmental constraint e1 checks that a temporal argument lies in the future. The following activation rule matches the parameter t, reading it from the appointment certificate and supplying it as argument to e1:
  ω(t), e1(t) ⊢ paid up patient
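Expressed with the illustrative conventions used in the earlier sketches (our own encoding, with e1 modelled as a date comparison and the certificate carried as a small dictionary), this parameterised rule might look as follows.

from datetime import date

# Environmental constraint e1(t): the supplied expiry date lies in the future.
def e1(t: date) -> bool:
    return t > date.today()

# Activation rule  w(t), e1(t) |- paid_up_patient : the parameter t is read from
# the protected expiry-date field of the appointment certificate on the card.
def can_activate_paid_up_patient(certificate) -> bool:
    if certificate.get("kind") != "insurance_membership":
        return False
    return e1(certificate["expiry"])

card = {"kind": "insurance_membership", "expiry": date(2002, 12, 31)}
print(can_activate_paid_up_patient(card))   # True only before the expiry date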
An initial role logged in user(uid, machine) might be defined so that the user-id and the machine at which the login has taken place can be carried forward and checked as environmental conditions on subsequent role activation. Again, at the engineering level, the parameters are protected fields in the role membership certificates. If login can be at any computer in the administrative domain we define the role logged in user(uid) with a single parameter. In the healthcare domain everyone has a unique NHS identifier which could be used as a parameter.
We now work through an example scenario from an A&E department of a hospital. Let us suppose that some of the roles involved are nurse(x), screening nurse(x), doctor(x) and treating doctor(x,y), where x is the identity of the role holder in each case and y is the patient being treated. In outline, as hospital staff come on duty they log in and activate roles such as nurse, screening nurse and doctor. When someone goes off-duty they log out and the roles they hold are deactivated. As an example of how dynamic appointment might be used we suppose that a screening nurse assigns each patient that arrives to a particular doctor who becomes the treating doctor for that patient.
There is an electronic health record (EHR) service in the hospital domain which interacts with a National EHR service, external to the domain, in order to assemble any required, and authorised, records of treatments the patient has had. Let us assume that the general policy is that screening nurses may read patients' contact and emergency data only, and that treating doctors may read the EHR of any patient y they are treating while they are active in the role treating doctor(x,y). The EHR service recognizes the A&E service roles described above (a service-level agreement) and implements this policy.
For the key roles in this scenario we now define the activation rule and the membership rule. We also demonstrate long-term and dynamic appointment.
When a doctor or nurse is employed at the hospital their academic and professional credentials are checked and they are issued with an appointment certificate parametrised with their identity information. This might be on a computer-readable card or be stored in the administration filespace. For the role doctor(x) the activation rule comprises x's appointment certificate. The membership rule is identical to the activation rule since the appointment certificate must remain valid for x to remain active in the role. The role nurse(x) has a similar structure.
For the role screening nurse(x) the activation rule comprises the prerequisite role nurse(x) and has no appointment or environmental conditions. The membership rule is identical to the activation rule.
For the role treating doctor(x,y) the activation rule comprises the prerequisite role doctor(x) and there is no environmental condition. An appointment certificate is required. This is a certificate, ω3(x,y), allocated by the screening nurse to doctor x authorising her to treat patient y:
  doctor(x), ω3(x,y) ⊢ treating doctor(x,y)
The role doctor(x) could also be a prerequisite role of the appointment certificate, but it is redundant in this example. Note that if the screening nurse goes off duty and logs out, she deactivates her role nurse(x), causing the dependent role screening nurse(x) to be deactivated. The appointment certificate ω3(x,y) is not invalidated at the end of her session.
The membership rule of the role treating doctor(x,y) is once again identical to the activation rule. First, x must remain active in the prerequisite role doctor(x), in order to ensure that the role treating doctor(x,y) is deactivated at the end of the session. There are two reasons for making ω3(x,y) a membership condition as well. The screening nurse may wish to reassign the patient to another doctor, and in any case the appointment should be revoked when patient y is discharged. Note that x may still be on duty when this happens.
A given doctor will be assigned to a number of different patients while on duty and will activate the role treating doctor(x,y) for each of them. The role treating doctor(x,y) of the A&E service gives doctor x the privilege to access patient y's health record at the EHR service. Other hospital services such as the Pharmacy service and the X-ray service will also be OASIS-aware and require the A&E role membership certificate treating doctor(x,y) on invocation. Such services will record the parameters x and y for accounting and audit.
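Pulling the scenario together, the A&E policy could be written down declaratively in the illustrative notation of the earlier sketches; this is our own encoding rather than OASIS syntax, and the certificate name employment(x) is a placeholder for the long-lived employment appointment described above.

# A&E policy: activating conditions for a target role, with the membership
# conditions that must stay true (here identical to the activating conditions).
AE_POLICY = [
    {"conditions": [("cert", "employment(x)")],                    # long-lived appointment
     "target": "doctor(x)",
     "membership": [("cert", "employment(x)")]},
    {"conditions": [("role", "nurse(x)")],
     "target": "screening_nurse(x)",
     "membership": [("role", "nurse(x)")]},
    {"conditions": [("role", "doctor(x)"), ("cert", "w3(x,y)")],   # dynamic appointment
     "target": "treating_doctor(x,y)",
     "membership": [("role", "doctor(x)"), ("cert", "w3(x,y)")]},
]

# Revoking w3(x,y), e.g. when patient y is discharged, or deactivating doctor(x)
# at the end of x's session, makes a membership condition false and so removes
# treating_doctor(x,y) by cascading deactivation.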
6. CONCLUSION
OASIS is an access control system for open, interworking services in a distributed environment modelled as domains of services. Services may be developed independently but service level agreements allow their secure interoperation. OASIS is closely integrated with an active, event-based middleware infrastructure. In this way we continuously monitor applications within their environment, ensuring that security policy is satisfied at all times. We therefore address the needs of distributed applications that require active security. Any formalisation must take account of the relationship between OASIS and the underlying active platform.
In this paper we have formalised the OASIS model without reference to domains. Formal specification is crucial in order to manage access control policy for future, large scale, widely distributed, multi-domain systems. A formal model allows policy components established across a number of domains to be checked for consistency. This is necessary, since otherwise policy cannot be deployed by domains acting autonomously; for example, a government edict might require changes of policy across heterogeneous healthcare domains. Automation is essential to minimise human error, and it can only be used where a formal model exists.
OASIS is role based: services name their client roles and enforce policy for role activation and service invocation, expressed in terms of their own and other services' roles. A signed role membership certificate is returned to the user on successful role activation and this may be used as a credential for activating other roles, according to policy.
We do not use role delegation. Instead, we have defined appointment, which we believe to be both more intuitive and more applicable in practice. Appointments may be long-lived, such as academic and professional qualification, or transient, such as standing in for a colleague who is called away while on duty. On appointment, the appointee is issued with an appointment certificate which may be used, together with any other credentials required by policy, to activate one or more roles.
In addition to role membership certificates and appointment certificates, role activation rules may include environmental constraints. Examples are user-independent constraints such as time of day and conditions on user-dependent parameters. For example, it may be necessary to perform database lookup at a service to ascertain that the user is a member of some group. Alternatively, a simple parameter check may ascertain that the user is a specified exception to a general category who may activate the role.
The membership rule for a role indicates those security
predicates for activating the role that must remain true for
the role to continue to be held. Event channels are set up
between services to ensure that all conditions of the membership
rule remain true. Should any condition become false
this triggers an event notication to the role-issuing service
and the role is deactivated for that user. By this means we
maintain an active security environment.
OASIS is session based. Starting from initial roles, such
as \authenticated, logged-in user", a user may activate a
number of roles by submitting the credentials required to
satisfy an activation rule. The activated roles therefore form
trees dependent on initial roles. Should any membership
condition for any role become false the dependent subtree is
collapsed. If a single initial role is deactivated (the user logs
out), all the active roles collapse and the session terminates.
Our application domains require parameterisation of roles.
For example, in the healthcare domain a patient might specify
"all doctors except my uncle Fred Smith may read my
health record". For a filing system it is necessary to indicate
individual owners of files as well as groups of users.
Future work involves the detailed modelling of role pa-
rameters. In practice, distributed systems comprise many
domains; for example the healthcare domain comprises hos-
pitals, primary care practices, research institutes etc. We
will generalise our naming structure to include domains ex-
plicitly. We are working on the management of policy for
role activation and service use. Policy may derive from local
and national sources and will change over time. Policy
stores will be managed using OASIS in our active environ-
ment. The formalisation of OASIS will provide a firm basis
for requirements such as checking the consistency of policies.
7. ACKNOWLEDGEMENT
We acknowledge the support of the Engineering and Physical
Sciences Research Council (EPSRC) under the grant OASIS
Access Control: Implementation and Evaluation. Members
of the Opera research group in the Computer Laboratory
made helpful comments on earlier drafts of this paper.
The ideas that lie behind definition 11 were introduced by
John Hine, see [12]. We are grateful to Jon Tidswell and
the anonymous referees for constructive criticism which has
much improved this paper.
8. REFERENCES
--R
An authorization model for work ows.
Translating role-based access control policy within context
Generic support for distributed applications.
Framework for role-based delegation models
A flexible model for the speci
Generalized role-based access control for securing future applications
Role templates for content-based access control
Towards a more complete model of role.
OASIS: Access Control in an Open
An architecture for distributed OASIS services.
Mutual exclusion of roles as a means of implementing separation of duty in role-based access control systems
Access rights administration in role-based security systems
The role graph model and conflict of interest.
The protection of information in computer systems.
Role activation hierarchies.
Separation of duty in role-based environments
--TR
Role-Based Access Control Models
A flexible model supporting the specification and enforcement of role-based authorization in workflow management systems
Team-based access control (TMAC)
Mutual exclusion of roles as a means of implementing separation of duty in role-based access control systems
Role templates for content-based access control
Role activation hierarchies
Towards a more complete model of role
Control principles and role hierarchies
Team-and-role-based organizational context and access control for cooperative hypermedia environments
The role graph model and conflict of interest
A role-based access control model and reference implementation within a corporate intranet
The uses of role hierarchies in access control
An architecture for distributed OASIS services
Generic Support for Distributed Applications
Access Rights Administration in Role-Based Security Systems
Task-Based Authorization Controls (TBAC)
An Authorization Model for Workflows
Translating Role-Based Access Control Policy within Context
Framework for role-based delegation models
Separation of Duty in Role-based Environments
--CTR
Jacques Wainer , Akhil Kumar , Paulo Barthelmess, DW-RBAC: A formal security model of delegation and revocation in workflow systems, Information Systems, v.32 n.3, p.365-384, May, 2007
Mohammad A. Al-Kahtani , Ravi Sandhu, Induced role hierarchies with attribute-based RBAC, Proceedings of the eighth ACM symposium on Access control models and technologies, June 02-03, 2003, Como, Italy
Jason Crampton, Administrative scope and role hierarchy operations, Proceedings of the seventh ACM symposium on Access control models and technologies, June 03-04, 2002, Monterey, California, USA
Yang , Raimund K. Ege , Huiqun Yu, Mediation security specification and enforcement for heterogeneous databases, Proceedings of the 2005 ACM symposium on Applied computing, March 13-17, 2005, Santa Fe, New Mexico
Gang Yin , Huai-min Wang , Dian-xi Shi , Yan Jia , Meng Teng, A rule-based framework for role-based constrained delegation, Proceedings of the 3rd international conference on Information security, November 14-16, 2004, Shanghai, China
Jean Bacon , Ken Moody , Walt Yao, Access control and trust in the use of widely distributed services, Software: Practice & Experience, v.33 n.4, p.375-394, 10 April
Hua Wang , Jiuyong Li , Ron Addie , Stijn Dekeyser , Richard Watson, A framework for role-based group delegation in distributed environments, Proceedings of the 29th Australasian Computer Science Conference, p.321-328, January 16-19, 2006, Hobart, Australia
Jason Crampton , George Loizou, Administrative scope: A foundation for role-based administrative models, ACM Transactions on Information and System Security (TISSEC), v.6 n.2, p.201-231, May
Longhua Zhang , Gail-Joon Ahn , Bei-Tseng Chu, A rule-based framework for role-based delegation and revocation, ACM Transactions on Information and System Security (TISSEC), v.6 n.3, p.404-441, August
Luc Moreau , Christian Queinnec, Resource aware programming, ACM Transactions on Programming Languages and Systems (TOPLAS), v.27 n.3, p.441-476, May 2005
Tolone , Gail-Joon Ahn , Tanusree Pai , Seng-Phil Hong, Access control in collaborative systems, ACM Computing Surveys (CSUR), v.37 n.1, p.29-41, March 2005
Jean Bacon , Ken Moody , Walt Yao, A model of OASIS role-based access control and its support for active security, ACM Transactions on Information and System Security (TISSEC), v.5 n.4, p.492-540, November 2002
Gustaf Neumann , Mark Strembeck, An approach to engineer and enforce context constraints in an RBAC environment, Proceedings of the eighth ACM symposium on Access control models and technologies, June 02-03, 2003, Como, Italy
Mark Strembeck , Gustaf Neumann, An integrated approach to engineer and enforce context constraints in RBAC environments, ACM Transactions on Information and System Security (TISSEC), v.7 n.3, p.392-427, August 2004 | RBAC;service level agreements;certificates;OASIS;policy;role based access control |
373399 | Hierarchical GUI Test Case Generation Using Automated Planning. | AbstractThe widespread use of GUIs for interacting with software is leading to the construction of more and more complex GUIs. With the growing complexity come challenges in testing the correctness of a GUI and its underlying software. We present a new technique to automatically generate test cases for GUIs that exploits planning, a well-developed and used technique in artificial intelligence. Given a set of operators, an initial state, and a goal state, a planner produces a sequence of the operators that will transform the initial state to the goal state. Our test case generation technique enables efficient application of planning by first creating a hierarchical model of a GUI based on its structure. The GUI model consists of hierarchical planning operators representing the possible events in the GUI. The test designer defines the preconditions and effects of the hierarchical operators, which are input into a plan-generation system. The test designer also creates scenarios that represent typical initial and goal states for a GUI user. The planner then generates plans representing sequences of GUI interactions that a user might employ to reach the goal state from the initial state. We implemented our test case generation system, called Planning Assisted Tester for grapHical user interface Systems (PATHS) and experimentally evaluated its practicality and effectiveness. We describe a prototype implementation of PATHS and report on the results of controlled experiments to generate test cases for Microsoft's WordPad. | Introduction
G RAPHICAL User Interfaces (GUIs) have become an
important and accepted way of interacting with to-
day's software. Although they make software easy to use
from a user's perspective, they complicate the software development
process [1], [2]. In particular, testing GUIs is
more complex than testing conventional software, for not
only does the underlying software have to be tested but the
GUI itself must be exercised and tested to check whether it
conforms to the GUI's specifications. Even when tools are
used to generate GUIs automatically [3], [4], [5], these tools
themselves may contain errors that may manifest themselves
in the generated GUI leading to software failures.
Hence, testing of GUIs continues to remain an important
aspect of software testing.
Testing the correctness of a GUI is difficult for a number
of reasons. First of all, the space of possible interactions
with a GUI is enormous, in that each sequence of GUI commands
can result in a different state, and a GUI command
may need to be evaluated in all of these states. The large
number of possible states results in a large number of input
The authors are with the Department of Computer Science, University
of Pittsburgh, Pittsburgh, PA 15260, USA. The second author
is also in the Intelligent Systems Program. E-mail: {atif, pollack, soffa}@cs.pitt.edu.
permutations [6] requiring extensive testing, e.g., Microsoft
released almost 400,000 beta copies of Windows95 targeted
at finding program failures [7]. Another problem relates to
determining the coverage of a set of test cases. For conventional
software, coverage is measured using the amount
and type of underlying code exercised. These measures do
not work well for GUI testing, because what matters is not
only how much of the code is tested, but in how many different
possible states of the software each piece of code is
tested. An important aspect of GUI testing is verification
of its state at each step of test case execution. An incorrect
GUI state can lead to an unexpected screen, making further
execution of the test case useless since events in the
test case may not match the corresponding GUI components
on the screen. Thus, the execution of the test case
must be terminated as soon as an error is detected. Also,
if verification checks are not inserted at each step, it may
become difficult to identify the actual cause of the error.
Finally, regression testing presents special challenges for
GUIs, because the input-output mapping does not remain
constant across successive versions of the software [1]. Regression
testing is especially important for GUIs since GUI
development typically uses a rapid prototyping model [8],
[9], [10], [11].
An important component of testing is the generation of
test cases. Manual creation of test cases and their mainte-
nance, evaluation and conformance to coverage criteria are
very time consuming. Thus some automation is necessary
when testing GUIs. In this paper, we present a new technique
to automatically generate test cases for GUI systems.
Our approach exploits planning techniques developed and
used extensively in artificial intelligence (AI). The key idea
is that the test designer is likely to have a good idea of the
possible goals of a GUI user, and it is simpler and more
effective to specify these goals than to specify sequences
of events that the user might employ to achieve them.
Our test case generation system, called Planning Assisted
Tester for grapHical user interface Systems (PATHS) takes
these goals as input and generates such sequences of events
automatically. These sequences of events or "plans" become
test cases for the GUI. PATHS first performs an automated
analysis of the hierarchical structure of the GUI
to create hierarchical operators that are then used during
plan generation. The test designer describes the preconditions
and effects of these planning operators, which are
subsequently input to the planner. Hierarchical operators
enable the use of an efficient form of planning. Specifically,
to generate test cases, a set of initial and goal states is input
into the planning system; it then performs a restricted
form of hierarchical plan generation to produce multiple
hierarchical plans. We have implemented PATHS and we
demonstrate its effectiveness and efficiency through a set
of experiments.
The important contributions of the method presented in
this paper include the following.
ffl We make innovative use of a well known and used technique
in AI, which has been shown to be capable of
solving problems with large state spaces [12]. Combining
the unique properties of GUIs and planning, we are
able to demonstrate the practicality of automatically
generating test cases using planning.
ffl Our technique exploits structural features present in
GUIs to reduce the model size, complexity, and to improve
the efficiency of test case generation.
ffl Exploiting the structure of the GUI and using hierarchical
planning makes regression testing easier.
Changes made to one part of the GUI do not affect
the entire test suite. Most of our generated test cases
are updated by making local changes.
ffl Platform specific details are incorporated at the very
end of the test case generation process, increasing the
portability of the test suite. Portability, which is important
for GUI testing [13], assures that test cases
written for GUI systems on one platform also work on
other platforms.
ffl Our technique allows reuse of operator definitions that
commonly appear across GUIs. These definitions can
be maintained in a library and reused to generate test
cases for subsequent GUIs.
The next section gives a brief overview of PATHS using
an example GUI. Section III briefly reviews the fundamentals
of AI plan generation. Section IV describes how planning
is applied to the GUI test case generation problem. In
Section V we describe a prototype system for PATHS and
give timing results for generating test cases. We discuss
related work for automated test case generation for GUIs
in Section VI and conclude in Section VII.
II. Overview
In this section we present an overview of PATHS through
an example. The goal is to provide the reader with a high-level
overview of the operation of PATHS and highlight the
role of the test designer in the overall test case generation
process. Details about the algorithms used by PATHS are
given in Section IV.
GUIs typically consist of components such as labels, but-
tons, menus, and pop-up lists. The GUI user interacts with
these components, which in turn generate events. For ex-
ample, pushing a button Preferences generates an event
(called the Preferences event) that opens a window. In
addition to these visible components on the screen, the user
also generates events by using devices such as a mouse or
a keyboard. For the purpose of our model, GUIs have two
types of windows: GUI windows and object windows. GUI
windows contain GUI components, whereas object windows
do not contain any GUI components. Object windows are
used to display and manipulate objects, e.g., the window
used to display text in MS WordPad.
Fig. 1. The Example GUI.
Figure 1 presents a small part of MS WordPad's GUI.
This GUI can be used for loading text from files, manipulating
the text (by cutting and pasting) and then saving
the text in another file. At the highest level, the GUI has
a pull-down menu with two options (File and Edit) that
can generate events to make other components available.
For example the File event opens a menu with New, Open,
Save and SaveAs options. The Edit event opens a menu
with Cut, Copy, and Paste options, which are used to cut,
copy and paste objects respectively from the main screen.
The Open and SaveAs events open windows with several
more components. (Only the Open window is shown; the
SaveAs window is similar.) These components are used to
traverse the directory hierarchy and select a file. Up moves
up one level in the directory hierarchy and Select is used
to either enter subdirectories or select files. The window is
closed by selecting either Open or Cancel.
The central feature of PATHS is a plan generation sys-
tem. Automated plan generation has been widely investigated
and used within the field of artificial intelligence.
The input to the planner is an initial state, a goal state,
and a set of operators that are applied to a set of objects.
Operators, which model events, are usually described in
terms of preconditions and effects: conditions that must
be true for the action to be performed and conditions that
will be true after the action is performed. A solution to
a given planning problem is a sequence of instantiated operators
that is guaranteed to result in the goal state when
executed in the initial state. 1 In our example GUI, the
operators relate to GUI events.
Consider Figure 2(a), which shows a collection of files
stored in a directory hierarchy. The contents of the files
1 We have described only the simplest case of AI planning. The
literature includes many techniques for extensions, such as planning
under uncertainty [14], but we do not consider these techniques in
this paper.
Fig. 2. A Task for the Planning System; (a) the Initial State, and (b) the Goal State.
are shown in boxes, and the directory structure is shown
as an Exploring window. Assume that the initial state
contains a description of the directory structure, the location
of the files, and the contents of each file. Using these
files and WordPad's GUI, we can define a goal of creating
the new document shown in Figure 2(b) and then storing it
in file new.doc in the /root/public directory. Figure 2(b)
shows this goal state that contains, in addition to the
old files, a new file stored in /root/public directory. Note
that new.doc can be obtained in numerous ways, e.g., by
loading file Document.doc, deleting the extra text and typing
in the word final, or by loading file doc2.doc and inserting
text, or by creating the document from scratch by
typing in the text.
Our test case generation process is partitioned into two
phases, the setup phase and plan-generation phase. In the
first step of the setup phase, PATHS creates a hierarchical
model of the GUI and returns a list of operators from the
model to the test designer. By using knowledge of the GUI,
the test designer then defines the preconditions and effects
of the operators in a simple language provided by the planning
system. During the second or plan-generation phase,
the test designer describes scenarios (tasks) by defining a
set of initial and goal states for test case generation. Fi-
nally, PATHS generates a test suite for the scenarios. The
test designer can iterate through the plan-generation phase
any number of times, defining more scenarios and generat-
TABLE I
Roles of the Test Designer and PATHS During Test Case Generation.

Phase             Step   Test Designer                                    PATHS
Setup             1      -                                                Derive Hierarchical GUI Operators
                  2      Define Preconditions and Effects of Operators    -
Plan Generation   3      Identify a Task T (Initial and Goal States)      -
                  4      -                                                Generate Test Cases for T
Iterate 3 and 4 for Multiple Scenarios
ing more test cases. Table I summarizes the tasks assigned
to the test designer and those automatically performed by
PATHS.
For our example GUI, the simplest approach in step 1
would be for PATHS to identify one operator for each GUI
event (e.g., Open, File, Cut, Paste). (As a naming conven-
tion, we disambiguate with meaningful prefixes whenever
names are the same, such as Up.) The test designer would
then define the preconditions and effects for all the events
shown in Figure 3(a). Although conceptually simple, this
approach is inefficient for generating test cases for GUIs
as it results in a large number of operators. Many of these
events (e.g., File and Edit) merely make other events pos-
sible, but do not interact with the underlying software.
An alternative modeling scheme, and the one used in this
work, models the domain hierarchically with high-level operators
that decompose into sequences of lower level ones.
Although high-level operators could in principle be developed
manually by the test designer, PATHS avoids this inconvenience
by automatically performing the abstraction.
More specifically, PATHS begins the modeling process by
partitioning the GUI events into several classes. The details
of this partitioning scheme are discussed later in Section
IV. The event classes are then used by PATHS to create
two types of planning operators - system-interaction
operators and abstract operators.
The system-interaction operators are derived from those
GUI events that generate interactions with the underlying
software. For example, PATHS defines a system-
interaction operator EDIT CUT that cuts text from the
example GUI's window. Examples of other system-
interaction operators are EDIT PASTE and FILE SAVE.
The second set of operators generated by PATHS is a
set of abstract operators. These will be discussed in more
detail in Section IV, but the basic idea is that an abstract
operator represents a sequence of GUI events that invoke a
window that monopolizes the GUI interaction, restricting
the focus of the user to the specific range of events in the
window. Abstract operators encapsulate the events of the
restricted-focus window by treating the interaction within
that window as a separate planning problem. Abstract
operators need to be decomposed into lower level operators
by an explicit call to the planner. For our example GUI,
(a) GUI Events: New, Open, Save, SaveAs, Cut, Copy, Paste, Open.Up, Open.Select, Open.Cancel, Open.Open, ...
(b) Planning Operators = {File_New, File_Open, File_Save, File_SaveAs, Edit_Cut, Edit_Copy, Edit_Paste}.
Fig. 3. The Example GUI: (a) Original GUI Events, and (b) Planning Operators derived by PATHS.
TABLE II
Operator-event Mappings for the Example GUI.

Operator Name   Operator Type      GUI Events
FILE_NEW        Sys. Interaction   <File, New>
FILE_OPEN       Abstract           <File, Open>
FILE_SAVE       Sys. Interaction   <File, Save>
FILE_SAVEAS     Abstract           <File, SaveAs>
EDIT_CUT        Sys. Interaction   <Edit, Cut>
EDIT_COPY       Sys. Interaction   <Edit, Copy>
EDIT_PASTE      Sys. Interaction   <Edit, Paste>
abstract operators include File Open and File SaveAs.
The result of the first step of the setup phase is that the
system-interaction and abstract operators are determined
and returned as planning operators to the test designer.
The planning operators returned for our example are shown in Figure 3(b).
In order to keep a correspondence between the original
GUI events and these high-level operators, PATHS also
stores mappings, called operator-event mappings, as shown
in Table II. The operator name (column 1) lists all the
operators for the example GUI. Operator type (column 2)
classifies each operator as either abstract or system-
interaction. Associated with each operator is the corresponding
sequence of GUI events (column 3).
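For illustration only, the operator-event mappings of Table II could be held in a dictionary keyed by operator name; the structure below is a sketch and not PATHS's actual internal format:

    # Sketch: operator-event mappings of Table II. Each entry records the
    # operator type and the GUI event sequence that the operator folds in.
    OPERATOR_EVENT_MAPPINGS = {
        "FILE_NEW":    ("system-interaction", ["File", "New"]),
        "FILE_OPEN":   ("abstract",           ["File", "Open"]),
        "FILE_SAVE":   ("system-interaction", ["File", "Save"]),
        "FILE_SAVEAS": ("abstract",           ["File", "SaveAs"]),
        "EDIT_CUT":    ("system-interaction", ["Edit", "Cut"]),
        "EDIT_COPY":   ("system-interaction", ["Edit", "Copy"]),
        "EDIT_PASTE":  ("system-interaction", ["Edit", "Paste"]),
    }

    def events_for(operator_name):
        """Return the GUI event sequence stored for an operator."""
        _kind, events = OPERATOR_EVENT_MAPPINGS[operator_name]
        return list(events)

    # events_for("EDIT_CUT") -> ["Edit", "Cut"]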
The test designer then specifies the preconditions and
effects for each planning operator. An example of a planning
operator, EDIT CUT, is shown in Figure 4. EDIT CUT
is a system-interaction operator. The operator definition
contains two parts: preconditions and effects. All the conditions
in the preconditions must hold in the GUI before
the operator can be applied, e.g., for the user to generate
the Cut event, at least one object on the screen should
be selected (highlighted). The effects of the Cut event are
that the selected objects are moved to the clipboard and
removed from the screen. The language used to define each
operator is provided by the planner as an interface to the
planning system. Defining the preconditions and effects is
not difficult as this knowledge is already built into the GUI
structure. For example, the GUI structure requires that
Cut be made active (visible) only after an object is selected.
This is precisely the precondition defined for our example
operator (EDIT CUT) in Figure 4. Definitions of operators
Fig. 4. An Example of a GUI Planning Operator.
representing events that commonly appear across GUIs,
such as Cut, can be maintained in a library and reused for
subsequent similar applications.
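As a concrete but hypothetical rendering of the EDIT CUT definition just described, the precondition/effect pair might be written down as follows; the predicate names are illustrative and the ground instance shown is for a single selected object:

    # Sketch of a ground instance of the EDIT_CUT operator: it is applicable
    # only if an object is selected on the screen, and its effect is to move
    # that object to the clipboard and remove it from the screen.
    EDIT_CUT = {
        "preconditions": {"onScreen(Text1)", "selected(Text1)"},
        "add":           {"inClipboard(Text1)"},
        "delete":        {"onScreen(Text1)", "selected(Text1)"},
    }

    def applicable(op, state):
        """All preconditions must hold in the current state."""
        return op["preconditions"] <= state

    def apply(op, state):
        """Successor state after applying the operator."""
        return (state - op["delete"]) | op["add"]

    state = {"onScreen(Text1)", "selected(Text1)"}
    assert applicable(EDIT_CUT, state)
    assert apply(EDIT_CUT, state) == {"inClipboard(Text1)"}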
The test designer begins the generation of particular test
cases by inputting the defined operators into PATHS and
then identifying a task, such as the one shown in Figure 2
that is defined in terms of an initial state and a goal state.
PATHS automatically generates a set of test cases that
achieve the goal. An example of a plan is shown in Figure
5. (Note that TypeInText() is an operator representing
a keyboard event.) This plan is a high-level plan that
must be translated into primitive GUI events. The translation
process makes use of the operator-event mappings
stored during the modeling process. One such translation
is shown in Figure 6. This figure shows the abstract operators
contained in the high-level plan are decomposed by (1)
inserting the expansion from the operator-event mappings,
and (2) making an additional call to the planner. Since the
maximum time is spent in generating the high-level plan,
it is desirable to generate a family of test cases from this
single plan. This goal is achieved by generating alternative
sub-plans at lower levels. These sub-plans are generated
much faster than generating the high-level plan and can be
substituted into the high-level plan to obtain alternative
test cases. One such alternative low-level test case generated
for the same task is shown in Figure 7. Note the use of
nested invocations to the planner during abstract-operator
decomposition.
The hierarchical mechanism aids regression testing, since
changes made to one component do not necessarily invalidate
all test cases. The higher level plans can still be re-
Fig. 5. A Plan Consisting of Abstract Operators and a GUI Event.
Fig. 6. Expanding the Higher Level Plan.
tained and local changes can be made to sub-plans specific
to the changed component of the GUI. Also, the steps in the
test cases are platform independent. An additional level of
translation is required to generate platform-dependent test
cases. By using a high-level model of the GUI, we have the
advantage of obtaining platform-independent test cases.
III. Plan Generation
We now provide details on plan generation. Given an
initial state, a goal state, a set of operators, and a set of
objects, a planner returns a set of steps (instantiated op-
erators) to achieve the goal. Many different algorithms for
plan generation have been proposed and developed. Weld
presents an introduction to least-commitment planning [15]
and a survey of the recent advances in planning technology
[16].
Formally, a planning problem is a 4-tuple consisting of a set of operators, a finite set of objects D, an initial state I, and a goal state G.
operator definition may contain variables as parameters;
typically an operator does not correspond to a single executable
action but rather to a family of actions: one for
each different instantiation of the variables. The solution
Fig. 7. An Alternative Expansion Leads to a New Test Case.
to a planning problem is a plan: a tuple <S, O, L, B>,
where S is a set of plan steps (instances of operators, typically
defined with sets of preconditions and effects), O is
a set of ordering constraints on the elements of S, L is a
set of causal links representing the causal structure of the
plan, and B is a set of binding constraints on the variables
of the operator instances in S. Each ordering constraint
is of the form S_i < S_j, meaning that step S_i must occur sometime before step S_j (but not
necessarily immediately before). Typically, the ordering
constraints induce only a partial ordering on the steps in
S. Causal links are triples
are elements of S and c is both an effect of S i and a pre-condition
for S j . 2 Note that corresponding to this causal
link is an ordering constraint, i.e.,
. The reason for
tracking a causal link is to ensure that no step
"threatens" a required link, i.e., no step S k
that results in
:c can temporally intervene between steps S i
As mentioned above, most AI planners produce partially-ordered
plans, in which only some steps are ordered with
respect to one another. A total-order plan can be derived
from a partial-order plan by adding ordering constraints.
Each total-order plan obtained in such a way is called a
linearization of the partial-order plan. A partial-order plan
is a solution to a planning problem if and only if every
consistent linearization of the partial-order plan meets the
solution conditions.
Figure 8(a) shows the partial-order plan obtained to realize the goal shown in Figure 2 using our example GUI.
2 More generally, c represents a proposition that is the unification of an effect of S_i and a precondition of S_j.
Plan steps: FILE_OPEN("Samples", "report.doc"), DeleteText("needs to be modified"), TypeInText("is the final text"), FILE_SAVEAS("public", "new.doc").
Fig. 8. (a) A Partial-order Plan, (b) the Ordering Constraints in the Plan, and (c) the Two Linearizations.
In the figure, the nodes (labeled S_i, S_j, S_k, and S_l) represent the plan steps (instantiated operators) and the edges represent the causal links.
parameters of the operators. Figure 8(b) lists the ordering
constraints, all directly induced by the causal links
in this example. In general, plans may include additional
ordering constraints. The ordering constraints specify
that the DeleteText() and TypeInText() actions can
be performed in either order, but they must precede the
FILE SAVEAS() action and must be performed after the
FILE OPEN() action. We obtain two legal orders, both of
which are shown in Figure 8(c), and thus two high-level
test cases are produced that may be decomposed to yield
a number of low-level test cases.
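To make the notion of linearization concrete, the sketch below enumerates by brute force every total order consistent with a set of ordering constraints; the steps and constraints mirror the plan of Figure 8, but the code is an illustration rather than code taken from PATHS:

    from itertools import permutations

    def linearizations(steps, before):
        # Yield every total order of `steps` consistent with the partial order,
        # where `before` is a set of (x, y) pairs meaning x must precede y.
        for order in permutations(steps):
            position = {s: i for i, s in enumerate(order)}
            if all(position[x] < position[y] for (x, y) in before):
                yield list(order)

    steps = ["FILE_OPEN", "DeleteText", "TypeInText", "FILE_SAVEAS"]
    before = {("FILE_OPEN", "DeleteText"), ("FILE_OPEN", "TypeInText"),
              ("DeleteText", "FILE_SAVEAS"), ("TypeInText", "FILE_SAVEAS")}
    # Prints the two legal orders of Figure 8(c): DeleteText and TypeInText
    # may swap, but both follow FILE_OPEN and precede FILE_SAVEAS.
    for plan in linearizations(steps, before):
        print(plan)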
In this work, we employ recently developed planning
technology that increases the efficiency of plan generation.
Specifically, we generate single-level plans using the Interference
Progression Planner (IPP) [17], a system that
extends the ideas of the Graphplan system [18] for plan
generation. Graphplan introduced the idea of performing
plan generation by converting the representation of a planning
problem into a propositional encoding. Plans are then
found by means of a search through a graph. The planners
in the Graphplan family, including IPP, have shown
increases in planning speeds of several orders of magnitude
on a wide range of problems compared to earlier planning
systems that rely on a first-order logic representation
and a graph search requiring unification of unbound variables
[18]. IPP uses a standard representation of actions in
which preconditions and effects can be parameterized: subsequent
processing performs the conversion to the propositional
form. 3 As is common in planning, IPP produces
partial-order plans.
IPP forms plans at a single level of abstraction. Techniques
have been developed in AI planning to generate
plans at multiple levels of abstraction called Hierarchical
Task Network (HTN) planning [19]. In HTN planning, domain
actions are modeled at different levels of abstraction,
and for each operator at level n, one specifies one or more
"methods" at level n \Gamma 1. A method is a single-level partial
plan, and we say that an action "decomposes" into
its methods. HTN planning focuses on resolving conflicts
among alternative methods of decomposition at each level.
The GUI test case generation problem is unusual in that, in
our experience at least, it can be modeled with hierarchical
plans that do not require conflict resolution during decom-
position. We are thus able to make use of a restricted form
of hierarchical planning, which assumes that all decompositions
are compatible. Hierarchical planning is valuable
for GUI test case generation as GUIs typically have a large
number of components and events and the use of a hierarchy
allows us to conceptually decompose the GUI into
different levels of abstraction, resulting in greater planning
efficiency. As a result of this conceptual shift, plans can
be maintained at different abstraction levels. When subsequent
modifications are made to the GUI, top-level plans
usually do not need to be regenerated from scratch. In-
stead, only sub-plans at a lower level of abstraction are af-
fected. These sub-plans can be regenerated and re-inserted
in the larger plans, aiding regression testing.
IV. Planning GUI Test Cases
Having described AI planning techniques in general, we
now present details of how we use planning in PATHS to
generate test cases for GUIs.
A. Developing a Representation of the GUI and its Operation
In developing a planning system for testing GUIs, the
first step is to construct an operator set for the planning
problem. As discussed in Section II, the simplest approach
of defining one operator for each GUI event is inefficient,
resulting in a large number of operators. We exploit certain
structural properties of GUIs to construct operators
at different levels of abstraction. The operator derivation
process begins by partitioning the GUI events into
several classes using certain structural properties of GUIs.
Note that the classification is based only on the structural
properties of GUIs and can thus be done automatically by
PATHS using a simple depth-first traversal algorithm. The
GUI is traversed by opening menus and windows by clicking
on buttons; for convenience the names of each operator
are taken off the label of each button/menu-item it repre-
sents. Note that several commercially available tools also
3 In fact, IPP generalizes Graphplan precisely by increasing the expressive
power of its representation language, allowing for conditional
and universally quantified effects.
perform such a traversal of the GUI, e.g., WinRunner from
Mercury Interactive Corporation.
The classification of GUI events that we employ is as
follows:
Menu-open events open menus, i.e., they expand the set
of GUI events available to the user. By definition,
menu-open events do not interact with the underlying
software. The most common example of menu-open
events are generated by buttons that open pull-down
menus, e.g., File and Edit.
Unrestricted-focus events open GUI windows that do
not restrict the user's focus; they merely expand the
set of GUI events available to the user. For exam-
ple, in the MS PowerPoint software, the Basic Shapes
are displayed in an unrestricted-focus window. For
the purpose of test case generation, these events can
be treated in exactly the same manner as menu-open
events; both are used to expand the set of GUI events
available to the user.
Restricted-focus events open GUI windows that have the
special property that once invoked, they monopolize
the GUI interaction, restricting the focus of the user
to a specific range of events within the window, until
the window is explicitly terminated. Preference setting
is an example of restricted-focus events in many GUI
systems; the user clicks on Edit and Preferences, a
window opens and the user then spends time modifying
the preferences, and finally explicitly terminates
the interaction by either clicking OK or Cancel.
System-interaction events interact with the underlying
software to perform some action; common examples
include cutting and pasting text, and opening object
windows.
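A rough sketch of how this classification might be computed for each event discovered during the depth-first traversal is given below; the boolean attributes consulted are invented for the illustration and are not PATHS's internal representation:

    from dataclasses import dataclass

    # Sketch: assign one of the four event classes to a GUI event, based on
    # whether it opens a menu, opens a window, and whether that window
    # monopolizes (restricts) the user's focus.
    @dataclass
    class GuiEvent:
        name: str
        opens_menu: bool = False
        opens_window: bool = False
        restricts_focus: bool = False

    def classify(event):
        if event.opens_menu:
            return "menu-open"
        if event.opens_window and event.restricts_focus:
            return "restricted-focus"
        if event.opens_window:
            return "unrestricted-focus"
        return "system-interaction"

    # classify(GuiEvent("File", opens_menu=True))                         -> "menu-open"
    # classify(GuiEvent("Open", opens_window=True, restricts_focus=True)) -> "restricted-focus"
    # classify(GuiEvent("Cut"))                                           -> "system-interaction"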
The above classification of events is then used to create
two classes of planning operators.
System-interaction operators represent all sequences of
zero or more menu-open and unrestricted-focus events
followed by a system-interaction event. Consider a
small part of the example GUI: one pull-down menu
with one option (Edit) which can be opened to give
more options, i.e., Cut and Paste. The events available
to the user are Edit, Cut and Paste. Edit is
a menu-open event and Cut and Paste are system-
interaction events. Using this information the following
two system-interaction operators are obtained:
EDIT_CUT = <Edit, Cut>
EDIT_PASTE = <Edit, Paste>
The above is an example of an operator-event mapping
that relates system-interaction operators to GUI
events. The operator-event mappings fold the menu-
open and unrestricted focus events into the system-
interaction operator, thereby reducing the total number
of operators made available to the planner, resulting
in greater planning efficiency. These mappings
are used to replace the system-interaction operators by
their corresponding GUI events when generating the
final test case.
In the above example, the events Edit, Cut and
Paste are hidden from the planner and only the
system-interaction operators, namely EDIT CUT and
EDIT PASTE, are made available. This abstraction prevents
generation of test cases in which Edit is used
in isolation, i.e., the model forces the use of Edit either
with Cut or with Paste, thereby restricting attention
to meaningful interactions with the underlying
software. 4
Abstract operators are created from the restricted-focus
events. Abstract operators encapsulate the events of
the underlying restricted-focus window by creating a
new planning problem, the solution to which represents
the events a user might generate during the focused
interaction. The abstract operators implicitly
divide the GUI into several layers of abstraction, so
that test cases can be generated for each GUI level,
thereby resulting in greater efficiency. The abstract
operator is a complex structure since it contains all
the necessary components of a planning problem, including
the initial and goal states, the set of objects,
and the set of operators. The prefix of the abstract operator
is the sequence of menu-open and unrestricted-
focus events that lead to the restricted-focus event.
This sequence of events is stored in the operator-event
mappings. The suffix of the abstract operator represents
the restricted-focus user interaction. The abstract
operator is decomposed in two steps: (1) using
the operator-event mappings to obtain the abstract
operator prefix, and (2) explicitly calling the planner
to obtain the abstract operator suffix. Both the prefix
and suffix are then substituted back into the high-level
plan. For example, in Figure 6, the abstract operator
FILE OPEN is decomposed by substituting its prefix using an operator-event mapping and its suffix (ChDir, Select, Open) by invoking the planner.
Figure 9(a) shows a small part of the example GUI:
a File menu with two options, namely Open and
SaveAs. When either of these events is generated, it
results in another GUI window with more components
being made available. The components in both windows
are quite similar. For Open the user can exit
after pressing Open or Cancel; for SaveAs the user
can exit after pressing Save or Cancel. The complete
set of events available is Open, SaveAs, Open.Select,
Open.Up, Open.Cancel, Open.Open, SaveAs.Select,
SaveAs.Up, SaveAs.Cancel and SaveAs.Save. Once
the user selects Open, the focus is restricted to
Open.Select, Open.Up, Open.Cancel and Open.Open.
Similarly, when the user selects SaveAs, the focus
is restricted to SaveAs.Select, SaveAs.Up,
SaveAs.Cancel and SaveAs.Save. These properties
lead to the following two abstract operators:
FILE_OPEN = <File, Open>
FILE_SAVEAS = <File, SaveAs>
In addition to the above two operator-event map-
4 Test cases in which Edit stands in isolation can be created by (1)
testing Edit separately, or (2) inserting Edit at random places in the
generated test cases.
Abstract Operator Template for File_Open: Initial State: determined at run time; Goal State: determined at run time; Operator List: ...
Abstract Operator Template for File_SaveAs: Initial State: determined at run time; Goal State: determined at run time; Operator List: ...
Fig. 9. (a) Open and SaveAs Windows as Abstract Operators, (b) Abstract Operator Templates, and (c) Decomposition of the Abstract Operator Using Operator-event Mappings and Making a Separate Call to the Planner to Yield a Sub-plan.
pings, an abstract operator definition template
is created for each operator as shown in Figure 9(b).
This template contains all the essential components of
the planning problem, i.e., the set of operators that
are available during the restricted-focused user inter-
action, and initial and goal states, both determined
dynamically at the point before the call. Since the
higher-level planning problem has already been solved
before invoking the planner for the abstract opera-
tor, the preconditions and effects of the high-level abstract
operator are used to determine the initial and
goal states of the sub-plan. At the highest level of
abstraction, the planner will use the high-level oper-
ators, i.e., File Open and File SaveAs to construct
plans. For example, in Figure 9(c), the high-level plan
contains File Open. Decomposing File Open requires
(1) retrieving the corresponding GUI events from the
stored operator-event mappings (File, Open), and (2)
invoking the planner, which returns the sub-plan (Up,
Select, Open). File Open is then replaced by the sequence (File, Open, Up, Select, Open).
The abstract and system-interaction operators are given
as input to the planner. The operator set returned for the
running example is shown in Figure 3(b).
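The two-step decomposition just described can be summarised by the sketch below; planner stands for the external plan-generation call and the surrounding data structures are assumptions made for the illustration:

    from collections import namedtuple

    Step = namedtuple("Step", "name preconditions effects")

    def decompose(step, mappings, templates, planner):
        """Replace one high-level plan step by a sequence of GUI events."""
        kind, events = mappings[step.name]
        if kind == "system-interaction":
            return list(events)                   # e.g. EDIT_CUT -> [Edit, Cut]
        # Abstract operator: prefix comes from the mapping, suffix from a fresh
        # call to the planner; the sub-problem's initial and goal states are
        # taken from the operator's preconditions and effects at run time.
        template = templates[step.name]
        suffix = planner(template["operators"], step.preconditions, step.effects)
        return list(events) + suffix              # prefix + suffix

    # Toy demonstration with a stub planner that returns a fixed sub-plan:
    mappings = {"FILE_OPEN": ("abstract", ["File", "Open"]),
                "EDIT_CUT": ("system-interaction", ["Edit", "Cut"])}
    templates = {"FILE_OPEN": {"operators": ["Up", "Select", "Open", "Cancel"]}}
    stub_planner = lambda ops, init, goal: ["Up", "Select", "Open"]
    print(decompose(Step("FILE_OPEN", set(), set()), mappings, templates, stub_planner))
    # -> ['File', 'Open', 'Up', 'Select', 'Open']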
Initial State:
contains(root private)
contains(private Figures)
contains(private Latex)
contains(Latex
contains(private Courses)
contains(private Thesis)
contains(root public)
contains(public html)
contains(html gif)
containsfile(gif doc2.doc)
containsfile(private Document.doc)
containsfile(Samples report.doc)
currentFont(Times Normal
in(doc2.doc This)
in(doc2.doc is)
in(doc2.doc the)
in(doc2.doc text.)
after(This is)
after(is the)
after(the text.)
font(This Times Normal 12pt)
font(is Times Normal 12pt)
font(the Times Normal 12pt)
font(text. Times Normal
Similar descriptions for
Document.doc and report.doc
Goal State:
containsfile(public new.doc)
in(new.doc This)
in(new.doc is)
in(new.doc the)
in(new.doc final)
in(new.doc text.)
after(This is)
after(is the)
after(the final)
after(final text.)
font(This Times Normal 12pt)
font(is Times Normal 12pt)
font(the Times Normal 12pt)
font(final Times Normal
font(text. Times Normal
Fig. 10. Initial State and the changes needed to reach the Goal State.
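Viewed programmatically, a state description such as the one in Figure 10 is simply a set of ground predicates; the fragment below sketches one possible encoding and a goal-satisfaction check, using only a few of the atoms shown above (helper names are illustrative only):

    # Sketch: a state is a set of ground atoms written as strings; a goal is
    # reached once every goal atom is present in the state.
    initial_state = {
        "contains(root private)", "contains(root public)",
        "containsfile(Samples report.doc)",
        "in(doc2.doc This)", "after(This is)",
        "font(This Times Normal 12pt)",
    }

    goal = {
        "containsfile(public new.doc)",
        "in(new.doc final)", "after(the final)", "after(final text.)",
    }

    def satisfied(goal_atoms, state):
        return goal_atoms <= state

    # satisfied(goal, initial_state) is False here, so a plan is needed to
    # transform the initial state into one that contains the goal atoms.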
B. Modeling the Initial and Goal State and Generating Test
Cases
The test designer begins the generation of particular test
cases by identifying a task, consisting of initial and goal
states (see Figure 2). The test designer then codes the
initial and goal states or uses a tool that automatically
produces the code. 5 The code for the initial state and
the changes needed to achieve the goal states is shown in
Figure 10. Once the task has been specified, the system
automatically generates a set of test cases that achieve the
goal. The algorithm to generate the test cases is discussed
next.
C. Algorithm for Generating Test Cases
The test case generation algorithm is shown in Figure 11.
The operators are assumed to be available before making
a call to this algorithm, i.e., steps 1-3 of the test case generation
process shown in Table I must be completed before
making a call to this algorithm. The parameters include all the components of a planning problem and
a threshold (T) that controls the looping in the algorithm.
The loop (lines 8-12) contains the explicit call to the planner (Φ). The returned plan p is recorded with the operator
set, so that the planner can return an alternative plan in
the next iteration (line 11). At the end of this loop, plan-
List contains all the partial-order plans. Each partial-order
plan is then linearized (lines 13-16), leading to multiple linear
plans. Initially the test cases are high-level linear plans
5 A tool would have to be developed that enables the user to visually
describe the GUI's initial and goal states. The tool would then
translate the visual representation to code, e.g., the code shown in
Figure 10.
Algorithm :: GenTestCases(Operator Set; ...)
Fig. 11. The Complete Algorithm for Generating Test Cases
(line 17). The decomposition process leads to lower level
test cases. The high-level operators in the plan need to be
expanded/decomposed to get lower level test cases. If the
step is a system-interaction operator, then the operator-
event mappings are used to expand it (lines 20-22). How-
ever, if the step is an abstract operator, then it is decomposed
to a lower level test case by (1) obtaining the GUI
events from the operator-event mappings, (2) calling the
planner to obtain the sub-plan, and (3) substituting both
these results into the higher level plan. Extraction functions
are used to access the planning problem's components
at lines 24-27. The lowest level test cases, consisting of
GUI events, are returned as a result of the algorithm (line
33).
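The algorithm of Figure 11 can be paraphrased by the following sketch; it is a reconstruction for illustration only, its structure does not correspond to the line numbers cited above, and planner, linearize and decompose stand for the components discussed earlier:

    # Sketch of GenTestCases: repeatedly call the planner (recording each plan
    # so that the next call yields an alternative), linearize every
    # partial-order plan, and decompose the resulting high-level test cases
    # into sequences of GUI events.
    def gen_test_cases(operators, objects, initial, goal, threshold,
                       planner, linearize, decompose):
        plan_list = []                            # all partial-order plans found
        for _ in range(threshold):
            p = planner(operators, objects, initial, goal, exclude=plan_list)
            if p is None:
                break
            plan_list.append(p)                   # record p to force an alternative

        high_level = []                           # high-level linear plans
        for p in plan_list:
            high_level.extend(linearize(p))       # every consistent total order

        test_cases = []
        for plan in high_level:
            events = []
            for step in plan:
                # System-interaction steps expand via the operator-event
                # mappings; abstract steps additionally trigger a nested call
                # to the planner (see the earlier decomposition sketch).
                events.extend(decompose(step))
            test_cases.append(events)             # lowest-level test case
        return test_cases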
As noted earlier, one of the main advantages of using
the planner in this application is to automatically generate
alternative plans for the same goal. Generating alternative
plans is important to model the various ways in which different
users might interact with the GUI, even if they are
all trying to achieve the same goal. AI planning systems
typically generate only a single plan; the assumption made
there is that the heuristic search control rules will ensure
that the first plan found is a high quality plan. In PATHS,
we generate alternative plans in the following two ways.
1. Generating multiple linearizations of the partial-order
plans. Recall from the earlier discussion that the ordering
constraints O only induce a partial ordering,
so the solutions are all linearizations of S (plan
steps) consistent with O. We are free to choose any linear
order consistent with the partial order. All possible
linear orders of a partial-order plan result in a family
of test cases. Multiple linearizations for a partial-order
plan were shown earlier in Figure 8.
2. Repeating the planning process, forcing the planner
to generate a different test case at each iteration.
V. Experiments
A prototype of PATHS was developed and several sets
of experiments were conducted to ensure that PATHS is
practical and useful. These experiments were executed on a
Pentium based computer with 200MB RAM running Linux
OS. A summary of the results of some of these experiments
is given in the following sections.
A. Generating Test Cases for Multiple Tasks
PATHS was used to generate test cases for Microsoft's
WordPad. Examples of the generated high-level test cases
are shown in Table III. The total number of GUI events in
WordPad was determined to be approximately 325. After
analysis, PATHS reduced this set to 32 system-interaction
and abstract operators, i.e., roughly a ratio of 10 : 1.
This reduction in the number of operators is impressive
and helps speed up the plan generation process, as will be
shown in Section V-B.
Defining preconditions and effects for the 32 operators
was fairly straightforward. The average operator definition
required 5 preconditions and effects, with the most
complex operator requiring 10 preconditions and effects.
Since mouse and keyboard events are part of the GUI, three
additional operators for mouse and keyboard events were
defined.
Table IV presents the CPU time taken to generate test
cases for MS WordPad. Each row in the table represents
a different planning task. The first column shows the task
number; the second column shows the time needed to generate
the highest-level plan; the third column shows the
average time spent to decompose all sub-plans; the fourth
column shows the total time needed to generate the test
case (i.e., the sum of the two previous columns). These
results show that the maximum time is spent in generating
the high-level plan (column 2). This high-level plan is then
used to generate a family of test cases by substituting alternative
low-level sub-plans. These sub-plans are generated
relatively faster (average shown in column 3), amortizing
the cost of plan generation over multiple test cases. Plan 9,
which took the longest time to generate, was linearized to
obtain 2 high-level plans, each of which was decomposed
to give several low-level test cases, the shortest of which
consisted of 25 GUI events.
The plans shown in Table III are at a high level of ab-
straction. Many changes made to the GUI have no effect
on these plans, making regression testing easier and less
expensive. For example, none of the plans in Table III
TABLE III
Some WordPad Plans Generated for the Task of Figure 2.
Plan No.   Plan Step   Plan Action
contain any low-level physical details of the GUI. Changes
made to fonts, colors, etc. do not affect the test suite in
any way. Changes that modify the functionality of the
GUI can also be readily incorporated. For example, if the
WordPad GUI is modified to introduce an additional file
opening feature, then most of the high-level plans remain
the same. Changes are only needed to sub-plans that are
generated by the abstract operator FILE-OPEN. Hence the
cost of initial plans is amortized over a large number of test
cases.
We also implemented an automated test execution sys-
tem, so that all the test cases could be automatically executed
without human intervention. Automatically executing
the test cases involved generating the physical
mouse/keyboard events. Since our test cases are represented
at a high level of abstraction, we translate the high-level
actions into physical events. The actual screen coordinates
of the buttons, menus, etc. were derived from the
layout information.
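The final translation amounts to a lookup from GUI events to screen coordinates followed by synthetic mouse and keyboard input; the sketch below is illustrative, and its coordinate table and click callback are assumptions rather than the prototype's actual interfaces:

    # Sketch: replay a low-level test case by mapping each GUI event to the
    # screen coordinates of its component (derived from layout information)
    # and generating one physical click per event.
    LAYOUT = {                      # hypothetical component -> (x, y) table
        "File": (20, 10),
        "Open": (20, 35),
        "Up": (90, 180),
        "Select": (140, 210),
    }

    def replay(test_case, click):
        for event in test_case:
            x, y = LAYOUT[event]
            click(x, y)             # platform-specific mouse event goes here

    # replay(["File", "Open", "Up", "Select", "Open"],
    #        click=lambda x, y: print("click at", x, y))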
B. Hierarchical vs. Single-level Test Case Generation
In our second experiment, we compared the single-level
test case generation with the hierarchical test case generation
technique. Recall that in the single-level test case
TABLE IV
Time Taken to Generate Test Cases for WordPad.

Task No.   Plan Time (sec)   Sub-Plan Time (sec)   Total Time (sec)
3          3.17              0.00                  3.17
9          40.47             0.04                  40.51
generation technique, planning is done at a single level of
abstraction. The operators have a one-to-one correspondence
with the GUI events. On the other hand, in the
hierarchical test case generation approach, the hierarchical
modeling of the operators is used.
Results of this experiment are summarized in Table V.
We have shown CPU times for 6 different tasks. Column 1
shows the task number, Column 2 shows the length of the
test case generated by using the single-level approach and
Column 3 shows its corresponding CPU time. The same
task was then used to generate another test case but this
time using the hierarchical operators. Column 4 shows the
length of the high-level plans and Column 5 shows the time
needed to generate this high-level plan and then decompose
it. Plan 1 obtained from the hierarchical algorithm expands
to give a plan of length 18, i.e., exactly the same plan obtained
by running its corresponding single-level algorithm.
The timing results show the hierarchical approach is more
efficient than the single-level approach. This results from
the smaller number of operators used in the planning problem.
This experiment demonstrates the importance of the hierarchical
modeling process. The key to efficient test case
generation is to have a small number of planning operators
at each level of planning. As GUIs become more com-
plex, our modeling algorithm is able to obtain an increasing
number of levels of abstraction. We performed some exploratory
analysis for the much larger GUI of Microsoft
Word. There, the automatic modeling process reduced the
number of operators by a ratio of 20 : 1.
VI. Related Work
Current tools to aid the test designer in the testing process
include record/playback tools [20], [21]. These tools
record the user events and GUI screens during an interactive
session. The recorded sessions are later played back
whenever it is necessary to recreate the same GUI states.
Several attempts have been made to automate test case
generation for GUIs. One popular technique is programming
the test case generator [22]. For comprehensive test-
ing, programming requires that the test designer code all
TABLE V
Comparing the single level with the hierarchical approach.
'-' indicates that no plan was found in 1 hour.

           Single level                  Hierarchical
Task No.   Plan Length   Time (sec.)     Plan Length   Time (sec.)
4          26            3312.72         6             7.18
possible decision points in the GUI. However, this approach
is time consuming and is susceptible to missing important
GUI decisions.
A number of research efforts have addressed the automation
of test case generation for GUIs. Several finite-state
machine models have been proposed to generate test
cases [23], [24], [25], [26]. In this approach, the software's
behavior is modeled as a FSM where each input triggers
a transition in the FSM. A path in the FSM represents a
test case, and the FSM's states are used to verify the soft-
ware's state during test case execution. This approach has
been used extensively for test generation for testing hardware
circuits [27]. An advantage of this approach is that
once the FSM is built, the test case generation process is
automatic. It is relatively easy to model a GUI with an
FSM; each user action leads to a new state and each transition
models a user action. However, a major limitation of
this approach, which is an especially important limitation
for GUI testing, is that FSM models have scaling problems
[28]. To aid in the scalability of the technique, variations
such as variable finite state machine (VFSM) models have
been proposed by Shehady et al. [28].
Test cases have also been generated to mimic novice users
[7]. The approach relies on an expert to manually generate
the initial sequence of GUI events, and then uses genetic
algorithm techniques to modify and extend the sequence.
The assumption is that experts take a more direct path
when solving a problem using GUIs whereas novice users
often take longer paths. Although useful for generating
multiple test cases, the technique relies on an expert to
generate the initial sequence. The final test suite depends
largely on the paths taken by the expert user.
AI planning has been found to be useful for generating
focused test cases [29] for a robot tape library command
language. The main idea is that test cases for command
language systems are similar to plans. Given an initial
state of the tape library and a desired goal state, the planner
can generate a "plan" which can be executed on the
software as a test case. Note that although this technique
has similarities to our approach, several differences exist: a
major difference is that in [29], each command in the language
is modeled with a distinct operator. This approach
works well for systems with a relatively small command
language. However, because GUIs typically have a large
number of possible user actions, a hierarchical approach is
needed.
VII. Conclusions
In this paper, we presented a new technique for testing
GUI software, and we showed its potential value for the
test designer's tool-box. Our technique employs GUI tasks,
consisting of initial and goal states, to generate test cases.
The key idea of using tasks to guide test case generation is
that the test designer is likely to have a good idea of the
possible goals of a GUI user, and it is simpler and more
effective to specify these goals than to specify sequences
of events that achieve them. Our technique is unique in
that we use an automatic planning system to generate test
cases from GUI events and their interactions. We use the
description of the GUI to automatically generate alternative
sequences of events from pairs of initial and goal states
by iteratively invoking the planner.
We have demonstrated that our technique is both practical
and useful by generating test cases for the popular MS
WordPad software's GUI. Our experiments showed that the
planning approach was successful in generating test cases
for different scenarios. We developed a technique for decomposing
the GUI at multiple levels of abstraction. Our
technique not only makes test case generation more intu-
itive, but also helps scale our test generation algorithms for
larger GUIs. We experimentally showed that the hierarchical
modeling approach was necessary to efficiently generate
test cases.
Hierarchical test case generation also aids in performing
regression testing. Changes made to one part of the GUI
do not invalidate all the test cases. Changes can be made
to lower level test cases, retaining most of the high-level
test cases.
Representing the test cases at a high level of abstraction
makes it possible to fine-tune the test cases to each implementation
platform, making the test suite more portable.
A mapping is used to translate our low-level test cases to sequences
of physical actions. Such platform-dependent mappings
can be maintained in libraries to customize our generated
test cases to low-level, platform-specific test cases.
We note some current limitations of our approach. First,
the test case generator is largely driven by the choice of
tasks given to the planner. Currently in PATHS, these
tasks are chosen manually by the test designer. A poorly
chosen set of tasks will yield a test suite that does not provide
adequate coverage. We are currently exploring the
development of coverage measures for GUIs. Second, we
depend heavily on the hierarchical structure of the GUI for
efficient test case generation. If PATHS is given a poorly
structured GUI then no abstract operators will be obtained
and the planning will depend entirely on primitive opera-
tors, making the system inefficient. Third, our approach
must be used in conjunction with other test case generation
techniques to adequately test the software as is generally
the case with most test case generators.
One of the tasks currently performed by the test designer
is the definition of the preconditions and effects of the op-
erators. Such definitions of commonly used operators can
be maintained in libraries, making this task easier. We are
also currently investigating how to automatically generate
the preconditions and effects of the operators from a GUI's
specifications.
VIII.
Acknowledgments
This research was partially supported by the Air Force
Office of Scientific Research (F49620-98-1-0436) and by
the National Science Foundation (IRI-9619579). Atif
Memon was partially supported by the Andrew Mellon Pre-doctoral
Fellowship.
We thank the anonymous reviewers of this article for
their comments and Brian Malloy for his valuable sugges-
tions. A preliminary version of the paper appeared in the
Proceedings of the 21st International Conference on Software
Engineering, Los Angeles, May 1999 [30].
--R
"Why are human-computer interfaces difficult to design and implement?,"
"Integrating the MVC paradigm into an object-oriented framework to accelerate GUI application development,"
"User interface software tools,"
"ADDI: A tool for automating the design of visual interfaces,"
"Regression testing of GUI event interactions,"
"Toward automatic generation of novice user test scripts,"
"User interface design in the trenches: Some tips on shooting from the hip,"
"Iterative user-interface design,"
"Interactive scenarios for the development of a user interface prototype,"
"User interface design and evaluation - application of the rapid prototyping tool EMSIG,"
"The role of domain-specific knowledge in the planning as satisfiability framework,"
"Java GUI testing,"
"Conditional nonlinear planning,"
"An introduction to least commitment planning,"
"Recent advances in AI planning,"
"Ex- tending planning graphs to an ADL subset,"
"Fast planning through planning graph analysis,"
"HTN planning: Complexity and expressivity,"
"Stress Tests For GUI Programs,"
"Inte- grated data capture and analysis tools for research and testing an graphical user interfaces,"
"The black art of GUI testing,"
"Automated test generation from a behavioral model,"
"Testing software design modeled by finite-state machines,"
"Automated test generation, execution, and reporting,"
"A reduced test suite for protocol conformance testing,"
"Redundancy identifi- cation/removal and test generation for sequential circuits using implicit state enumeration,"
"A method to automate user interface testing using variable finite state machines,"
"Test case generation as an AI planning problem,"
"Using a goal-driven approach to generate test cases for GUIs,"
--TR
--CTR
Yanhong Sun , Edward L. Jones, Specification-driven automated testing of GUI-based Java programs, Proceedings of the 42nd annual Southeast regional conference, April 02-03, 2004, Huntsville, Alabama
Christoph Csallner , Yannis Smaragdakis, JCrasher: an automatic robustness tester for Java, SoftwarePractice & Experience, v.34 n.11, p.1025-1050, September 2004
Atif M. Memon , Mary Lou Soffa , Martha E. Pollack, Coverage criteria for GUI testing, ACM SIGSOFT Software Engineering Notes, v.26 n.5, Sept. 2001
Fevzi Belli , Christof J. Budnik, Test minimization for human-computer interaction, Applied Intelligence, v.26 n.2, p.161-174, April 2007
Atif M. Memon , Mary Lou Soffa, Regression testing of GUIs, ACM SIGSOFT Software Engineering Notes, v.28 n.5, September
Jessica Chen , Suganthan Subramaniam, Specification-based Testing for Gui-based Applications, Software Quality Control, v.10 n.3, p.205-224, November 2002
Avik Sinha , Carol Smidts, An experimental evaluation of a higher-ordered-typed-functional specification-based test-generation technique, Empirical Software Engineering, v.11 n.2, p.173-202, June 2006
Anneliese K. Amschler Andrews , Chunhui Zhu , Michael Scheetz , Eric Dahlman , Adele E. Howe, AI Planner Assisted Test Generation, Software Quality Control, v.10 n.3, p.225-259, November 2002
model-based test design technique for enhanced testing of domain-specific applications, ACM Transactions on Software Engineering and Methodology (TOSEM), v.15 n.3, p.242-278, July 2006
Qing Xie , Atif M. Memon, Designing and comparing automated test oracles for GUI-based software applications, ACM Transactions on Software Engineering and Methodology (TOSEM), v.16 n.1, p.4-es, February 2007
Manish Gupta , Jicheng Fu , Farokh B. Bastani , Latifur R. Khan , I. -Ling Yen, Rapid goal-oriented automated software testing using MEA-graph planning, Software Quality Control, v.15 n.3, p.241-263, September 2007
Atif Memon , Adithya Nagarajan , Qing Xie, Automating regression testing for evolving GUI software: Research Articles, Journal of Software Maintenance and Evolution: Research and Practice, v.17 n.1, p.27-64, January 2005
Atif Memon , Adithya Nagarajan , Qing Xie, Automating regression testing for evolving GUI software, Journal of Software Maintenance: Research and Practice, v.17 n.1, p.27-64, January 2005 | GUI regression testing;automated test case generation;software testing;generating alternative plans;GUI testing;application of AI planning |
373425 | Robust Classification for Imprecise Environments. | In real-world environments it usually is difficult to specify target operating conditions precisely, for example, target misclassification costs. This uncertainty makes building robust classification systems problematic. We show that it is possible to build a hybrid classifier that will perform at least as well as the best available classifier for any target conditions. In some cases, the performance of the hybrid actually can surpass that of the best known classifier. This robust performance extends across a wide variety of comparison frameworks, including the optimization of metrics such as accuracy, expected cost, lift, precision, recall, and workforce utilization. The hybrid also is efficient to build, to store, and to update. The hybrid is based on a method for the comparison of classifier performance that is robust to imprecise class distributions and misclassification costs. The ROC convex hull (ROCCH) method combines techniques from ROC analysis, decision analysis and computational geometry, and adapts them to the particulars of analyzing learned classifiers. The method is efficient and incremental, minimizes the management of classifier performance data, and allows for clear visual comparisons and sensitivity analyses. Finally, we point to empirical evidence that a robust hybrid classifier indeed is needed for many real-world problems. | Introduction
Traditionally, classification systems have been built by experimenting with many different classifiers, comparing
their performance and choosing the best. Experimenting with different induction algorithms, parameter settings,
and training regimes yields a large number of classifiers to be evaluated and compared. Unfortunately, comparison
is often difficult in real-world environments because key parameters of the target environment are not known. For
example, the optimal cost/benefit tradeoffs and the target class priors seldom are known precisely, and often are
subject to change. For example, in fraud detection we cannot ignore either the cost or class distribution, nor can
we assume that our distribution specifications are precise or static (Fawcett & Provost, 1997). We need a method
for the management, comparison, and application of multiple classifiers that is robust to imprecise and changing
environments.
We describe the ROC convex hull (rocch) method, which combines techniques from ROC analysis, decision
analysis and computational geometry. The ROC convex hull decouples classifier performance from specific class
and cost distributions, and may be used to specify the subset of methods that are potentially optimal under any
combination of cost assumptions and class distribution assumptions. The rocch method is efficient, so it facilitates
the comparison of a large number of classifiers. It minimizes the management of classifier performance data because
it can specify exactly those classifiers that are potentially optimal, and it is incremental, easily incorporating new
and varied classifiers.
We demonstrate that it is possible and desirable to avoid complete commitment to a single best classifier during
system construction. Instead, the rocch can be used to build from the available classifiers a hybrid classification
system that will perform best under any target cost/benefit and class distributions. Target conditions can then be
specified at run time. Moreover, in cases where precise information is still unavailable when the system is run (or if
the conditions change dynamically during operation), the hybrid system can be tuned easily (and optimally) based
on feedback from its actual performance.
The paper is structured as follows. First we sketch briefly the traditional approach to building such systems,
in order to demonstrate that it is brittle under the types of imprecision common in real-world problems. We then
introduce and describe the rocch and its properties for comparing and visualizing classifier performance in imprecise
environments. In the following sections we formalize the notion of a robust classification system, and show that the
rocch is an elegant method for constructing one automatically. The solution is elegant because the resulting hybrid
classifier is robust for a wide variety of problem formulations, including the optimization of metrics such as accuracy,
expected cost, lift, precision, recall, and workforce utilization, and it is efficient to build, store and update. We then
show that the hybrid can actually do better than the best known classifier in some situations. Finally, by citing
results from empirical studies, we provide evidence that this type of system is needed.
1.1 An example
A systems-building team wants to create a system that will take a large number of instances and identify those for
which an action should be taken. The instances could be potential cases of fraudulent account behavior, of faulty
equipment, of responsive customers, of interesting science, etc. We consider problems for which the best method for
classifying or ranking instances is not well defined, so the system builders may consider machine learning methods,
neural networks, case-based systems, and hand-crafted knowledge bases as potential classification models. Ignoring
for the moment issues of efficiency, the foremost question facing the system builders is: which of the available models
performs "best" at classification?
Traditionally, an experimental approach has been taken to answer this question, because the distribution of
instances can be sampled if it is not known a priori. The standard approach is to estimate the error rate of each
model statistically and then to choose the model with the lowest error rate. This strategy is common in machine
learning, pattern recognition, data mining, expert systems and medical diagnosis. In some cases, other measures such
as cost or benefit are used as well. Applied statistics provides methods such as cross-validation and the bootstrap
for estimating model error rates and recent studies have compared the effectiveness of different methods (Salzberg,
1997; Kohavi, 1995; Dietterich, 1998).
Unfortunately, this experimental approach is brittle under two types of imprecision that are common in real-world
environments. Specifically, costs and benefits usually are not known precisely, and target class distributions often
are known only approximately as well. This observation has been made by many authors (Bradley, 1997; Catlett,
1995), and is in fact the concern of a large subfield of decision analysis (Weinstein & Fineberg, 1980). Imprecision
also arises because the environment may change between the time the system is conceived and the time it is used,
and even as it is used. For example, levels of fraud and levels of customer responsiveness change continually over
time and from place to place.
1.2 Basic terminology
In this paper we address two-class problems. Formally, each instance I is mapped to one element of the set fp; ng of
(correct) positive and negative classes. A classification model (or classifier) is a mapping from instances to predicted
classes. Some classification models produce a continuous output (e.g., an estimate of an instance's class membership
probability) to which different thresholds may be applied to predict class membership. To distinguish between the
actual class and the predicted class of an instance, we will use the labels fY; Ng for the classifications produced by a
model. For our discussion, let c(classification; class) be a two-place error cost function where c(Y; n) is the cost of a
false positive error and c(N; p) is the cost of a false negative error. 1 We represent class distributions by the classes'
prior probabilities p(p) and
The true positive rate, or hit rate, of a classifier is:
positives correctly classified
total positives
The false positive rate, or false alarm rate, of a classifier is:
negatives incorrectly classified
total negatives
The traditional experimental approach is brittle because it chooses one model as "best" with respect to a specific
set of cost functions and class distribution. If the target conditions change, this system may no longer perform
optimally, or even acceptably. As an example, assume that we have a maximum false positive rate FP , that must
not be exceeded. We want to find the classifier with the highest possible true positive rate, TP , that does not exceed
the FP limit. This is the Neyman-Pearson decision criterion (Egan, 1975). Three classifiers, under three such FP
limits, are shown in Figure 1. A different classifier is best for each FP limit; any system built with a single "best"
classifier is brittle if the FP requirement can change.
1 For this paper, consider error costs to include benefits not realized.
True
positive
rate
False positive rates
Figure
1: Three classifiers under three different Neyman-Pearson decision criteria
Evaluating and visualizing classifier performance
2.1 Classifier comparison: decision analysis and ROC analysis
Most prior work on building classifiers uses classification accuracy (or, equivalently, undifferentiated error rate) as
the primary evaluation metric. The use of accuracy assumes that the class priors in the target environment will be
constant and relatively balanced. In the real world this is rarely the case. Classifiers are often used to sift through
a large population of normal or uninteresting entities in order to find a relatively small number of unusual ones;
for example, looking for defrauded accounts among a large population of customers, screening medical tests for rare
diseases, and checking an assembly line for defective parts. Because the unusual or interesting class is rare among
the general population, the class distribution is very skewed (Ezawa, Singh, & Norton, 1996; Fawcett & Provost,
1997; Kubat, Holte, & Matwin, 1998; Saitta & Neri, 1998).
As the class distribution becomes more skewed, evaluation based on accuracy breaks down. Consider a domain
where the classes appear in a 999:1 ratio. A simple rule-always classify as the maximum likelihood class-gives
a 99.9% accuracy. This performance may be quite difficult for an induction algorithm to beat, though the simple
rule presumably is unacceptable if a non-trivial solution is sought. Skews of 10 2 are common in fraud detection and
reported in other applications (Clearwater & Stern, 1991).
Evaluation by classification accuracy also tacitly assumes equal In the real world
this is rarely the case because classifications tacitly involve actions, which have consequences. Actions can be as
diverse as cancelling a credit card account, moving a control surface on an airplane, or informing a patient of a cancer
diagnosis. These actions have consequences, sometimes grave, and performing an incorrect action can be very costly.
Rarely are the costs of mistakes equivalent. In mushroom classification, for example, judging a poisonous mushroom
to be edible is far worse than judging an edible mushroom to be poisonous. Indeed, it is hard to imagine a domain
in which a learning system may be indifferent to whether it makes a false positive or a false negative error. In such
cases, accuracy maximization should be replaced with cost minimization.
The problems of unequal error costs and uneven class distributions are related. It has been suggested that, for
learning, high-cost instances can be compensated for by increasing their prevalence in an instance set (Breiman,
Friedman, Olshen, & Stone, 1984). Unfortunately, little work has been published on either problem. There exist
several dozen articles in which techniques for cost-sensitive learning are suggested (Turney, 1996), but little is done
to evaluate and compare them (the article of Pazzani et al. (1994) being the exception). The literature provides even
less guidance in situations where distributions are imprecise or can change.
If a model produces an estimate of p(pjI), the posterior probability of an instance's class membership, as most
machine-learned models can, decision analysis gives us a way to produce cost-sensitive classifications (Weinstein &
Fineberg, 1980). Classifier error frequencies can be used to approximate probabilities (Pazzani et al., 1994). For an
instance I , the decision to emit a positive classification from a particular classifier is:
Regardless of whether a classifier produces probabilistic or binary classifications, its normalized cost on a test set
can be evaluated empirically as:
Most published work on cost-sensitive classification uses an equation such as this to rank classifiers. Given a set of
classifiers, a set of examples, and a precise cost function, each classifier's cost is computed and the minimum-cost
classifier is chosen. However, as discussed above, such analyses assume that the distributions are precise and static.
True
positive
rate
False positive rate
Classifier 3
Figure
2: ROC graph of three classifiers
More general comparisons can be made with Receiver Operating Characteristic (ROC) analysis, a classic methodology
from signal detection theory that is now common in medical diagnosis and has recently begun to be used more
generally in AI classifier work (Egan, 1975; Beck & Schultz, 1986; Swets, 1988). ROC graphs depict tradeoffs between
hit rate and false alarm rate.
We use the term ROC space to denote the coordinate system used for visualizing classifier performance. In ROC
space, TP is represented on the Y axis and FP is represented on the X axis. Each classifier is represented by the
point in ROC space corresponding to its For models that produce a continuous output, e.g., posterior
probabilities, TP and FP vary together as a threshold on the output is varied between its extremes (each threshold
defines a classifier); the resulting curve is called the ROC curve. An ROC curve illustrates the error tradeoffs available
with a given model. Figure 2 shows a graph of three typical ROC curves; in fact, these are the complete ROC curves
of the classifiers shown in Figure 1.
For orientation, several points on an ROC graph should be noted. The lower left point (0; 0) represents the
strategy of never alarming, the upper right point (1; 1) represents the strategy of always alarming, the point (0; 1)
represents perfect classification, and the line represents the strategy of randomly guessing the
class. Informally, one point in ROC space is better than another if it is to the northwest (TP is higher, FP is lower,
False Positive rate
True
Positive
rate
A
Figure
3: An ROC graph of four classifiers
or both). An ROC graph allows an informal visual comparison of a set of classifiers. In Figure 3, curve A is better
than curve D because it dominates in all points.
ROC graphs illustrate the behavior of a classifier without regard to class distribution or error cost, and so they
decouple classification performance from these factors. Unfortunately, while an ROC graph is a valuable visualization
technique, it does a poor job of aiding the choice of classifiers. Only when one classifier clearly dominates another
over the entire performance space can it be declared better. Consider the classifiers shown in Figure 3. Which is
best? The answer depends upon the performance requirements, i.e., the error costs and class distributions in effect
when the classifiers are to be used. Take a moment to convince yourself which classifiers in Figure 3 are optimal for
what conditions.
2.2 The ROC Convex Hull method
In this section we combine decision analysis with ROC analysis and adapt them for comparing the performance of a
set of learned classifiers. The method is based on three high-level principles. First, ROC space is used to separate
classification performance from class and cost distribution information. Second, decision-analytic information is
projected onto the ROC space. Third, the convex hull in ROC space is used to identify the subset of methods that
are potentially optimal.
2.2.1 Iso-performance lines
By separating classification performance from class and cost distribution assumptions, the decision goal can be
projected onto ROC space for a neat visualization. Specifically, the expected cost of applying the classifier represented
by a point space is:
Therefore, two points,
have the same performance if
c(N; p)p(p)
This equation defines the slope of an iso-performance line, i.e., all classifiers corresponding to points on the line
have the same expected cost. Each set of class and cost distributions defines a family of iso-performance lines.
Lines "more northwest"-having a larger TP -intercept-are better because they correspond to classifiers with lower
expected cost.
Because in most real-world cases the target distributions are not known precisely, it is valuable to be able
to identify what subset of classifiers is potentially optimal. Each possible set of distributions defines a family of
iso-performance lines, and for a given family, the optimal methods are those that lie on the "most-northwest" iso-
performance line. Thus, a classifier is potentially optimal if and only if it lies on the northwest boundary (i.e., above
the line of the convex hull (Barber, Dobkin, & Huhdanpaa, 1993) of the set of points in ROC space. 2
In Section 3 we provide a formal proof, but roughly one can see that if a point lies on the convex hull, then
there exists a line through that point such that no other line with the same slope through any other point has a
larger TP -intercept, and thus the classifier represented by the point is optimal under any distribution assumptions
corresponding the that slope. If a point does not lie on the convex hull, then for any family of iso-performance lines
there is another point that lies on an iso-performance line with the same slope but larger TP -intercept, and thus the
classifier cannot be optimal.
2 The convex hull of a set of points is the smallest convex set that contains the points.
False Positive rate
True
Positive
rate
A D
CH
Figure
4: The ROC convex hull identifies potentially optimal classifiers.
We call the convex hull of the set of points in ROC space the ROC convex hull (rocch) of the corresponding set
of classifiers. Figure 4 shows the curves of Figure 3 with the ROC convex hull drawn (CH, the border between the
shaded and unshaded areas). D is clearly not optimal. Surprisingly, B can never be optimal either because none of
the points of its ROC curve lies on the convex hull. We can also remove from consideration any points of A and C
that do not lie on the hull.
2.2.2 The ROC convex hull
Consider these classifiers under two distribution scenarios. In each, negative examples outnumber positives by 5:1.
In scenario A, false positive and false negative errors have equal cost. In scenario B, a false negative is 25 times as
expensive as a false positive (e.g., missing a case of fraud is much worse than a false alarm). Each scenario defines
a family of iso-performance lines. The lines corresponding to scenario A have slope 5; those for B have slope 1
Figure
5 shows the convex hull and two iso-performance lines, ff and fi. Line ff is the "best" line with slope 5 that
intersects the convex hull; line fi is the best line with slope 1that intersects the convex hull. Each line identifies the
optimal classifier under the given distribution.
Figure
6 shows the three ROC curves from our initial example, with the convex hull drawn.
A
False Positive rate
True
Positive
rate
a
Figure
5: Lines ff and fi show the optimal classifier under different sets of conditions.
2.2.3 Generating the ROC Convex Hull
The ROC convex hull method selects the potentially optimal classifiers based on the ROC convex hull and iso-
performance lines.
1. For each classifier, plot TP and FP in ROC space. For continuous-output classifiers, vary a threshold over the
output range and plot the ROC curve.
2. Find the convex hull of the set of points representing the predictive behavior of all classifiers of interest. For n
classifiers this can be done in O(n log(n)) time by the QuickHull algorithm (Barber et al., 1993).
3. For each set of class and cost distributions of interest, find the slope (or range of slopes) of the corresponding
iso-performance lines.
4. For each set of class and cost distributions, the optimal classifier will be the point on the convex hull that
intersects the iso-performance line with largest TP -intercept. Ranges of slopes specify hull segments.
Figures
4 and 5 demonstrate how the subset of classifiers that are potentially optimal can be identified and how
classifiers can be compared under different cost and class distributions. We now demonstrate additional benefits of
the method.
True
positive
rate
False positive rate
Classifier 3
Convex Hull
Figure
curves with convex hull
2.2.4 Comparing a variety of classifiers
The ROC convex hull method accommodates both binary and continuous classifiers. Binary classifiers are represented
by individual points in ROC space. Continuous classifiers produce numeric outputs to which thresholds can be
applied, yielding a series of comprising an ROC curve. Each point may or may not contribute to the
ROC convex hull. Figure 7 depicts the binary classifiers E, F and G added to the previous hull. E may be optimal
under some circumstances because it extents the convex hull. Classifiers F and G never will be optimal because they
do not extend the hull.
New classifiers can be added incrementally to an rocch analysis, as demonstrated in Figure 7 by the addition
of classifiers E,F, and G. Each new classifier either extends the existing hull or does not. In the former case the hull
must be updated accordingly, but in the latter case the new classifier can be ignored. Therefore, the method does not
require saving every classifier (or saving statistics on every classifier) for re-analysis under different conditions-only
those points on the convex hull. No other classifiers can ever be optimal, so they need not be saved. Every classifier
that does lie on the convex hull must be saved.
False Positive rate
A
True
Positive
rate
F
G
Figure
7: Classifier E may be optimal because it extends the ROC convex hull. F and G cannot because they do
not.
2.2.5 Changing distributions and costs
Class and cost distributions that change over time necessitate the reevaluation of classifier choice. In fraud detection,
costs change based on workforce and reimbursement issues; the amount of fraud changes monthly. With the ROC
convex hull method, comparing under a new distribution involves only calculating the slope(s) of the corresponding
iso-performance lines and intersecting them with the hull, as shown in Figure 5.
The ROC convex hull method scales gracefully to any degree of precision in specifying the cost and class distribu-
tions. If nothing is known about a distribution, the ROC convex hull shows all classifiers that may be optimal under
any conditions. Figure 4 showed that, given classifiers A, B, C and D of Figure 3, only A and C can ever be optimal.
With complete information, the method identifies the optimal classifier(s). In Figure 5 we saw that classifier A (with
a particular threshold value) is optimal under scenario A and classifier C is optimal under scenario B. Next we will
see that with less precise information, the ROC convex hull can show the subset of possibly optimal classifiers.
2.2.6 Sensitivity analysis
Imprecise distribution information defines a range of slopes for iso-performance lines. This range of slopes intersects
a segment of the ROC convex hull, which facilitates sensitivity analysis. For example, if the segment defined by a
False Positive rate
A
True
Positive
rate
False Positive rate
A
True
Positive
rate(b)
False Positive rate
True
Positive
rate a
A
(c)
Figure
8: Sensitivity analysis using the ROC convex hull: (a) low sensitivity (only C can be optimal), (b) high
sensitivity (A, E, or C can be optimal), (c) doing nothing is the optimal strategy
range of slopes corresponds to a single point in ROC space or a small threshold range for a single classifier, then there
is no sensitivity to the distribution assumptions in question. Consider a scenario similar to A and B in that negative
examples are 5 times as prevalent as positive ones. In this scenario, the cost of dealing with a false alarm is between
$10 and $20, and the cost of missing a positive example is between $200 and $250. This defines a range of slopes
for iso-performance lines: 1- m - 1. Figure 8a depicts this range of slopes and the corresponding segment of the
ROC convex hull. The figure shows that the choice of classifier is insensitive to changes within this range (and only
fine tuning of the classifier's threshold will be necessary). Figure 8b depicts a scenario with a wider range of slopes:2
3. The figure shows that under this scenario the choice of classifier is very sensitive to the distribution.
Classifiers A, C and E each are optimal for some subrange.
A particularly interesting question in any domain is, When is a "do nothing" strategy better than any of my
available classifiers? Consider Figure 8c. The point (0; 0) corresponds to doing nothing, i.e., issuing negative
classifications regardless of input. Any set of cost and class distribution assumptions for which the best hull-
intersecting iso-performance line passes through the origin (e.g., line ff) defines a scenario where this null strategy is
optimal. Intuitively, Figure 8c illustrates that false positives are so expensive (or negatives so prevalent) that neither
A nor C is good enough to be used. Improvements to A might allow it to beat the null strategy represented by ff,
but no modification to C is likely to have an effect.
Building robust classifiers
Up to this point, we have concentrated on the use of the rocch for visualizing and evaluating sets of classifiers. The
rocch helps to delay classifier selection as long as possible, yet provides a rich performance comparison. However,
once system-building incorporates a particular classifier, the problem of brittleness resurfaces. This is important
because the delay between system-building and deployment may be large, and because many systems must survive
for years. In fact, in many domains a precise, static specification of future costs and class distributions is not just
unlikely, it is impossible (Provost, Fawcett, & Kohavi, 1998).
We address this brittleness by using the rocch to produce robust classifiers, defined as satisfying the following.
Under any target cost and class distributions, a robust classifier will perform at least as well as the best classifier for
those conditions. Our statements about optimality are practical: the "best" classifier may not be the Bayes-optimal
classifier, but it is at least as good as any known classifier. Stating that a classifier is robust is stronger than stating
that it is optimal for a specific set of conditions. A robust classifier is optimal under all possible conditions.
In principle, classification brittleness could be overcome by saving all possible classifiers (neural nets, decision
trees, expert systems, probabilistic models, etc.) and then performing an automated run-time comparison under the
desired target conditions. However, such a system is not feasible because of time and space limitations-there are
myriad possible classification models, arising from the many different learning methods under their many different
parameter settings. Storing all the classifiers is not practical, and tuning the system by comparing classifiers on the
fly under different conditions is not practical. Fortunately, doing so is not necessary. Moreover, we will show that it
is sometimes possible to do better than any of these classifiers.
3.1 ROCCH-hybrid classifiers
We now show that robust hybrid classifiers can be built using the rocch.
I be the space of possible instances and let C be the space of sets of classification models. Let a
-hybrid classifier comprise a set of classification models C 2 C and a function
A -hybrid classifier takes as input an instance I 2 I for classification and a number x 2 !. As output, it produces
the classification produced by -(I ; x; C).
Things will get more involved later, but for the time being consider that each set of cost and class distributions
defines a value for x, which is used to select the (predetermined) best classifier for those conditions. To build a
-hybrid classifier, we must define - and the set C. We would like C to include only those models that perform
optimally under some conditions (class and cost distributions), since these will be stored by the system, and we
would like - to be general enough to apply to a variety of problem formulations.
The models comprising the rocch can be combined to form a -hybrid classifier that is an elegant, robust
classifier.
Definition 2 The rocch-hybrid is a -hybrid classifier where C is the set of classifiers that comprise the rocch
and - makes classifications using the classifier on the rocch with
Note that for the moment the rocch-hybrid is defined only for FP values corresponding to rocch vertices.
3.2 Robust classification
Our definition of robust classifiers was intentionally vague about what it means for one classifier to be better than
another, because different situations call for different comparison frameworks. We now continue with minimizing
expected cost, because the process of proving that the rocch-hybrid minimizes expected cost for any cost and class
distributions provides a deep understanding of why and how the rocch-hybrid works. Later we generalize to a wide
variety of comparison frameworks.
The rocch-hybrid can be seen as an application of multicriteria optimization to classifier design and construction.
The classifiers on the rocch are Edgeworth-Pareto optimal (Stadler, 1988) with respect to TP, FP, and the objective
functions we discuss. Multicriteria optimization was used previously in machine learning by Tcheng, Lambert, Lu
and Rendell for the selection of inductive bias (Tcheng, Lambert, Lu, & Rendell, 1989). Alternatively, the rocch
can be seen as an application of the theory of games and statistical decisions, for which convex sets (and the convex
3.2.1 Minimizing expected cost
From above, the expected cost of applying a classifier is:
For a particular set of cost and class distributions, the slope of the corresponding iso-performance lines is:
c(N; p)p(p) (2)
Every set of conditions will define an m ec - 0. We now can show that the rocch-hybrid is robust for problems
where the "best" classifier is the classifier with the minimum expected cost.
The slope of the rocch is an important tool in our argument. The rocch is a piecewise-linear, concave-down
"curve." Therefore, as x increases, the slope of the rocch is monotonically non-increasing with
where k is the number of rocch component classifiers, including the degenerate classifiers that define the rocch
endpoints. Where there will be no confusion, we use phrases such as "points in ROC space" as a shorthand for the
more cumbersome "classifiers corresponding to points in ROC space." For this subsection, "points on the rocch"
refer to vertices of the rocch.
Definition 3 For any real number m - 0, the point where the slope of the rocch is m is one of the (arbitrarily
chosen) endpoints of the segment of the rocch with slope m, if such a segment exists. Otherwise, it is the vertex for
which the left adjacent segment has slope greater than m and the right adjacent segment has slope less than m.
For completeness, the leftmost endpoint of the rocch is considered to be attached to a segment with infinite
slope and the rightmost endpoint of the rocch is considered to be attached to a segment with zero slope. Note that
every defines at least one point on the rocch.
Lemma 1 For any set of cost and class distributions, there is a point on the rocch with minimum expected cost.
Proof: (by contradiction) Assume that for some conditions there exists a point C with smaller expected cost than
any point on the rocch. By equations 1 and 2, a point
) has the same expected cost as a point
if
Therefore, for conditions corresponding to m ec , all points with equal expected cost form an iso-performance line in
ROC space with slope m ec . Also by 1 and 2, points on lines with larger y-intercept have lower expected cost. Now,
point C is not on the rocch, so it is either above the curve or below the curve. If it is above the curve, then
the rocch is not a convex set enclosing all points, which is a contradiction. If it is below the curve, then the iso-
performance line through C also contains a point P that is on the rocch. Since all points on an iso-performance
line have the same expected cost, point C does not have smaller expected cost than all points on the rocch, which is
also a contradiction. 2
Although it is not necessary for our purposes here, it can be shown that all of the minimum expected cost
classifiers are on the rocch.
Definition 4 An iso-performance line with slope m is an m-iso-performance line.
Lemma 2 For any cost and class distributions that translate to m ec , a point on the rocch has minimum expected
cost only if the slope of the rocch at that point is m ec .
Proof: (by contradiction) Suppose that there is a point D on the rocch where the slope is not m ec , but the point
does have minimum expected cost. By Definition 3, either (a) the segment to the left of D has slope less than m ec ,
or (b) the segment to the right of D has slope greater than m ec . For case (a), consider point N, the vertex of the
rocch that neighbors D to the left, and consider the (parallel) m ec -iso-performance lines l D and l N through D and
N. Because N is to the left of D and the line connecting them has slope less than m ec , the y-intercept of l N will be
greater than the y-intercept of l D . This means that N will have lower expected cost than D, which is a contradiction.
The argument for (b) is analogous (symmetric). 2
Lemma 3 If the slope of the rocch at a point is m ec , then the point has minimum expected cost.
Proof: If this point is the only point where the slope of the rocch is m ec , then the proof follows directly from Lemma
1 and Lemma 2. If there are multiple such points, then by definition they are connected by an m ec -iso-performance
line, so they have the same expected cost, and once again the proof follows directly from Lemma 1 and Lemma 2. 2
It is straightforward now to show that the rocch-hybrid is robust for the problem of minimizing expected cost.
Theorem 4 The rocch-hybrid minimizes expected cost for any cost distribution and any class distribution.
Proof: Because the rocch-hybrid is composed of the classifiers corresponding to the points on the rocch, this
follows directly from Lemmas 1, 2, and 3. 2
Now we have shown that the rocch-hybrid is robust when the goal is to provide the minimum expected cost
classification. This result is important even for accuracy maximization, because the preferred classifier may be
different for different target class distributions. This is rarely taken into account in experimental comparisons of
classifiers.
Corollary 5 The rocch-hybrid minimizes error rate (maximizes accuracy) for any target class distribution.
Proof: rate minimization is cost minimization with uniform error costs. 2
3.3 Robust classification for other common metrics
Showing that the rocch-hybrid is robust not only helps us with understanding the rocch method generally, it also
shows us how the rocch-hybrid will pick the best classifier in order to produce the best classifications, which we
will return to later. If we ignore the need to specify how to pick the best component classifier, we can show that the
rocch applies more generally.
Theorem 6 For any classifier evaluation metric f(FP; TP ), if @f
there exists a point on
the rocch with an f-value at least as high as that of any known classifier.
Proof: (by contradiction) Assume that there exists a classifier C o , not on the rocch, with an f-value higher than
that of any point on the rocch. C o is either (i) above or (ii) below the rocch. In case (i), the rocch is not a convex
set enclosing all the points, which is a contradiction. In case (ii), let C o be represented in ROC-space by
Because C o is below the rocch there exist points, call one on the rocch with TP p ? TP
However, by the restriction on the partial derivatives, for any such point f(FP
again is a contradiction. 2
True
positive
rate
False positive rate
Classifier 3
Hull
Neyman-Pearson
Figure
9: The ROC Convex Hull used to select a classifier under the Neyman-Pearson criterion
There are two complications to the more general use of the rocch, both of which are illustrated by the decision
criterion from our very first example. Recall that the Neyman-Pearson criterion specifies a maximum acceptable
FP rate. In standard ROC analysis, selecting the best classifier for the Neyman-Pearson criterion is easy: plot
ROC curves, draw a vertical line at the desired maximum FP , and pick the ROC curve with the largest TP at the
intersection with this line.
For minimizing expected cost it was sufficient for the rocch-hybrid to choose a vertex from the rocch for any
ec value. For problem formulations such as the Neyman-Pearson criterion, the performance statistics at a non-
vertex point on the rocch may be preferable (see Figure 9). Fortunately, with a slight extension, the rocch-hybrid
can yield a classifier with these performance statistics.
Theorem 7 An rocch-hybrid can achieve the TP :F P tradeoff represented by any point on the rocch, not just the
vertices.
Proof: (by construction) Extend -(I ; x; C) to non-vertex points as follows. Pick the point P on the rocch with
(there is exactly one). Let TP x be the TP value of this point. If (x, TP x ) is an rocch vertex, use the
corresponding classifier. If it is not a vertex, call the left endpoint of the hull segment C l and the right endpoint C r .
Let d be the distance between C l and C r , and let p be the distance between C l and P . Make classifications as follows.
For each input instance flip a weighted coin and choose the answer given by classifier C l with probability p
d and that
given by classifier C r with probability
d . It is straightforward to show that FP and TP for this classifier will be
x and TP x . 2
The second complication is that, as illustrated by the Neyman-Pearson criterion, many practical classifier comparison
frameworks include constrained optimization problems (below we will discuss other frameworks). Arbitrarily
constrained optimizations are problematic for the rocch-hybrid. Given total freedom, it is easy to place constraints
on classifier performance such that, even with the restriction on the partial derivatives, an interior point scores higher
than any acceptable point on the hull. For example, two linear constraints can enclose a subset of the interior and
exclude the entire rocch-there will be no acceptable points on the rocch. However, many realistic constraints do
not thwart the optimality of the rocch-hybrid.
Theorem 8 For any classifier evaluation metric f(FP; TP ), if @f
no constraint on classifier
performance eliminates any point on the rocch without also eliminating all higher-scoring interior points, then the
rocch-hybrid can perform at least as well as any known classifier.
Proof: Follows directly from Theorem 6 and Theorem 7. 2
Linear constraints on classifiers' FP : TP performance are common for real-world problems, so the following is
useful.
Theorem 9 For any classifier evaluation metric f(FP; TP ), if @f
there is a single constraint
on classifier performance of the form a with a and b non-negative, then the rocch-hybrid can
perform at least as well as any known classifier.
Proof: The single constraint eliminates from contention all points (classifiers) that do not fall to the left of, or below,
a line with non-positive slope. By the restriction on the partial derivatives, such a constraint will not eliminate a
point on the rocch without also eliminating all interior points with higher f-values. Thus, the proof follows directly
from Theorem 8. 2
So, finally, we have the following.
Corollary 10 For the Neyman-Pearson criterion, the rocch-hybrid can perform at least as well as that of any
known classifier.
Proof: For the Neyman-Pearson criterion, the evaluation metric higher TP is better.
The constraint on classifier performance is FP - FPmax . These satisfy the conditions for Theorem 9, and therefore
this corollary follows. 2
All the foregoing effort may seem misplaced for a simple criterion like Neyman-Pearson. However, there are many
other realistic problem formulations. For example, consider the decision-support problem of optimizing workforce
utilization, in which a workforce is available that can process a fixed number of cases. Too few cases will under-utilize
the workforce, but too many cases will leave some cases unattended (expanding the workforce usually is not a short-term
solution). If the workforce can handle C cases, the system should present the best possible set of C cases. This
is similar to the Neyman-Pearson criterion, but with an absolute cutoff (C) instead of a percentage cutoff
Theorem 11 For workforce utilization, the rocch-hybrid will provide the best set of C cases, for any choice of C.
Proof: (by construction) The decision criterion is to maximize TP subject to the constraint:
The theorem therefore follows from Theorem 9. 2
In fact, many screening problems, such as are found in marketing and information retrieval, use exactly this linear
constraint, so it follows that for maximizing lift (Berry & Linoff, 1997), precision or recall, subject to absolute or
percentage cutoffs on case presentation, the rocch-hybrid will provide the best set of cases.
As with minimizing expected cost, imprecision in the environment forces us to favor a robust solution for these
other comparison frameworks. For many real-world problems, the precise desired cutoff will be unknown or will
change (e.g., because of fundamental uncertainty, variability in case difficulty or competing responsibilities). What
is worse, for a fixed (absolute) cutoff merely changing the size of the universe of cases (e.g., the size of a document
corpus) may change the preferred classifier, because it will change the constraint line. The rocch-hybrid provides
a robust solution because it gives the optimal subset of cases for any constraint line. For example, for document
retrieval the rocch-hybrid will yield the best N documents for any N , for any prior class distribution (in the target
corpus), and for any target corpus size.
3.4 Ranking cases
An apparent solution to the problem of robust classification is to use a system that ranks cases, rather than one
that provides classifications, and just work down the ranked list (the cutoff is implicit). However, for most practical
situations, choosing the best ranking model is equivalent to choosing which classifier is best for the cutoff that will be
used. Remember that ROC curves are formed from case rankings by moving the cutoff from one extreme to the other.
For different cutoffs, implicit or explicit, different ranking functions perform better. This is exactly the problem of
robust classification, and is solved elegantly by the rocch-hybrid-the rocch-hybrid comprises the set rankers that
are best for all possible cutoffs. As an example, consider two ranking functions R a and R b . R a is perfect for the first
cases, and picks randomly thereafter. R b randomly chooses the first 10 cases, and ranks perfectly thereafter. R a
is preferable for a cutoff of 10 cases and R b is preferable for much larger cutoffs.
Whole-curve metrics
In situations where either the target cost distribution or class distribution is completely unknown, some researchers
advocate choosing the classifier that maximizes a single-number metric representing the average performance over
the entire curve. A common whole-curve metric is the area under the ROC curve (AUC) (Bradley, 1997). The
AUC is equivalent to the probability that a randomly chosen positive instance will be rated higher than a negative
instance, and thereby is also estimated by the Wilcoxon test of ranks (Hanley & McNeil, 1982). A criticism of AUC
is that for specific target conditions the classifier with the maximum AUC may be suboptimal (Provost et al., 1998)
(this criticism may be made of any single-number metric). Fortunately, not only is the rocch-hybrid optimal for
any specific target conditions, it has the maximum AUC.
Theorem 12 There is no classifier with AUC larger than that of the rocch-hybrid.
Proof: (by contradiction) Assume the ROC curve for another classifier had larger area. This curve would have to
have at least one point in ROC-space that falls outside the area enclosed by the rocch. This means that the convex
hull does not enclose all points, which is a contradiction. 2
3.6 Using the ROCCH-hybrid
To use the rocch-hybrid for classification, we need to translate environmental conditions to x values to plug into
C). For minimizing expected cost, Equation 2 shows how to translate conditions to m ec . For any m ec , by
Lemma 3 we want the FP value of the point where the slope of the rocch is m ec , which is straightforward to
calculate. For the Neyman-Pearson criterion the conditions are defined as FP values. For workforce utilization with
conditions corresponding to a cutoff C, the FP value is found by intersecting the line TP
the rocch.
We have argued that target conditions (misclassification costs and class distribution) are rarely known. It may
be confusing that we now seem to require exact knowledge of these conditions. The rocch-hybrid gives us two
important capabilities. First, the need for precise knowledge of target conditions is deferred until run time. Second,
in the absence of precise knowledge even at run time, the system can be optimized easily with minimal feedback.
By using the rocch-hybrid, information on target conditions is not needed to train and compare classifiers. This
is important because of imprecision caused by temporal, geographic, or other differences that may exist between
training and use. For example, building a system for a real-world problem introduces a non-trivial delay between the
time data are gathered and the time the learned models will be used. The problem is exacerbated in domains where
error costs or class distributions change over time; even with slow drift, a brittle model may become suboptimal
quickly. In many such scenarios, costs and class distributions can be specified (or respecified) at run time with
reasonable precision by sampling from the current population, and used to ensure that the rocch-hybrid always
performs optimally.
In some cases, even at run time these quantities are not known exactly. A further benefit of the rocch-hybrid
is that it can be tuned easily to yield optimal performance with only minimal feedback from the environment.
Conceptually, the rocch-hybrid has one "knob" that varies x in -(I ; x; C) from one extreme to the other. For any
knob setting, the rocch-hybrid will give the optimal TP :F P tradeoff for the target conditions corresponding to
that setting. Turning the knob to the right increases TP ; turning the knob to the left decreases FP . Because of
the monotonicity of the rocch-hybrid, simple hill-climbing can guarantee optimal performance. For example, if the
system produces too many false alarms, turn the knob to the left; if the system is presenting too few cases, turn the
knob to the right.
3.7 Beating the component classifiers
Perhaps surprisingly, in many realistic situations an rocch-hybrid system can do better than any of its component
classifiers. Consider the Neyman-Pearson decision criterion. The rocch may intersect the FP -line above the highest
component ROC curve. This occurs when the FP -line intersects the rocch between vertices; therefore, there is no
component classifier that actually produces these particular statistics, as in Figure 9.
Theorem 13 The rocch-hybrid can surpass the performance of its component classifiers for some Neyman-Pearson
problems.
Proof: For any non-vertex hull point (x,T P x ), TP x is larger than the TP for any other point with
Theorem 7, the rocch-hybrid can achieve any TP on the hull. Only a small number of FP values correspond to
hull vertices. 2
The same holds for other common problem formulations, such as workforce utilization, lift maximization, precision
maximization, and recall maximization.
3.8 Time and space efficiency
We have argued that the rocch-hybrid is robust for a wide variety of problem formulations. It is also efficient to
build, to store, and to update.
The time efficiency of building the rocch-hybrid depends first on the efficiency of building the component models,
which varies widely by model type. Some models built by machine learning methods can be built in seconds (once
data are available). Hand-built models can take years to build. However, we presume that this is work that would be
done anyway. The rocch-hybrid can be built with whatever methods are available, be there two or two thousand.
As described below, as new classifiers become available, the rocch-hybrid can be updated incrementally. The time
efficiency depends also on the efficiency of the experimental evaluation of the classifiers. Once again, we presume
that this is work that would be done anyway (more on this in Limitations). Finally, the time efficiency of the
rocch-hybrid depends on the efficiency of building the rocch, which can be done in O(N log N) time using the
QuickHull algorithm (Barber et al., 1993) where N is the number of classifiers.
The rocch is space efficient, too, because it comprises only classifiers that might be optimal under some target
conditions.
Theorem 14 For minimizing expected cost, the rocch-hybrid comprises only classifiers that are optimal under some
cost and class distributions.
Proof: Follows directly from Lemmas 1-3 and Definitions 3 and 4. 2
The number of classifiers that must be stored can be reduced if bounds can be placed on the potential target
conditions. As described above, ranges of conditions define segments of the rocch. Thus, the rocch-hybrid may
need only a subset of C.
Adding new classifiers to the rocch-hybrid also is efficient. Adding a classifier to the rocch will either (i) extend
the hull, adding to (and possibly subtracting from) the rocch-hybrid, or (ii) conclude that the new classifiers are
not superior to the existing classifiers in any portion of ROC space and can be discarded.
The run-time (classification) complexity of the rocch-hybrid is never worse than that of the component classifiers.
In situations where run-time complexity is crucial, the rocch should be constructed without prohibitively expensive
classification models. It will then find the best subset of the computationally efficient models.
Empirical demonstration of need
Robust classification is of fundamental interest because it weakens two very strong assumptions: the availability of
precise knowledge of costs and of class distributions. However, might it not be that existing classifiers are already
robust? For example, if a given classifier is optimal under one set of conditions, might it not be optimal under all?
It is beyond the scope of this paper to offer an in-depth experimental study answering this question. However,
we can provide solid evidence that the answer is "no" by referring to the results of two prior studies. One is a
comprehensive ROC analysis of medical domains recently conducted by Andrew Bradley (1997). 3 The other is a
published ROC analysis of UCI database domains that we undertook last year with Ronny Kohavi (Provost et al.,
1998).
Note that a classifier dominates if its ROC curve completely defines the rocch (which means dominating classifiers
are robust and vice versa). Therefore, if there exist more than a trivially few domains where no classifier dominates,
then techniques like the rocch-hybrid are essential.
3 His purpose was not to answer this question; fortunately his published results do anyway.
Figure 10: Bradley's classifier results for the heart bleeding data. ROC curves (true positive rate vs. false positive rate) of the Bayes, K-NN, MSC, and Perceptron classifiers.
4.1 Bradley's study
Bradley studied six medical data sets, noting that "unfortunately, we rarely know what the individual misclassification
costs are." He plotted the ROC curves of six classifier learning algorithms (two neural nets, two decision trees and
two statistical techniques).
On not one of these data sets was there a dominating classifier. This means that for each domain, there exist
different sets of conditions for which different classifiers are preferable. In fact, our running example is based on the
three best classifiers from Bradley's results on the heart bleeding data; his results for the full set of six classifiers can
be found in Figure 10. Classifiers constructed for the Cleveland heart disease data are shown in Figure 11.
Bradley's results show clearly that for many domains the classifier that maximizes any single metric (be it
accuracy, cost, or the area under the ROC curve) will be the best for some cost and class distributions and will not
be the best for others. We have shown that the rocch-hybrid will be the best for all.
4.2 Our study
In the study we performed with Ronny Kohavi, we chose ten datasets from the UCI repository that contained at
least 250 instances, but for which the accuracy for decision trees was less than 95%. For each domain, we induced
classifiers for the minority class (for Road, we chose the class Grass). We selected several induction algorithms from
MLC++ (Kohavi, Sommerfield, & Dougherty, 1997): a decision tree learner (MC4), Naive Bayes with discretization
(NB), k-nearest neighbor for several k values (IBk), and Bagged-MC4 (Breiman, 1996). MC4 is similar to C4.5
Figure 11: Bradley's classifier results for the Cleveland heart disease data. ROC curves (true positive rate vs. false positive rate) of the Bayes, K-NN, MSC, and Perceptron classifiers.
(Quinlan, 1993); probabilistic predictions are made by using a Laplace correction at the leaves. NB discretizes the
data based on entropy minimization (Dougherty, Kohavi, & Sahami, 1995) and then builds the Naive-Bayes model
(Domingos & Pazzani, 1997). IBk votes the closest k neighbors; each neighbor votes with a weight equal to one over
its distance from the test instance.
Some of the ROC curves are shown in Figure 12. For only one (Vehicle) of these ten domains was there an
absolute dominator. In general, very few of the 100 runs performed (on 10 data sets, using 10 cross-validation folds
each) exhibited dominating classifiers. Some cases are very close, for example Adult and Waveform-21. In other cases a
curve that dominates in one area of ROC space is dominated in another. These results also support the need for
methods like the rocch-hybrid, which produce robust classifiers.
As examples of what expected-cost-minimizing rocch-hybrids would look like internally, Table 1 shows the
component classifiers that make up the rocch for the four UCI domains of Figure 12. For example, in the Road
domain (see Figure 12 and Table 1), Naive Bayes would be chosen for any target conditions corresponding to a slope
less than 0.38, and Bagged-MC4 would be chosen for slopes greater than 0.38. They perform equally well at 0.38.
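As a small illustration of how such a table would be used at run time (our sketch; the slope formula is the standard iso-performance-line slope and the helper names are ours), one computes the slope from the target conditions and looks up the locally dominating classifier:

```python
def cost_slope(p_pos, c_fn, c_fp):
    """Iso-performance-line slope for class prior p_pos and error costs:
    (p(negative) * cost(false positive)) / (p(positive) * cost(false negative))."""
    return ((1.0 - p_pos) * c_fp) / (p_pos * c_fn)

# Slope ranges and locally dominating classifiers for the Road domain, as in
# Table 1 (the open-ended upper range follows the text above).
ROAD_SEGMENTS = [((0.0, 0.38), "NB"), ((0.38, float("inf")), "Bagged-MC4")]

def pick_classifier(slope, segments=ROAD_SEGMENTS):
    for (lo, hi), name in segments:
        if lo <= slope <= hi:
            return name
    return None
```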
5 Limitations and future work
There are limitations to the rocch method as we have presented it here. We have defined it only for two-class
problems. We believe that it can be extended to multi-class problems, but have not yet done so formally. It should
be noted that the dimensionality of the "ROC-hyperspace" grows quadratically in the number of classes. We have
Table 1: Locally dominating classifiers for four UCI domains

Domain    Slope range    Dominator      Domain     Slope range     Dominator
Vehicle   [0, 1)         Bagged-MC4     Satimage   [0, 0.05]       NB
Road      [0, 0.38]      NB                        [0.05, 0.22]    Bagged-MC4
CRX       [0, 0.03]      Bagged-MC4                [2.60, 3.11]    IB3
          [0.06, 2.06]   Bagged-MC4                [7.54, 31.14]   IB3
also assumed constant error costs for a given type of error, e.g., all false positives cost the same. For some problems,
different errors of the same type have different costs. In many cases, such a problem can be transformed for evaluation
into an equivalent problem with uniform intra-type error costs by duplicating instances in proportion to their costs
(or by simply modifying the counting procedure accordingly).
We have also assumed for this paper that the estimates of the classifiers' performance statistics (FP and TP ) are
very good. As mentioned above, much work has addressed the production of good estimates for simple performance
statistics such as error rate. Much less work has addressed the production of good ROC curve estimates. As with
simpler statistics, care should be taken to avoid over-fitting the training data and to ensure that differences between
ROC curves are meaningful. One solution is to use cross-validation with averaging of ROC curves (Provost et al.,
1998), which is the procedure used to produce the ROC curves in Section 4.2. To our knowledge, the issue of how
best to produce confidence bands appropriate to a particular problem remains open. Those shown in Section 4.2 are
appropriate for the Neyman-Pearson decision criterion (i.e., they show confidence on TP for various values of FP ).
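For concreteness, here is one simple way to average per-fold ROC curves vertically (mean and spread of TP at fixed FP values); this is a sketch of the general idea, not necessarily the exact averaging procedure of Provost et al. (1998):

```python
import numpy as np

def vertical_average_roc(fold_curves, grid=None):
    """Vertically average per-fold ROC curves.

    fold_curves: list of (fp, tp) array pairs, each sorted by increasing fp.
    Returns (grid, mean_tp, std_tp); std_tp gives a simple confidence band on
    TP at each FP, in the Neyman-Pearson style described above."""
    if grid is None:
        grid = np.linspace(0.0, 1.0, 101)
    tps = np.array([np.interp(grid, fp, tp) for fp, tp in fold_curves])
    return grid, tps.mean(axis=0), tps.std(axis=0)
```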
Also, we have addressed predictive performance and computational performance. These are not the only concerns
in choosing a classification model. What if comprehensibility is important? The easy answer is that for any particular
setting, the rocch-hybrid is as comprehensible as the underlying model it is using. However, this answer falls short
if the rocch-hybrid is interpolating between two models or if one wants to understand the "multiple-model" system
as a whole.
This work is fundamentally different from other recent machine learning work on combining multiple models (Ali,
1996). That work combines models in order to boost performance for a fixed cost and class distribution.
The rocch-hybrid combines models for robustness across different cost and class distributions. In principle, these
methods should be independent-multiple-model classifiers are candidates for extending the rocch. However, it
may be that some multiple-model classifiers achieve increased performance for a specific set of conditions by (in effect)
interpolating along edges of the rocch.
The rocch method also complements research on cost-sensitive learning (Turney, 1996). Existing cost-sensitive
learning methods are brittle with respect to imprecise cost knowledge. Thus, the rocch is an essential evaluation
tool. Furthermore, cost-sensitive learning may be used to find better components for the rocch-hybrid, by searching
explicitly for classifiers that extend the rocch.
6 Conclusion
The ROC convex hull method is a robust, efficient solution to the problem of comparing multiple classifiers in
imprecise and changing environments. It is intuitive, can compare classifiers both in general and under specific
distribution assumptions, and provides crisp visualizations. It minimizes the management of classifier performance
data, by selecting exactly those classifiers that are potentially optimal; thus, only these need to be saved in preparation
for changing conditions. Moreover, due to its incremental nature, new classifiers can be incorporated easily, e.g.,
when trying a new parameter setting.
The rocch-hybrid performs optimally under any target conditions for many realistic problem formulations,
including the optimization of metrics such as accuracy, expected cost, lift, precision, recall, and workforce utilization.
It is efficient to build in terms of time and space, and can be updated incrementally. Furthermore, it can sometimes
classify better than any (other) known model. Therefore, we conclude that it is an elegant, robust classification
system.
We believe that this work has important implications for both machine learning applications and machine learning
research (Provost et al., 1998). For applications, it helps free system designers from the need to choose (sometimes
arbitrary) comparison metrics before precise knowledge of key evaluation parameters is available. Indeed, such
knowledge may never be available, yet robust systems can still be built.
For machine learning research, it frees researchers from the need to have precise class and cost distribution
information in order to study important related phenomena. In particular, work on cost-sensitive learning has been
impeded by the difficulty of specifying costs, and by the tenuous nature of conclusions based on a single cost metric.
Researchers need not be held back by either. Cost-sensitive learning can be studied generally without specifying costs
precisely. The same goes for research on learning with highly skewed distributions. Which methods are effective for
which levels of distribution skew? The rocch will provide a detailed answer.
Note: An implementation of the rocch method in Perl is publicly available. The code and related papers may
be found at: http://www.croftj.net/~fawcett/ROCCH/.
Acknowledgments
We thank the many with whom we have discussed ROC analysis and classifier comparison, especially Rob Holte,
George John, Ron Kohavi, Ron Rymon, and Peter Turney. We thank Andrew Bradley for supplying data from his
analysis.
--R
reduction through learning multiple descriptions.
The quickhull algorithm for convex hull.
The use of ROC curves in test performance evaluation.
Data Mining Techniques: For Marketing
Theory of Games and Statistical Decisions.
Republished by Dover Publications
The use of the area under the ROC curve in the evaluation of machine learning algorithms.
Classification and regression trees.
Bagging predictors.
Tailoring rulesets to misclassification costs.
Approximate statistical tests for comparing supervised classification learning algorithms.
Neural Computation
Beyond independence: Conditions for the optimality of the simple Bayesian classifier.
Supervised and unsupervised discretization of continuous features.
In Prieditis
Learning goal oriented bayesian networks for telecommunications risk management.
Adaptive fraud detection.
Available as http://www.
The meaning and use of the area under a receiver operating characteristic (roc) curve.
A study of cross-validation and bootstrap for accuracy estimation and model selection
http://robotics.
Data mining using MLC
Machine learning for the detection of oil spills in satellite radar images.
Reducing misclassification costs.
The case against accuracy estimation for comparing induction algorithms.
"real world"
On comparing classifiers: Pitfalls to avoid and a recommended approach.
Measuring the accuracy of diagnostic systems.
Building robust learning systems by combining induction and optimization.
Cost sensitive learning bibliography.
Clinical Decision Analysis.
--TR
C4.5: programs for machine learning
Bagging predictors
The quickhull algorithm for convex hulls
reduction through learning multiple descriptions
On the Optimality of the Simple Bayesian Classifier under Zero-One Loss
Learning in the "Real World"
Machine Learning for the Detection of Oil Spills in Satellite Radar Images
Approximate statistical tests for comparing supervised classification learning algorithms
Activity monitoring
MetaCost
Explicitly representing expected cost
Data Mining Techniques
Adaptive Fraud Detection
On Comparing Classifiers
The Case against Accuracy Estimation for Comparing Induction Algorithms
Detecting Concept Drift with Support Vector Machines
--CTR
Jigang Xie , Zhengding Qiu , Zhenjiang Miao , Yanqiang Zhang, Bootstrap FDA for counting positives accurately in imprecise environments, Pattern Recognition, v.40 n.11, p.3292-3298, November, 2007
Anna Olecka, Evaluating classifiers' performance in a constrained environment, Proceedings of the eighth ACM SIGKDD international conference on Knowledge discovery and data mining, July 23-26, 2002, Edmonton, Alberta, Canada
Tom Fawcett , Peter A. Flach, A response to Webb and Ting's on the application of ROC analysis to predict classification performance under varying class distributions, Machine Learning, v.58 n.1, p.33-38, January 2005
Tom Fawcett , Alexandru Niculescu-Mizil, PAV and the ROC convex hull, Machine Learning, v.68 n.1, p.97-106, July 2007
Sven F. Crone , Stefan Lessmann , Robert Stahlbock, Utility based data mining for time series analysis: cost-sensitive learning for neural network predictors, Proceedings of the 1st international workshop on Utility-based data mining, p.59-68, August 21-21, 2005, Chicago, Illinois
Reuven Arbel , Lior Rokach, Classifier evaluation under limited resources, Pattern Recognition Letters, v.27 n.14, p.1619-1631, 15 October 2006
Geoffrey I. Webb , Kai Ming Ting, On the application of ROC analysis to predict classification performance under varying class distributions, Machine Learning, v.58 n.1, p.25-32, January 2005
Lian Yan , Michael Fassino , Patrick Baldasare, Enhancing the lift under budget constraints: an application in the mutual fund industry, Proceeding of the eleventh ACM SIGKDD international conference on Knowledge discovery in data mining, August 21-24, 2005, Chicago, Illinois, USA
César Ferri , Peter Flach , José Hernández-Orallo, Delegating classifiers, Proceedings of the twenty-first international conference on Machine learning, p.37, July 04-08, 2004, Banff, Alberta, Canada
Steven N. Thorsen , Mark E. Oxley, A description of competing fusion systems, Information Fusion, v.7 n.4, p.346-360, December, 2006
José María Gómez Hidalgo , Guillermo Cajigas Bringas , Enrique Puertas Sánz , Francisco Carrero García, Content based SMS spam filtering, Proceedings of the 2006 ACM symposium on Document engineering, October 10-13, 2006, Amsterdam, The Netherlands
José María Gómez Hidalgo, Evaluating cost-sensitive Unsolicited Bulk Email categorization, Proceedings of the 2002 ACM symposium on Applied computing, March 11-14, 2002, Madrid, Spain
Tom Fawcett, ROC graphs with instance-varying costs, Pattern Recognition Letters, v.27 n.8, p.882-891, June 2006
Exploiting AUC for optimal linear combinations of dichotomizers, Pattern Recognition Letters, v.27 n.8, p.900-907, June 2006
Zhiqiang Zheng , Balaji Padmanabhan , Haoqiang Zheng, A DEA approach for model combination, Proceedings of the tenth ACM SIGKDD international conference on Knowledge discovery and data mining, August 22-25, 2004, Seattle, WA, USA
César Ferri, Multi-paradigm learning of declarative models: Thesis, AI Communications, v.17 n.2, p.95-97, April 2004
Rich Caruana , Mohamed Elhawary , Art Munson , Mirek Riedewald , Daria Sorokina , Daniel Fink , Wesley M. Hochachka , Steve Kelling, Mining citizen science data to predict prevalence of wild bird species, Proceedings of the 12th ACM SIGKDD international conference on Knowledge discovery and data mining, August 20-23, 2006, Philadelphia, PA, USA
David Jensen , Matthew Rattigan , Hannah Blau, Information awareness: a prospective technical assessment, Proceedings of the ninth ACM SIGKDD international conference on Knowledge discovery and data mining, August 24-27, 2003, Washington, D.C.
Thomas C. W. Landgrebe , David M. J. Tax , Pavel Paclk , Robert P. W. Duin, The interaction between classification and reject performance for distance-based reject-option classifiers, Pattern Recognition Letters, v.27 n.8, p.908-917, June 2006
Katia Kermanidis , Manolis Maragoudakis , Nikos Fakotakis , George Kokkinakis, Learning Greek verb complements: addressing the class imbalance, Proceedings of the 20th international conference on Computational Linguistics, p.1065-es, August 23-27, 2004, Geneva, Switzerland
Francesco Tortorella, A ROC-based reject rule for dichotomizers, Pattern Recognition Letters, v.26 n.2, p.167-180, 15 January 2005
Bianca Zadrozny , Charles Elkan, Learning and making decisions when costs and probabilities are both unknown, Proceedings of the seventh ACM SIGKDD international conference on Knowledge discovery and data mining, p.204-213, August 26-29, 2001, San Francisco, California
Dirk Ourston , Sara Matzner , William Stump , Bryan Hopkins, Coordinated internet attacks: responding to attack complexity, Journal of Computer Security, v.12 n.2, p.165-190, May 2004
Gerhard Widmer, Discovering simple rules in complex data: a meta-learning algorithm and some surprising musical discoveries, Artificial Intelligence, v.146 n.2, p.129-148, June
Chao-Ton Su , Long-Sheng Chen , Tai-Lin Chiang, A neural network based information granulation approach to shorten the cellular phone test process, Computers in Industry, v.57 n.5, p.412-423, June 2006
Francis R. Bach , David Heckerman , Eric Horvitz, Considering Cost Asymmetry in Learning Classifiers, The Journal of Machine Learning Research, 7, p.1713-1741, 12/1/2006
Tadeusz Pietraszek, On the use of ROC analysis for the optimization of abstaining classifiers, Machine Learning, v.68 n.2, p.137-169, August 2007
Juan Jos Garca Adeva , Juan Manuel Pikatza Atxa, Intrusion detection in web applications using text mining, Engineering Applications of Artificial Intelligence, v.20 n.4, p.555-566, June, 2007
Jerzy W. Grzymala-Busse , Linda K. Goodwin , Xiaohui Zhang, Increasing sensitivity of preterm birth by changing rule strengths, Pattern Recognition Letters, v.24 n.6, p.903-910, March
Nilesh Dalvi , Pedro Domingos , Mausam , Sumit Sanghai , Deepak Verma, Adversarial classification, Proceedings of the tenth ACM SIGKDD international conference on Knowledge discovery and data mining, August 22-25, 2004, Seattle, WA, USA
Paul N. Bennett , Susan T. Dumais , Eric Horvitz, Probabilistic combination of text classifiers using reliability indicators: models and results, Proceedings of the 25th annual international ACM SIGIR conference on Research and development in information retrieval, August 11-15, 2002, Tampere, Finland
Ashwin Srinivasan, Extracting Context-Sensitive Models in Inductive Logic Programming, Machine Learning, v.44 n.3, p.301-324, September 2001
Tom Fawcett, An introduction to ROC analysis, Pattern Recognition Letters, v.27 n.8, p.861-874, June 2006
Stijn Viaene , Bart Baesens , Guido Dedene , Jan Vanthienen , Dirk Van den Poel, Proof running two state-of-the-art pattern recognition techniques in the field of direct marketing, Enterprise information systems IV, Kluwer Academic Publishers, Hingham, MA,
Sofus A. Macskassy , Foster Provost, Intelligent information triage, Proceedings of the 24th annual international ACM SIGIR conference on Research and development in information retrieval, p.318-326, September 2001, New Orleans, Louisiana, United States
Daniel Grossman , Pedro Domingos, Learning Bayesian network classifiers by maximizing conditional likelihood, Proceedings of the twenty-first international conference on Machine learning, p.46, July 04-08, 2004, Banff, Alberta, Canada
Tom Fawcett, "In vivo" spam filtering: a challenge problem for KDD, ACM SIGKDD Explorations Newsletter, v.5 n.2, December
Tamás Horváth , Thomas Gärtner , Stefan Wrobel, Cyclic pattern kernels for predictive graph mining, Proceedings of the tenth ACM SIGKDD international conference on Knowledge discovery and data mining, August 22-25, 2004, Seattle, WA, USA
Aleksandar Lazarevic , Vipin Kumar, Feature bagging for outlier detection, Proceeding of the eleventh ACM SIGKDD international conference on Knowledge discovery in data mining, August 21-24, 2005, Chicago, Illinois, USA
Aleksander Kolcz, Local sparsity control for naive Bayes with extreme misclassification costs, Proceeding of the eleventh ACM SIGKDD international conference on Knowledge discovery in data mining, August 21-24, 2005, Chicago, Illinois, USA
Rocío Alaiz-Rodríguez , Alicia Guerrero-Curieses , Jesús Cid-Sueiro, Minimax Regret Classifier for Imprecise Class Distributions, The Journal of Machine Learning Research, 8, p.103-130, 5/1/2007
Mansoor J. Zolghadri , Eghbal G. Mansoori, Weighting fuzzy classification rules using receiver operating characteristics (ROC) analysis, Information Sciences: an International Journal, v.177 n.11, p.2296-2307, June, 2007
Anneleen Assche , Celine Vens , Hendrik Blockeel , Sašo Džeroski, First order random forests: Learning relational classifiers with complex aggregates, Machine Learning, v.64 n.1-3, p.149-182, September 2006
Thomas Gärtner , John W. Lloyd , Peter A. Flach, Kernels and Distances for Structured Data, Machine Learning, v.57 n.3, p.205-232, December 2004
Ashwin Srinivasan , David Page , Rui Camacho , Ross King, Quantitative pharmacophore models with inductive logic programming, Machine Learning, v.64 n.1-3, p.65-90, September 2006
Shlomo Hershkop , Salvatore J. Stolfo, Combining email models for false positive reduction, Proceeding of the eleventh ACM SIGKDD international conference on Knowledge discovery in data mining, August 21-24, 2005, Chicago, Illinois, USA
D. M. Gavrila , S. Munder, Multi-cue Pedestrian Detection and Tracking from a Moving Vehicle, International Journal of Computer Vision, v.73 n.1, p.41-59, June 2007
Konstantinos Koumpis , Steve Renals, Automatic summarization of voicemail messages using lexical and prosodic features, ACM Transactions on Speech and Language Processing (TSLP), v.2 n.1, p.1-es, February 2005
Nada Lavrač , Branko Kavšek , Peter Flach , Ljupčo Todorovski, Subgroup Discovery with CN2-SD, The Journal of Machine Learning Research, 5, p.153-188, 12/1/2004
Clifton Phua , Damminda Alahakoon , Vincent Lee, Minority report in fraud detection: classification of skewed data, ACM SIGKDD Explorations Newsletter, v.6 n.1, June 2004
Jeremy Z. Kolter , Marcus A. Maloof, Learning to detect malicious executables in the wild, Proceedings of the tenth ACM SIGKDD international conference on Knowledge discovery and data mining, August 22-25, 2004, Seattle, WA, USA
Paul N. Bennett , Susan T. Dumais , Eric Horvitz, The Combination of Text Classifiers Using Reliability Indicators, Information Retrieval, v.8 n.1, p.67-100, January 2005
Stijn Viaene , Richard A. Derrig , Guido Dedene, A Case Study of Applying Boosting Naive Bayes to Claim Fraud Diagnosis, IEEE Transactions on Knowledge and Data Engineering, v.16 n.5, p.612-620, May 2004
J. Zico Kolter , Marcus A. Maloof, Learning to Detect and Classify Malicious Executables in the Wild, The Journal of Machine Learning Research, 7, p.2721-2744, 12/1/2006
Johannes Fürnkranz , Peter A. Flach, ROC 'n' rule learning: towards a better understanding of covering algorithms, Machine Learning, v.58 n.1, p.39-77, January 2005
Foster Provost , Pedro Domingos, Tree Induction for Probability-Based Ranking, Machine Learning, v.52 n.3, p.199-215, September
Gary M. Weiss, Mining with rarity: a unifying framework, ACM SIGKDD Explorations Newsletter, v.6 n.1, June 2004
Nitesh V. Chawla , Nathalie Japkowicz , Aleksander Kotcz, Editorial: special issue on learning from imbalanced data sets, ACM SIGKDD Explorations Newsletter, v.6 n.1, June 2004
Estevam R. Hruschka, Jr. , Nelson F. F. Ebecken, Towards efficient variables ordering for Bayesian networks classifier, Data & Knowledge Engineering, v.63 n.2, p.258-269, November, 2007
Chris Drummond , Robert C. Holte, Cost curves: An improved method for visualizing classifier performance, Machine Learning, v.65 n.1, p.95-130, October 2006
Mohammed J. Zaki , Charu C. Aggarwal, XRules: An effective algorithm for structural classification of XML data, Machine Learning, v.62 n.1-2, p.137-170, February 2006
Perlich , Foster Provost , Jeffrey S. Simonoff, Tree induction vs. logistic regression: a learning-curve analysis, The Journal of Machine Learning Research, 4, p.211-255, 12/1/2003
Mukund Deshpande , Michihiro Kuramochi , Nikil Wale , George Karypis, Frequent Substructure-Based Approaches for Classifying Chemical Compounds, IEEE Transactions on Knowledge and Data Engineering, v.17 n.8, p.1036-1050, August 2005
Ming-Hsuan Yang , David J. Kriegman , Narendra Ahuja, Detecting Faces in Images: A Survey, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.24 n.1, p.34-58, January 2002
Michael Berthold , David J. Hand, References, Intelligent data analysis, Springer-Verlag New York, Inc., New York, NY, | uncertainty;evaluation;classification;learning;multiple models;comparison;cost-sensitive learning;skewed distributions |
373433 | Margins for AdaBoost. | Recently ensemble methods like ADABOOST have been applied successfully in many problems, while seemingly defying the problems of overfitting. ADABOOST rarely overfits in the low noise regime, however, we show that it clearly does so for higher noise levels. Central to the understanding of this fact is the margin distribution. ADABOOST can be viewed as a constraint gradient descent in an error function with respect to the margin. We find that ADABOOST asymptotically achieves a hard margin distribution, i.e. the algorithm concentrates its resources on a few hard-to-learn patterns that are interestingly very similar to Support Vectors. A hard margin is clearly a sub-optimal strategy in the noisy case, and regularization, in our case a mistrust in the data, must be introduced in the algorithm to alleviate the distortions that single difficult patterns (e.g. outliers) can cause to the margin distribution. We propose several regularization methods and generalizations of the original ADABOOST algorithm to achieve a soft margin. In particular we suggest (1) regularized ADABOOSTREG where the gradient descent is done directly with respect to the soft margin and (2) regularized linear and quadratic programming (LP/QP-) ADABOOST, where the soft margin is attained by introducing slack variables. Extensive simulations demonstrate that the proposed regularized ADABOOST-type algorithms are useful and yield competitive results for noisy data. | Introduction
An ensemble is a collection of neural networks or other types of classifiers
(hypotheses) that are trained for the same task. Boosting and other ensemble
learning methods have been used recently with great success for
several applications, e. g. OCR [29, 16]. So far the reduction of the generalization
error by AdaBoost has not been completely understood.
For low noise cases, several lines of explanation have been proposed
as candidates for explaining the well functioning of Boosting methods [28,
7, 27]. Recent studies with noisy patterns [25, 14, 26] have shown that it
is clearly a myth that Boosting methods will not overfit. In this work, we
try to understand why AdaBoost exhibits virtually no overfitting for low
noise and strong overfitting for high noise data. We propose improvements
of AdaBoost to achieve noise robustness and to avoid overfitting.
In section 2 we analyze AdaBoost asymptotically. Due to their sim-
ilarity, we will refer in the following to AdaBoost [10] and unnormalized
Arcing [8] (with exponential function) as AdaBoost-type algorithms (ATA).
We especially have a focus on the error function of ATAs and find that
the function can be written in terms of the margin and every iteration
of AdaBoost tries to minimize this function stepwise by maximizing the
margin [22, 26, 12]. From the asymptotical analysis of this function, we
can introduce the hard margin concept. We show connections to Vapnik's
maximum margin classifiers, to Support Vector (SV) learning [4] and to
linear programming (LP). Bounds on the size of the margin are given.
Experiments with noisy patterns have shown that AdaBoost can overfit: this holds for
boosted decision trees [25], RBF nets [26] and also other kinds of classifiers.
In section 3 we explain why the property of AdaBoost to enforce a hard
margin must necessarily lead to overfitting in the presence of noise or in
the case of overlapping class distributions.
Because the hard margin plays a central role in causing overfitting, we
propose to relax the hard margin in section 4 and allow for misclassifications
by using the soft margin concept, that has already been successfully
applied to Support Vector Machines (cf. [9]). Our view is that the margin
concept is the key for the understanding of both, SVMs and ATAs.
So far, we only know what the margin distribution that a learner has to achieve for optimal classification should look like in the no-noise case: then
a large hard margin is clearly the best choice [30]. However, for noisy data
there is always the trade-off between believing in the data or mistrusting
it, as the very data point could be mislabeled or an outlier. This leads to
the introduction of regularization, which reflects the prior knowledge that
we have about the problem. We will introduce a regularization strategy
(analogous to weight decay) into AdaBoost and subsequently extend the
LP-AdaBoost algorithm of Grove & Schuurmans [14] by slack variables to
achieve soft margins. Furthermore, we propose QP-AdaBoost and show
its connections to SVMs.
Finally, in section 5 numerical experiments on several artificial and real-world
data sets show the validity and competitiveness of our regularization
approach. The paper is concluded by a brief discussion.
2 Analysis of AdaBoost's Learning Process
2.1 Algorithm
Let {h_t(x) : t = 1, ..., T} be an ensemble of T hypotheses defined on an input vector x ∈ X, and let c = [c_1, ..., c_T] be the weights of the hypotheses in the ensemble. We will consider only the binary classification case; most results can be transferred easily to the classification with more than two classes [27]. In the binary classification case the output is one of two class labels, i.e. h_t(x) ∈ {−1, +1}.
The ensemble generates the label f(x) ≡ f_T(x) which is the weighted majority of the votes, where

   f_T(x) = sign( sum_{t=1..T} c_t h_t(x) ).
In order to train the ensemble, i.e. to find T appropriate hypotheses
and the weights c for the convex combination, several algorithms
have been proposed: e.g. Windowing [24], Bagging [5] and Boosting/Arcing
(AdaBoost [10], ArcX4 [7]). Bagging, where the weighting is simply
where the weighting scheme is more com-
plicated, are the most well-known ensemble learning algorithms. In the
sequel, we will focus on Boosting/Arcing, i.e. AdaBoost-type algorithms.
We omit a detailed description of the ATA and give only the pseudo-code
in figure 1, for details see e.g. [10, 7, 8].
Algorithm AdaBoost(φ)
Input: l examples Z = {z_1 = (x_1, y_1), ..., z_l = (x_l, y_l)}, number of iterations T
Initialize: w_1(z_i) = 1/l for all i = 1, ..., l
Do for t = 1, ..., T:
1. Train neural network with respect to the weighted sample set {Z, w_t} and obtain hypothesis h_t : x → {−1, +1}
2. Calculate the weighted training error ε_t of h_t:
      ε_t = sum_{i=1..l} w_t(z_i) I(y_i ≠ h_t(x_i)),
   abort if ε_t ≥ φ − Δ, where Δ is a small constant
3. Set
      b_t = log[ (1 − ε_t) φ / (ε_t (1 − φ)) ]                                   (4)
4. Update weights:
      w_{t+1}(z_i) = w_t(z_i) exp{ b_t I(y_i ≠ h_t(x_i)) } / Z_t,                (5)
   where Z_t is a normalization constant, such that sum_{i=1..l} w_{t+1}(z_i) = 1
Output: Final hypothesis
      f(x) = sign( sum_{t=1..T} c_t h_t(x) ),  with  c_t = b_t / |b| and |b| := sum_{t=1..T} b_t    (7)

Figure 1: The AdaBoost-type algorithm (ATA) [22]. For φ = 1/2 we retrieve the original AdaBoost algorithm [10]. ATA is a specialization of unnormalized Arcing [6] (with exponential function).

In the binary classification case, we can define the margin for an input-output pair z_i = (x_i, y_i) as

   mg(z_i, c) = y_i sum_{t=1..T} c_t h_t(x_i),   i = 1, ..., l,

where l denotes the number of training patterns. The margin at z is positive, if the right class of the pattern is predicted. As the positivity of the margin value increases, the decision correctness, i.e. decision stability, becomes larger. Moreover, since the weights c_t are normalized, mg(z_i, c) ∈ [−1, 1]. The margin ρ of a decision line is the smallest margin of a pattern in the training set, i.e. ρ = min_{i=1,...,l} mg(z_i, c).
We define d(z_i, c) as the rate of incorrect classification (cf. edge in Breiman [6]) for one pattern by

   d(z_i, c) = sum_{t=1..T} c_t I(y_i ≠ h_t(x_i)) = (1 − mg(z_i, c)) / 2.

We will also use this definition with b (instead of c), which is just an unnormalized version of c, i.e. usually we
have |b| ≠ 1 (cf. (4) and (7) in figure 1).
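To make the loop of figure 1 concrete, here is a minimal Python sketch (our own illustration; the base_learner interface and the guard against a perfect or too-weak hypothesis are assumptions, and equation (4) is used in the reconstructed form given above):

```python
import numpy as np

def adaboost_type(X, y, base_learner, T=200, phi=0.5, delta=1e-6):
    """Sketch of the AdaBoost-type algorithm (ATA) of figure 1.
    base_learner(X, y, w) must return a hypothesis h with h(X) in {-1, +1};
    y holds labels in {-1, +1}.  For phi = 0.5, b_t reduces to the original
    AdaBoost choice log((1 - eps_t) / eps_t)."""
    l = len(y)
    w = np.full(l, 1.0 / l)
    hypotheses, b = [], []
    for t in range(T):
        h = base_learner(X, y, w)
        miss = (h(X) != y).astype(float)
        eps = float(np.dot(w, miss))               # weighted training error
        if eps >= phi - delta or eps <= 0.0:       # abort criterion of step 2
            break
        b_t = np.log((1.0 - eps) * phi / (eps * (1.0 - phi)))   # eq. (4)
        w = w * np.exp(b_t * miss)                 # eq. (5) ...
        w = w / w.sum()                            # ... with normalization Z_t
        hypotheses.append(h)
        b.append(b_t)
    if not b:
        raise ValueError("first hypothesis already violates the error constraint")
    c = np.array(b) / np.sum(b)                    # c_t = b_t / |b|, eq. (7)
    return lambda Xn: np.sign(sum(ct * h(Xn) for ct, h in zip(c, hypotheses)))
```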
2.2 Error Function of AdaBoost
An important question in the analysis of ATAs is what kind of error function
is optimized. From the algorithmic formulation (cf. figure 1), it is not
straight forward to understand what the aim of this algorithm is. So to
consider why one should use the weights of the hypotheses c t and of the
patterns w t (z i ) in the manner of equation (4) and (5), let us remember
the following facts:
1. The weights w t (z i ) in the t-th iteration are chosen such that the
previous hypothesis has exactly a weighted training error ε of 1/2 [28].
2. The weight c t of a hypothesis is chosen such that it minimizes a
functional G introduced in Breiman [8]. Essentially, this functional depends on the rate of incorrect classification of all patterns and is defined by

   G(b^t, φ) = sum_{i=1..l} exp{ |b^t| ( d(z_i, c^t) − φ ) },                    (10)

where φ is a constant and c^t = b^t / |b^t|. This functional can be minimized analytically [8, 26] and one gets the explicit form of equation (4) as solution of ∂G(b^t, φ)/∂b_t = 0.
3. To train the t-th hypothesis (step 1 in figure 1) we can either use
bootstrap replicates of the training set (sampled according to w t ) or
minimize a weighted error function of the base learning algorithm.
We observed that the convergence of the ATA is faster if the weighted
error function is used.
Taking a closer look at the definition of G, one finds that the computation
of the sample distribution w t (cf. equation (5)) can be derived directly
from G. Let us assume G is the error function which is minimized by the
ATA. Essentially, G defines a loss function over all margin distributions,
which depends on the value of jbj. The larger the margins mg(z i ) (i.e. the
smaller the rate of incorrect classification), the smaller the value of G.
The gradient ∂G/∂mg(z_i) gives an answer to the question which pattern should increase its margin to decrease G maximally (gradient descent).
This information can be used to compute a sample distribution w t for
training the next hypothesis h t . If it is important to increase the margin
of a pattern z i , the weight w t (z i ) should be high - otherwise low (because
the distribution w t sums to one). Surprisingly, this is exactly what ATAs
are doing.
Lemma 1 The computation of the pattern distribution w_{t+1} in the t-th iteration is equivalent to normalizing the gradient of G(b^t, φ) with respect to mg(z_i, c^t):

   w_{t+1}(z_i) = [ ∂G(b^t, φ) / ∂mg(z_i, c^t) ] / sum_{j=1..l} [ ∂G(b^t, φ) / ∂mg(z_j, c^t) ].      (11)

The proof can be found in appendix A.
From Lemma 1, the analogy to a gradient descent method is (almost) complete. In a gradient descent, we first compute the gradient of the error function with respect to the parameters which are to be optimized. This corresponds to computing the gradient of G with respect to the margins. Second, the step size in this direction is determined (usually by a line-search). This is comparable to the minimization of G mentioned in
point 2 in the list above.
Therefore, AdaBoost can be related to a gradient descent method,
which aims to minimize the functional G by constructing an ensemble of
classifiers [22, 26, 12]. This also explains point 1 in the list, because in a
gradient descent method, the new search direction is perpendicular to the
previous one.
But the analogy is not perfect. There is a "gap" between having the
pattern distribution and having a classifier. It is difficult to find a classifier
which minimizes G by only knowing the pattern distribution [26].
As we mentioned above, there are two ways of incorporating the sample
distribution. The first way is to create a bootstrap replicate, which is
sampled according to the pattern distribution. Usually there are a lot
of random effects, which hide the "true" information contained in the
distribution. Therefore, some information is lost - the gap is larger. The
more direct way is to use a weighted error function and employ weighted
minimization (Breiman [8]), therefore we will need more iterations with
bootstrap than with weighted minimization 1 . The fastest convergence can
be obtained, if one uses G directly for finding the hypothesis (cf. [12]).
These considerations explain point 3 in the list.
In Friedman et al. [12] it was mentioned, that sometimes the randomized version shows
a better performance, than the version with weighted minimization. In connection with the
discussion in section 3 this becomes clearer, because the randomized version will show an
overfitting effect (possibly much) later and overfitting maybe not observed, whereas it was
observed using the more efficient weighted minimization.
2.3 AdaBoost as an Annealing Process
From the definition of G and d, equation (11) can also be written as

   w_{t+1}(z_i) = exp{ |b^t| d(z_i, c^t) } / sum_{j=1..l} exp{ |b^t| d(z_j, c^t) }.      (12)
Inspecting this equation more closely, we see that AdaBoost uses a softmax function [3] with parameter |b| that we would like to interpret as an annealing parameter [22]. If the temperature ϑ := 1/|b| is high, the
system is in a state with high energy - all patterns have relevant high
weights. If the temperature goes down, the patterns with smallest margin
will get higher and higher weights. In the limit, we arrive at the maximum
function. Only the pattern(s) with the highest rate of incorrectness d
(i.e. smallest margin) will be considered and get a non-zero weight.
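The following small sketch (our illustration) computes the weights of equation (12) and makes the annealing interpretation visible: as |b| grows, the distribution collapses onto the patterns with the largest incorrectness d, i.e. the smallest margin.

```python
import numpy as np

def softmax_weights(b, miss):
    """Pattern weights of eq. (12): softmax of the incorrectness rates d(z_i, c)
    with annealing parameter |b|.  b: hypothesis weights b_1..b_t,
    miss: (t, l) 0/1 matrix with miss[r, i] = I(y_i != h_r(x_i))."""
    b = np.asarray(b, dtype=float)
    miss = np.asarray(miss, dtype=float)
    d = miss.T @ (b / b.sum())            # d(z_i, c) for every pattern
    a = b.sum() * d                       # |b| * d(z_i, c)
    w = np.exp(a - a.max())               # subtract the max for numerical stability
    return w / w.sum()
```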
Lemma 2 If, in the learning process of an ATA, the weighted training errors ε_t are bounded by ε_t ≤ φ − Δ (Δ > 0), then |b| increases at least linearly with the number of iterations t.
Proof 3 With (4), the smallest value for b_t is achieved if ε_t = φ − Δ. Then we have b_t = log[ (1 − φ + Δ) φ / ((φ − Δ)(1 − φ)) ]. We also have (1 − φ + Δ) φ > (φ − Δ)(1 − φ) and hence also b_t > 0. Therefore, the smallest value of b_t is a positive constant γ, which only depends on φ and Δ. Thus, we have |b^t| ≥ γ t. 2
If the annealing speed is low, the achieved solution should have larger margins. The reason is the same as for a usual annealing process [15]: in the error surface, a better local minimum could be obtained, if the annealing is slow enough. From equation (4), we observe that if the training error ε_t takes a small value, b_t becomes large. So, strong learners can reduce their training errors strongly and will make |b| large after only a few ATA iterations. The asymptotic point is reached faster. To reduce the annealing speed, φ or the complexity of the base hypotheses has to be decreased (with the constraint ε_t < φ − Δ).
Figure 2 shows some error functions for classification. Among them, the error function of AdaBoost for different values of |b| and φ is shown. In figure 2 (left), the AdaBoost loss for |b| = 2, the Squared Error and the Kullback-Leibler Error are plotted. Squared and Kullback-Leibler Error are very similar to the error function of AdaBoost for |b| = 2. As |b| increases (in our experiments often up to 10^2 after 200 iterations), the ATA error function approximates a 0/1 loss: all patterns with margin smaller than 0 (or, more generally, smaller than 1 − 2φ) have loss 1, the others have loss 0. If it is possible to reduce the error of all patterns to 0 (as in the AdaBoost case), then this is asymptotically (|b| → ∞) equivalent to the 0/1-loss around 0 for AdaBoost with φ = 1/2.
The loss functions for φ ≠ 1/2, shown in figure 2 (right), demonstrate the different offsets of the step exhibited by the 0/1 loss.
Figure 2: Loss functions for estimating a function f(x) for classification. The abscissa shows the margin yf(x) of a pattern and the ordinate shows the monotone loss for that pattern: 0/1-Loss (solid), Squared Error (dashed), Kullback-Leibler Error (dash-dotted) and the AdaBoost loss for several values of |b| (up to 100). On the left panel φ = 1/2 and on the right panel φ is one out of {1/3, 2/3}. φ controls the position of the step of the 0/1-Loss which is asymptotically approximated by the AdaBoost loss function.
2.4 Asymptotical Analysis
2.4.1 How large is the Margin?
The main point in the explanation of ATA's good generalization performance
is the size of the (hard) margin that can be achieved [28, 8]. In the
low noise case, the hypothesis with the largest margin will have a good
generalization performance [30, 28]. Thus, it is interesting to see how large
the margin is and on what it is depending.
Generalizing theorem 5 of Freund et al. [10] to the case OE 6= 1
2 we get
Theorem 4 Assume, ffl are the weighted classification errors of
were generated while running an
. Then the following inequality holds for all ' 2 [\Gamma1; 1]:l
l
I (y i
Y
where f is the final hypothesis and
The proof can be found in appendix B.
Corollary 5 An ATA with φ > ε := max_{t=1,...,T} ε_t asymptotically generates margin distributions with a margin ρ, which is bounded by

   ρ ≥ log( φ (1 − φ) / (ε (1 − ε)) ) / log( φ (1 − ε) / (ε (1 − φ)) ).      (14)

Proof 6 The maximum of ε_t^{1−θ} (1 − ε_t)^{1+θ} with respect to ε_t is reached for ε_t = (1 − θ)/2, and the term is increasing monotonically in ε_t for ε_t ≤ (1 − θ)/2. Therefore, we can replace ε_t by ε in equation (13) and get

   (1/l) sum_{i=1..l} I(y_i f(x_i) ≤ θ) ≤ [ ε^{1−θ} (1 − ε)^{1+θ} / ( φ^{1−θ} (1 − φ)^{1+θ} ) ]^{T/2}.

If the basis on the right hand side is smaller than 1, then asymptotically we have P_{(x,y)~Z}[yf(x) ≤ θ] = 0. Asymptotically, there is no example that has a smaller margin than θ. For the biggest possible margin θ_max we have

   ε^{1−θ_max} (1 − ε)^{1+θ_max} = φ^{1−θ_max} (1 − φ)^{1+θ_max}.

We can solve this equation for θ_max and we get equation (14). We get the assertion, because ρ is always bigger than or equal to θ_max. 2
From equation (14), we can see the interaction between φ and ε: if the difference of ε and φ is small, the right hand side of (14) is small. The smaller φ, the more important is this difference. From theorem 7.2 of [8] we also have the weaker bound ρ ≥ 1 − 2φ and so, if φ is small then ρ must be large, i.e. choosing a small φ results in a larger margin on the training patterns. An increase of the complexity of the base algorithm leads to an increased ρ, because the errors ε_t will decrease.
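To get a feeling for the numbers, the snippet below evaluates the bound as written in (14) above (the constants in that reconstruction should be treated as illustrative):

```python
import numpy as np

def margin_lower_bound(eps, phi):
    """Asymptotic margin bound of equation (14), valid for eps = max_t eps_t < phi."""
    num = np.log(phi * (1.0 - phi) / (eps * (1.0 - eps)))
    den = np.log(phi * (1.0 - eps) / (eps * (1.0 - phi)))
    return num / den

# For eps = 0.25: phi = 0.5 gives about 0.26 and phi = 0.4 about 0.36,
# while the weaker bound 1 - 2*phi gives 0 and 0.2 respectively.
```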
2.4.2 Support Patterns
A decrease in the functional G(c, |b|) := G(b) (with c = b/|b|) is predominantly achieved by improvements of the margin mg(z_i, c). If the margin mg(z_i, c) is negative, then the error G(c, |b|) clearly takes a big value, which is additionally amplified by |b|. So, AdaBoost tries to decrease the negative margins efficiently to improve the error G(c, |b|).
Now, let us consider the asymptotic case, where the number of iterations and therefore also |b| take large values (cf. Lemma 2). In this case, the values of all mg(z_i, c) are almost the same; small differences are amplified strongly in G(c, |b|).
For example, when the margin mg(z_1, c) = 0.2 and another margin mg(z_2, c) = 0.3, then for |b| = 100 the difference is amplified to the difference between exp{−10} and exp{−15}, i.e. to a factor of e^5 ≈ 150.
Obviously the function G(c, |b|) is asymptotically very sensitive to small differences between margins of the training patterns. From equation (12), when the annealing parameter |b| takes a big value, AdaBoost learning becomes a hard competition case: only the patterns with smallest margin will get high weights, other patterns are effectively neglected in the learning process. Therefore, the margins mg(z_i, c) are expected to asymptotically converge to a fixed value ρ and a subset of the training patterns will asymptotically have the same smallest margin ρ. We call these patterns Support Patterns (cf. figure 3).
In order to confirm that the above theoretical analysis is correct, asymptotic numerical simulations on toy data (several Gauss blobs in two dimensions; cf. figure 4) are made. The training data is generated from several (nonlinearly transformed) Gaussian and uniform blobs², which are additionally disturbed by uniformly distributed noise U(0.0, σ²). In our simulations, we used 300 patterns and σ² is one out of 0%, 9%, and 16%.

Figure 3: Margin distributions (cumulative probability over the margin) for AdaBoost for different noise levels σ² = 0% (dotted), 9% (dashed) and 16% (solid) with RBF nets (13 centers) as base hypotheses (left), and with 7 centers in the base hypotheses (right), after 10^4 AdaBoost iterations. These graphs experimentally confirm the expected trends from equation (14).
In all simulations, radial basis function (RBF) networks with adaptive
centers are used as learners (cf. Appendix C for a detailed description).
Figure 3 shows the margin distributions after 10^4 AdaBoost iterations at different noise levels σ² (left) and for different strengths of the base hypotheses (right). From these figures, it becomes apparent that the margin distribution asymptotically makes a step at a fixed size of the margin for some training patterns. From figure 3, one can see the influence of noise in the data and the strength of the base hypotheses on the margin ρ. If the noise level is high or the complexity is low, one gets higher training errors ε_t and therefore a smaller value of ρ. These numerical results support our theoretical asymptotic analysis.
Interestingly, the margin distributions of ATAs resemble the one of Support Vector Machines (SVMs) for the separable case [4, 9, 30] (cf. figure 6). In an example (cf. figure 4) almost all patterns that are Support Vectors (SVs) also lie within the step part of the margin distribution for
AdaBoost. So, AdaBoost achieves a hard margin asymptotically, such as
the SVMs for the separable case.
In an earlier study [26] we observed, that usually there is high overlap
among the Support Vectors and Support Patterns. Intuitively this is
clear, because the most difficult patterns are in the margin area. They
are emphasized strongly and become Support Patterns or Support Vectors
asymptotically. The degree of overlap depends on the kernel (SVM)
and on the base hypothesis (ATA) which are used. For the SVM with
RBF kernel the highest overlap was achieved, when the average widths of
the RBF networks was used as kernel width for the Support Vector Machine [26].

² A detailed description of the generation of the toy data used in the asymptotical simulations can be found on the Internet: http://www.first.gmd.de/~raetsch/data/banana.txt.

Figure 4: Training patterns with decision lines for AdaBoost (left) with RBF nets (13 centers) and SVM (right) for a low noise case with similar generalization errors. The positive and negative training patterns are shown as '+' and '\Lambda' respectively, the Support Patterns and Support Vectors are marked with 'o'.

We have observed the similarity of Support Patterns (SP) of
AdaBoost and SV of the SVM also in several other applications.
In the sequel, we can often assume the asymptotical case, where a hard
margin is achieved. The more hypotheses we combine, the better is this
approximation. And indeed, if for example |b| ≈ 10^2 (as often after 200 AdaBoost iterations on benchmark data used in section 5), already then the approximation to a hard margin is good (cf. equation (12)). This is
illustrated by figure 5, which shows typical distributions after
To recapitulate our findings of this section:
1. AdaBoost-type algorithms aim to minimize a functional, which depends
on the margin distribution. The minimization is done through
an approximate gradient descent with respect to the margin (cf. [26,
12]).
2. Annealing is a part of the algorithm. It depends on an annealing parameter |b|, which controls how well the 0/1-loss (around 1 − 2φ) is approximated. The size of the margin is decided by a certain annealing process. The speed of annealing depends on the parameter φ and is an implicit function of the strength of the learner in the training process.
3. Some training patterns, which are in the area of the decision boundary, have asymptotically the same margin. We call these patterns Support Patterns. They have a large overlap to the SVs found by a SVM.
4. Asymptotically, a hard margin is achieved, which is comparable to the one of the original SV approach [4].
5. Larger hard margins can be achieved, if ε_t and/or φ are small (cf. Corollary 5). For the low noise case, a choice of φ ≠ 1/2 can lead to a better generalization performance, as shown for OCR in [22].
Figure 5: Typical margin distribution graphs (cumulative probability) of (original) AdaBoost after 20 (dotted), 70 (dash-dotted) and 200 (dashed) iterations. Here, the toy example (300 patterns) and RBF networks as base hypotheses are used. After already 200 iterations the asymptotical convergence is almost reached.
Figure 6: Typical margin distribution graphs (normalized cumulative probability) of a SVM with hard margin (solid) and with soft margin (dashed). Here, the same toy example and a RBF kernel (width = 0.3) is used. The generalization error of the SVM with hard margin is more than two times larger than with the soft margin.
3 Hard Margin and Overfitting
In this section, we give reasons why the ATA is not noise robust and
exhibits suboptimal generalization ability in the presence of noise. We give
several references and examples why the hard margin approach will fail in
general if noise is present. According to our understanding, noisy data has
at least one of the following properties: (a) overlapping class probability
distributions, (b) outliers and (c) mislabeled patterns. All three kinds of
noise appear very often in data analysis. Therefore the development of a
noise robust version of AdaBoost is very important.
The first theoretical analysis of AdaBoost in connection with margin
distributions was done by Schapire et al. [28]. Their main result is a
bound on the generalization error P_{z~D}[mg(z) ≤ 0] depending on the VC-dimension d of the base hypotheses class and on the margin distribution on the training set. With probability at least 1 − δ,

   P_{z~D}[mg(z) ≤ 0] ≤ P_{z~Z}[mg(z) ≤ θ] + O( sqrt( ( d log^2(l/d) / θ^2 + log(1/δ) ) / l ) )      (15)

is satisfied, where θ > 0 and l denotes the number of patterns. It was stated
that the reason for the success of AdaBoost, compared to other ensemble
learning methods (e.g. Bagging), is the maximization of the margin. They
experimentally observed that AdaBoost maximizes the margin of patterns
which are most difficult, i.e. have the smallest margin. However, by increasing
the minimum margin of a few patterns, AdaBoost also reduces
the margin of the rest of the other patterns.
Figure 7: Typical overfitting behavior in the generalization error (smoothed) as a function of the number of iterations (left) and a typical decision line (right) generated by AdaBoost using RBF networks (30 centers) in the case of noisy data (300 patterns, σ² = 16%). The positive and negative training patterns are shown as '+' and '\Lambda' respectively,
the Support Patterns are marked with 'o'. An approximation to the Bayes decision line is
plotted dashed.
In Breiman [8], the connection between the smallest margin and the
generalization error was analyzed experimentally and could not be confirmed
on noisy data.
In Grove et al. [14] the Linear Programming (LP) approach of Freund
et al. [11] and Breiman [8] was extended and used to maximize the smallest
margin of an existing ensemble of classifiers. Several experiments with LP-
AdaBoost on UCI benchmarks (often noisy data) were made and it was
unexpectedly observed, that LP-AdaBoost performs in almost all cases
worse than the original AdaBoost algorithm, even if the smallest margins
are larger.
Our experiments have shown that as the margin increases, the generalization
performance becomes better on datasets with almost no noise
(e.g. OCR), however, on noisy data, we also observed that AdaBoost
overfits (for a moderate number of combined hypotheses).
As an example for overlapping classes, figure 7 (left) shows a typical
overfitting behavior in the generalization error for AdaBoost on the same
data as in section 2. Here, already after only 80 AdaBoost iterations the
best generalization performance is achieved. From equation (14) it is clear
that AdaBoost will asymptotically achieve a positive margin (if φ < 1/2) and
all training patterns are classified according to their possibly wrong labels
(cf. figure 7 (right)), because the complexity of the combined hypotheses
increases more and more. The achieved decision line is far away from the
Bayes optimal line (cf. dashed line in figure 7 (right)).
To discuss the bad performance of a hard margin classifier in the presence of outliers and mislabeled patterns, we analyze the toy example in figure 8. Let us first consider the case without noise (left). Here, we can estimate the optimal separating hyper-plane correctly. In figure 8 (middle) we have an outlier, which corrupts the estimation. AdaBoost will certainly concentrate its weights on this outlier and spoil the good estimate that we would get without the outlier. Next, let us consider more complex decision
lines. Here the overfitting problem gets even more distinct, if we can gen-
Figure 8: The problem of finding a maximum margin "hyper-plane" on reliable data (left),
data with outlier (middle) and with a mislabeled pattern (right). The solid line shows the
resulting decision line, whereas the dashed line marks the margin area. In the middle and
on the left the original decision line is plotted with dots. The hard margin implies noise
sensitivity, because only one pattern can spoil the whole estimation of the decision line.
erate more and more complexity through combining a lot of hypotheses.
Then all training patterns, even mislabeled ones or outliers, can be classified
correctly. In figure 7 (right) and figure 8 (right) we see that the decision
surface is much too shaky and gives a bad generalization.
From these cartoons, it becomes apparent that AdaBoost is noise sensitive
and maximizing the smallest margin in the case of noisy data can
(and will) lead to bad results. Therefore, we need to allow for a possibility
of mistrusting the data.
From the bound (15) it is indeed not obvious that we should maximize the smallest margin: the first term on the right hand side of equation (15) takes the whole margin distribution into account. If we would allow a non-zero training error in the setting of figure 8, then the first term of the right hand side of (15) becomes non-zero (for θ > 0). But then θ can be larger, such that the second term is much smaller. In Mason et al. [18] a similar bound was used to optimize the margin distribution (through a piecewise linear approximation) directly. This approach was more successful on noisy data than a maximization of the smallest margin.
In the following we introduce the possibility to mistrust parts of the
data, which leads to the soft margin concept.
4 Improvements using a Soft Margin
The original SV algorithm [4] had similar problems as the ATA with respect
to hard margins. In the SV approach, training errors on data with
overlapping classes were not allowed and the generalization performance
was poor on noisy data. The introduction of soft margins then gave a new
algorithm, which achieved much better results compared to the original
algorithm [9] (cf. figure 6).
In the sequel, we will show how to use the soft margin idea for ATAs.
In section 4.1 we change the error function (10) by introducing a new term, which controls the importance of a pattern compared to its achieved
margin. In section 4.2 we show how the soft margin idea can be built
into the LP-AdaBoost algorithm and in section 4.3 we show an extension
to quadratic programming - QP-AdaBoost - with its connections to the
Support Vector approach.
4.1 Margin vs. Influence of a Pattern
First, we propose an improvement of the original AdaBoost by using a regularization term in (10), in analogy to weight decay. For this we define the influence of a pattern on the combined hypotheses h_r, r = 1, ..., t, by

   μ_t(z_i) = sum_{r=1..t} (b_r / |b^t|) w_r(z_i),      (16)

which is the (weighted) average weight of a pattern computed during the ATA's learning process (cf. pseudo-code in figure 1). A pattern which is very often misclassified (i.e. difficult to classify), will have a high average weight (high influence). The definition of the influence clearly depends on the base hypotheses space H.
From Corollary 5 and Theorem 2 of [8], all training patterns will get a margin mg(z_i) larger or equal than 1 − 2φ after many iterations (cf. figure 2 and the discussion in section 2). Asymptotically, we get the following inequalities

   mg(z_i) ≥ ρ ≥ 1 − 2φ,   i = 1, ..., l

(or even better bounded by equation (14)). We can see the relation between ρ and G(b) for a sufficiently large value of |b| in equation (10): as G(b) is minimized, ρ is maximized. After many iterations, these inequalities are satisfied and, as long as φ ≤ 1/2, the hard margin ρ ≥ 0 is achieved [28], which will lead to overfitting in the case of noise. In the following we will consider only the case φ = 1/2; generalizations are straightforward.
We define a soft margin of a pattern, which realizes a trade-off between the
margin and the influence of the pattern on the final hypothesis, as follows:

    mg~(z_i) := mg(z_i) + C μ_t(z_i)^p,                              (17)

where C ≥ 0 is a fixed constant and p a fixed exponent. With C and p one
can modify this trade-off. We can reformulate AdaBoost's optimization
process in terms of soft margins: with (16) and (17) we get

    mg~(z_i) ≥ 1 − 2φ,                                               (18)

which is equivalent to

    mg(z_i) ≥ 1 − 2φ − η_t(z_i),                                     (19)

where we use η_t(z_i) := C μ_t(z_i)^p for simplicity. Other functional forms of η
can also be used (depending on our prior).
In these inequalities the η_t(z_i) are positive, and if a training pattern has
high weights, η_t(z_i) is increasing. In this way we do not force outliers to
be classified according to their possibly wrong labels (if this would imply
a high influence), but we allow for some errors. So we get the desired
trade-off between margin and influence. If we choose C = 0 in
(19), the original AdaBoost algorithm is retrieved. If C is chosen high,
the data is not taken seriously, and for C → ∞ we retrieve the
Bagging algorithm [5].
From inequality (18), we can derive the new error function (cf. equation
(10)), which aims to maximize the soft margin:

    G_Reg(b_t) = Σ_{i=1}^{l} exp{ −(1/2) |b_t| mg~(z_i) }
               = Σ_{i=1}^{l} exp{ −(1/2) |b_t| [ mg(z_i) + C μ_t(z_i)^p ] }.        (20)
The weight w_{t+1}(z_i) of a pattern is computed as the derivative of equation
(20) with respect to the soft margin mg~(z_i) of the pattern. From this we
obtain an update rule for the weight of a training pattern in the
t-th iteration [26].
It is more difficult to compute the weight b_t of the t-th hypothesis; in
particular, it is hard to derive this weight analytically. However, we can obtain
b_t by a line search procedure [23] over (20), which has a unique solution
because ∂²G_Reg(b_t)/∂b_t² > 0 is satisfied for b_t > 0. The line search procedure
can be implemented efficiently.
We can interpret this approach as a regularization analogous to weight
decay, whereby we want to incorporate the prior knowledge that some
patterns are probably not reliable. Therefore, in the noisy case we prefer
hypotheses that do not rely on only a few patterns with high weights 3 .
Instead, we are looking for hypotheses with smaller values of η(z_i). So, by
this regularization, AdaBoost is not changed for easily classifiable patterns,
but only for the most difficult ones.
The variables η(z_i) in equation (19) can also be interpreted as slack
variables (cf. the SV approach and the next section), which are non-linearly involved
in the error function. Larger values of η(z_i) for some patterns allow
a larger (soft) margin ρ.
Summarizing, this modification of AdaBoost is constructed to produce
a soft margin and therefore to avoid overfitting.
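As an illustration, the following Python sketch implements an AdaBoost_Reg-style training loop. It assumes decision stumps as base hypotheses, p = 1 by default, and the reconstruction of the regularized error function (20) given above; the weight update and the bounded line search for b_t are therefore only approximations of the authors' exact procedure, and all names are illustrative.

import numpy as np
from scipy.optimize import minimize_scalar
from sklearn.tree import DecisionTreeClassifier

def adaboost_reg(X, y, T=50, C=0.1, p=1):
    """Sketch of regularized AdaBoost; y must contain labels in {-1, +1}."""
    l = len(y)
    w = np.full(l, 1.0 / l)          # pattern weights w_t(z_i)
    hyps, b = [], []                 # base hypotheses and their weights b_t
    F = np.zeros(l)                  # running sum_r b_r * y_i * h_r(x_i)
    bw = np.zeros(l)                 # running sum_r b_r * w_r(z_i) (for the influence)

    for t in range(T):
        h = DecisionTreeClassifier(max_depth=1).fit(X, y, sample_weight=w)
        yh = y * h.predict(X)        # +1 if correctly classified, -1 otherwise

        def g_reg(bt):               # reconstructed soft-margin error (20)
            tot = sum(b) + bt
            mg = (F + bt * yh) / tot             # margins mg(z_i)
            mu = (bw + bt * w) / tot             # influences mu_t(z_i)
            return np.sum(np.exp(-0.5 * tot * (mg + C * mu ** p)))

        # line search for b_t; the upper bound 10.0 is an arbitrary illustrative cap
        bt = minimize_scalar(g_reg, bounds=(1e-8, 10.0), method='bounded').x
        hyps.append(h); b.append(bt)
        F += bt * yh
        bw += bt * w

        tot = sum(b)
        mg, mu = F / tot, bw / tot
        e = -0.5 * tot * (mg + C * mu ** p)      # exponent of the weight update
        w = np.exp(e - e.max())                  # shifted for numerical stability
        w /= w.sum()

    return hyps, np.array(b)

An ensemble prediction is then obtained as the sign of Σ_t b_t h_t(x).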
For a comparison of the soft margin distributions of a single RBF
classifier and AdaBoost Reg see figure 9.
3 Interestingly, the (soft margin) SVM also generates many more SVs in the high-noise case than
in the low-noise case. Therefore, the SVM shows a trend to need more patterns to find a
hypothesis if the patterns are noisy [30].
Figure 9. Margin distribution graphs of the RBF base hypothesis (scaled) trained with
Mean Squared Error (left) and AdaBoostReg (right) with different values of C for the toy data
set after 10^3 iterations. Note that for some values of C the graphs of AdaBoostReg are quite
similar to the graphs of the single RBF net.
4.2 Linear Programming with Slack Variables
Grove et al. [14] showed how to use linear programming to maximize the
smallest margin for a given ensemble and proposed LP-AdaBoost (cf. algorithm
(23)). In their approach, they first compute a gain (or margin) matrix
for the given hypothesis set, defined by G_it := y_i h_t(x_i).
The matrix G indicates which hypothesis contributes a positive
(or negative) part to the margin of a pattern, and it is used to formulate the
following maxi-min problem: find a weight vector c ∈ R^T for the hypotheses
{h_t}, t = 1, ..., T, which maximizes the smallest margin ρ := min_{i=1,...,l} mg(z_i).
This can be solved by linear programming [17]:

    maximize    ρ
    subject to  Σ_{t=1}^T c_t G_it ≥ ρ   for i = 1, ..., l,
                c_t ≥ 0,   Σ_{t=1}^T c_t = 1.                          (23)
This linear program achieves a larger hard margin than the original
AdaBoost algorithm. From the reasoning in section 3, LP-AdaBoost cannot
generalize well on noisy data, since it overemphasizes difficult
patterns, e.g. outliers, even more strongly. Now we again define a soft margin for a pattern,
mg~(z_i) := mg(z_i) + ξ_i with slack variables ξ_i ≥ 0,
to introduce regularization for LP-AdaBoost.
Technically, this approach is equivalent to the introduction of slack
variables into LP-AdaBoost, and we arrive at the algorithm LP_Reg-AdaBoost
[26], which solves the following linear program:

    maximize    ρ − C Σ_{i=1}^l ξ_i
    subject to  Σ_{t=1}^T c_t G_it ≥ ρ − ξ_i,   ξ_i ≥ 0,   for i = 1, ..., l,
                c_t ≥ 0,   Σ_{t=1}^T c_t = 1.                          (24)

This modification allows some patterns to have margins smaller than
ρ (in particular smaller than 0). There is a trade-off between (a) making all margins
larger than ρ and (b) maximizing ρ − C Σ_i ξ_i. This trade-off is controlled
by the constant C.
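For concreteness, the slack-variable linear program (24) as reconstructed above can be solved directly with an off-the-shelf LP solver once the ensemble is fixed. The following sketch uses scipy.optimize.linprog; it assumes the gain matrix G with G_it = y_i h_t(x_i) has been precomputed, and the variable ordering (c, ξ, ρ) is an arbitrary implementation choice.

import numpy as np
from scipy.optimize import linprog

def lp_reg_adaboost(G, C=1.0):
    l, T = G.shape
    # variables x = [c_1..c_T, xi_1..xi_l, rho]; linprog minimizes, so negate rho
    obj = np.concatenate([np.zeros(T), C * np.ones(l), [-1.0]])
    # margin constraints rewritten as: rho - xi_i - sum_t c_t G_it <= 0
    A_ub = np.hstack([-G, -np.eye(l), np.ones((l, 1))])
    b_ub = np.zeros(l)
    # normalisation: sum_t c_t = 1
    A_eq = np.concatenate([np.ones(T), np.zeros(l), [0.0]]).reshape(1, -1)
    b_eq = [1.0]
    bounds = [(0, None)] * (T + l) + [(None, None)]   # c >= 0, xi >= 0, rho free
    res = linprog(obj, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    c, xi, rho = res.x[:T], res.x[T:T + l], res.x[-1]
    return c, xi, rho

The returned vector c re-weights the existing hypotheses, ξ contains the per-pattern slacks and ρ the resulting soft margin.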
4.3 Quadratic Programming and the connection
to Support Vector Machines
In the following section, we extend the LP_Reg-AdaBoost algorithm to
quadratic programming by using techniques similar to those of Support Vector
Machines [4, 9, 17]. This gives interesting insights into the connection
between SVMs and AdaBoost.
We start by transforming the LP_Reg-AdaBoost algorithm, which
maximizes ρ while |c| is kept fixed, into a linear program in which ρ is
fixed (to e.g. 1) and |b| is minimized. Unfortunately, there is no exactly equivalent
linear program, but we can use a Taylor expansion 4 to get the following
linear program (compare with linear programming approaches related to
learning, e.g. [31, 13, 1]):

    minimize    Σ_{t=1}^T b_t + C Σ_{i=1}^l ξ_i
    subject to  Σ_{t=1}^T b_t G_it ≥ 1 − ξ_i,   ξ_i ≥ 0,   b_t ≥ 0.        (25)

Essentially, this is the same algorithm as (24), but the slack variables
act differently, because only the Taylor expansion of 1/|b| was used.
Therefore, we will obtain different soft margins than in the previous section
(cf. figure 10).
Instead of using the l_1 norm in the optimization objective of (25), we
can also use the l_p norm. Clearly, each p will imply its own soft margin
characteristics. For p = 2 this will lead to an algorithm similar to the SVM.
4 From (24) it is straightforward to derive a problem which, for any fixed S > 0, is
equivalent to (24). In that problem one can fix ρ to 1 and optimize S instead. To retrieve a
linear program, the resulting term 1/S is replaced by its Taylor expansion around 1.
The optimization objective of an SVM is to find a function h_w which
minimizes a functional of the form [30]

    ||w||² + C Σ_{i=1}^l ξ_i                                           (26)

subject to the constraints

    y_i h_w(x_i) ≥ 1 − ξ_i,   ξ_i ≥ 0,   for i = 1, ..., l.

Here the variables ξ_i are the slack variables which make a soft margin
possible. The norm of the parameter vector w is a measure of the complexity
and of the size of the margin of the hypothesis h_w [30]. With the functional
(26) we get a trade-off (controlled by C) between the complexity of the
hypothesis and the "degree to which the hypothesis may differ from the
training patterns" (Σ_i ξ_i).
For ensemble learning we do not (yet) have such a measure of complexity.
Empirically, we observed the following: the more different the
weights for the hypotheses are, the higher the complexity of the ensemble.
With this in mind, we can use the l_p norm (p > 1) of the hypothesis
weight vector, ||b||_p, as a complexity measure. For example, if the weights are normalized,
||b||_p is small if the elements are approximately equal
(analogous to Bagging), and ||b||_p takes high values when some
hypotheses are strongly emphasized (far away from Bagging). This is intuitively
clear, because Bagging usually generates less complex classifiers
(with lower ||b||_p) than, for example, LP-AdaBoost (which can generate
very sparse representations for which ||b||_p is large).
Note that this argument holds only if the hypotheses are weak enough;
otherwise ||b||_p will not carry the desired complexity information.
Hence, we can apply the optimization principles of SVMs to AdaBoost
and get the following quadratic optimization problem:

    minimize    ||b||²_2 + C Σ_{i=1}^l ξ_i

with the constraints given in equation (25). This algorithm, which we call
QP_Reg-AdaBoost, is motivated by the connection to LP_Reg-AdaBoost
(cf. algorithm (25)) and by the analogy to the Support Vector algorithm.
It is expected that QP_Reg-AdaBoost achieves large improvements over
the solution of the original AdaBoost algorithm, especially in the case
of noise. In comparison with LP_Reg-AdaBoost we expect a similar
performance. Each "type of soft margin" implied by the norm of
the weight vector can have merits which may be needed by some specific
dataset.
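A corresponding sketch for QP_Reg-AdaBoost, again based on the reconstruction above, replaces the l_1 objective by ||b||²_2 + C Σ_i ξ_i under the constraints of (25); here a general-purpose solver (scipy's SLSQP) is used in place of a dedicated QP package.

import numpy as np
from scipy.optimize import minimize

def qp_reg_adaboost(G, C=1.0):
    l, T = G.shape
    x0 = np.concatenate([np.full(T, 1.0 / T), np.zeros(l)])   # start: [b, xi]

    def objective(x):
        b, xi = x[:T], x[T:]
        return np.dot(b, b) + C * xi.sum()                    # ||b||_2^2 + C*sum(xi)

    # margin constraints of (25): sum_t b_t G_it - 1 + xi_i >= 0
    cons = [{'type': 'ineq', 'fun': lambda x: G @ x[:T] - 1.0 + x[T:]}]
    bounds = [(0, None)] * (T + l)                            # b >= 0, xi >= 0
    res = minimize(objective, x0, constraints=cons, bounds=bounds, method='SLSQP')
    return res.x[:T], res.x[T:]

In practice a dedicated QP solver would be preferable for large ensembles; the sketch only illustrates the structure of the problem.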
Summarizing, we introduced a soft margin into AdaBoost by (a) regularizing
the objective function (10), (b) LP_Reg-AdaBoost, which uses slack
variables, and (c) QP_Reg-AdaBoost, which has an interesting relation to
SVMs. For an overall comparison of the margin distributions of original
AdaBoost, SVM, AdaBoost Reg and LP/QP-AdaBoost see figures 5, 6, 9
and 10.
5 Experiments
In order to evaluate the performance of our new algorithms, we make an
extensive comparison among the single RBF classifier, the original AdaBoost
algorithm, AdaBoost Reg , L/QP Reg -AdaBoost and a Support Vector
Machine (with RBF kernel).
Figure 10. Margin distribution graphs of LPReg-AdaBoost (left) and QPReg-AdaBoost
(right) with different values of C for the toy data set after 10^3 iterations. LPReg-AdaBoost
sometimes generates margins on the training set which are either 1 or -1 (step in the
distribution).
5.1 Experimental Setup
For this, we use 13 artificial and real-world datasets from the UCI, DELVE
and STATLOG benchmark repositories: banana (the toy data set used in the
previous sections), breast cancer 5 , diabetes, german, heart, image segment,
ringnorm, flare solar, splice, new-thyroid, titanic, twonorm, waveform. Some
of the problems are originally not binary classification problems, hence
a random partition into two classes was used 6 . At first we generate 100
partitions into training and test set (mostly about 60% : 40%). On each
partition we train a classifier and record its test set error.
In all experiments we combined 200 hypotheses. Clearly, this number
of hypotheses may not be optimal; however, AdaBoost with optimal early
stopping is often worse than any of the soft margin algorithms.
As base hypotheses we used RBF nets with adaptive centers as described
in appendix C. On each data set we used cross validation to find
the best single classifier model, which is then used in the ensemble learning
algorithms.
The parameter C of the regularized versions of AdaBoost and the parameters
(C, σ) of the SVM are optimized on the first five realizations of the training sets.
On each training set, 5-fold cross validation is used to find the best model
for this dataset 7 . Finally, the model parameters are computed as the median
of the five estimates. This way of estimating the parameters is
surely not possible in practice, but it makes the comparison more robust
and the results more reliable.
5 The breast cancer domain was obtained from the University Medical Center, Inst. of
Oncology, Ljubljana, Yugoslavia. Thanks go to M. Zwitter and M. Soklic for providing the
data.
6 A random partition generates a mapping m of the n original classes to two classes. For this,
a random ±1 vector m of length n is generated. The positive classes (and the negative ones,
respectively) are then concatenated.
7 The parameters selected by the cross validation are only near-optimal. Only 10-20 values
for each parameter are tested in two stages: first a global search (i.e. over a wide range of
the parameter space) was done to find a good guess of the parameter, which becomes more
precise in the second stage.
Table 1. Comparison among the six methods: single RBF classifier, AdaBoost (AB),
AdaBoostReg (ABR; p=2), L/QPReg-AdaBoost (L/QPR-AB) and a Support Vector Machine (SVM):
estimation of generalization error in % on 13 datasets (best method in bold face,
second best emphasized). AdaBoostReg gives the best overall performance.
            RBF        AB         ABR        LPR-AB     QPR-AB     SVM
Banana      10.8±0.6   12.3±0.7   10.9±0.4   10.7±0.4   10.9±0.5   11.5±0.6
B.Cancer    27.6±4.7   30.4±4.7   26.5±5.5   26.8±6.1   25.9±4.6   26.0±4.7
Diabetes    24.1±1.9   26.5±2.3   23.9±1.6   24.1±1.9   25.4±2.2   23.5±1.7
German      24.7±2.4   27.5±2.5   24.3±2.1   24.8±2.2   25.2±2.1   23.6±2.1
Heart       17.1±3.3   20.3±3.4   16.6±3.7   14.5±3.5   17.2±3.4   16.0±3.3
Image        3.3±0.6    2.7±0.7    2.7±0.6    2.8±0.6    2.7±0.6    3.0±0.6
Ringnorm     1.7±0.2    1.9±0.3    1.6±0.1    2.2±0.5    1.9±0.2    1.7±0.1
F.Solar     34.4±2.0   35.7±1.8   34.2±2.2   34.8±2.1   36.2±1.8   32.4±1.8
Splice       9.9±1.0   10.3±0.6    9.5±0.7    9.9±1.4   10.3±0.6   10.8±0.6
Thyroid      4.5±2.1    4.4±2.2    4.4±2.1    4.6±2.2    4.4±2.2    4.8±2.2
Titanic     23.3±1.3   22.6±1.2   22.6±1.2   24.0±4.4   22.7±1.1   22.4±1.0
Twonorm      2.9±0.3    3.0±0.3    2.7±0.2    3.2±0.4    3.0±0.3    3.0±0.2
Waveform    10.6±1.0   10.8±0.6    9.8±0.8   10.5±1.0   10.1±0.5    9.9±0.4
Mean %       6.6±5.8   11.9±7.9    1.7±1.9    8.9±10.8   5.8±5.5    4.6±5.4
Note that, to perform the simulations of this setup, we had to train a very
large number of adaptive RBF nets and to solve more than 10^5 mathematical
programming problems - a task that would have taken altogether 2 years of
computing time on a single Ultra-SPARC machine if we had not distributed
it over several computers.
5.2 Experimental Results
In table 1 the average generalization performance (with standard deviation)
over the 100 partitions of the data sets is shown. The second to last
line of table 1 shows the row 'Mean %', which is computed as follows: for
each dataset the average error rates of all classifier types are divided by
the minimum error rate and 1 is subtracted. The resulting numbers are
averaged over the 13 datasets, and the variance is given as well. The last
line shows the probabilities that a particular method wins, i.e. gives the
smallest generalization error, on the basis of our experiments (mean and
variance over all 13 datasets).
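The 'Mean %' line can be reproduced from the table entries; the following short snippet computes it for the first two rows of table 1.

import numpy as np

errors = {   # dataset -> error rates for [RBF, AB, ABR, LPR-AB, QPR-AB, SVM]
    'Banana':   [10.8, 12.3, 10.9, 10.7, 10.9, 11.5],
    'B.Cancer': [27.6, 30.4, 26.5, 26.8, 25.9, 26.0],
}
rel = np.array([[e / min(row) - 1.0 for e in row] for row in errors.values()])
print(100 * rel.mean(axis=0))   # average relative distance (in %) to the best method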
Our experiments on noisy data (cf. table 1) show that:
- The results of AdaBoost are in almost all cases worse than those of the
single classifier. This clearly shows the overfitting of AdaBoost: it is
not able to deal with noise in the data.
- The averaged results for AdaBoost_Reg are slightly better (Mean%
and Win%) than the results of the SVM, which is known to be an
excellent classifier. The single RBF classifier wins less often than the
SVM (for a comparison in the regression case see [20]).
- L/QP_Reg-AdaBoost improves the results of AdaBoost. This is due
to the established soft margin. But the results are not as good as
those of AdaBoost_Reg and the SVM. One reason is that the
hypotheses generated by AdaBoost (aimed at constructing a hard
margin) may not provide the appropriate basis to generate a good soft
margin with the mathematical programming approaches.
- We can observe that quadratic programming gives slightly better results
than linear programming. This may be due to the fact that
the hypothesis coefficients generated by LP_Reg-AdaBoost are more
sparse (smaller ensemble); bigger ensembles may have a better generalization
ability (e.g. due to the reduction of variance [7]). Furthermore,
with QP-AdaBoost we prefer ensembles with approximately
equally weighted hypotheses. As stated in section 4.3, this
implies a lower complexity of the combined hypothesis, which can
lead to a better generalization performance.
- The results of AdaBoost_Reg are in all cases (much) better than
those of AdaBoost and better than those of the single RBF classifier.
AdaBoost_Reg wins most often and shows the best average
performance. This demonstrates the noise robustness of the proposed
algorithm.
The slightly inferior performance of the SVM compared to AdaBoost_Reg may
be explained by (a) the fixed σ of the RBF kernel for the SVM (losing
multi-scale information), (b) the coarse model selection, and (c) a less
suitable error function (noise model) of the SV algorithm.
Summarizing, the original AdaBoost algorithm is only useful for low-noise
cases, where the classes are easily separable (as shown for OCR
[29, 16]). L/QP_Reg-AdaBoost can improve the ensemble structure by
introducing a soft margin: the same hypotheses (just with another weighting)
can result in an ensemble that shows a much better generalization
performance.
The hypotheses used by L/QP_Reg-AdaBoost may be suboptimal,
because they are not part of the optimization process that
aims for a soft margin. AdaBoost_Reg does not have this problem: the hypotheses
are generated such that they are appropriate to form the desired
soft margin. AdaBoost_Reg extends the applicability of Boosting/Arcing
methods to non-separable cases and should be applied if the data is noisy.
6 Conclusion
We have shown that AdaBoost performs an approximate gradient decent
in an error function, that optimizes the margin (cf. equation 10, see also
[8, 22, 12]). Asymptotically, all emphasis is concentrated on the difficult
patterns with small margins, easy patterns effectively do not contribute
to the error measure and are neglected in the training process (very much
similar to Support Vectors). It is shown theoretically and experimentally
that the cumulative margin distribution of the training patterns in the
margin area converges asymptotically to a step and therefore AdaBoost
asymptotically achieves a hard margin for classification. The asymptotic
margin distribution of AdaBoost is very similar to the margin distribution
of a SVM (for the separable case), accordingly the patterns lying in the
step part (Support Patterns) show a large overlap to the Support Vectors
found by a SVM. However, the representation found by AdaBoost is often
less sparse than for SVMs.
We discussed in detail that AdaBoost-type algorithms, and hard margin
classifiers in general, are noise-sensitive and prone to overfitting. We introduced
three regularization-based AdaBoost algorithms to alleviate the
overfitting problem of AdaBoost-type algorithms on high-noise data: (1)
direct incorporation of the regularization term into the error function
(AdaBoost_Reg), and the use of (2) linear and (3) quadratic programming with slack
variables. The essence of our proposal is to achieve a soft margin (through a
regularization term and slack variables) in contrast to the hard margin
classification used before. The soft margin approach allows us to control how
much we trust the data, so we are permitted to ignore noisy patterns
(e.g. outliers) which would otherwise have spoiled our classification. This
generalization is very much in the spirit of Support Vector Machines, which
also trade off the maximization of the margin and the minimization of the
classification errors by introducing slack variables.
In our experiments on noisy data, the proposed regularized versions of
AdaBoost (AdaBoost_Reg and L/QP_Reg-AdaBoost) show a more robust behavior
than the original AdaBoost algorithm. Furthermore, AdaBoost_Reg
exhibits a better overall generalization performance than all other algorithms,
including the Support Vector Machines. We conjecture that this
unexpected result is mostly due to the fact that the SVM can only use one
σ and therefore loses multi-scale information. AdaBoost does not have
this limitation, since we use RBF nets with adaptive kernel widths as base
hypotheses.
Our future work will concentrate on a continuing improvement of
AdaBoost-type algorithms for noisy real-world applications. Also, a further
analysis of the relation between AdaBoost (QP Reg -AdaBoost) and Support
Vector Machines from the margin's point of view seems promising, with
particular focus on the question of what good margin distributions should
look like. Moreover, it is interesting to see how the techniques established
in this work can be applied to AdaBoost in a regression scenario (cf. [2]).
Acknowledgements
We thank B. Schölkopf, A. Smola, T. Frieß, D. Schuurmans and B. Williamson
for valuable discussions. Partial funding
from EC STORM project number 25387 is gratefully acknowledged.
A Proof of Lemma 1
Proof 7 We define - t (z i ) :=
and from definition
of G and d we get
exp
e
where e
By definition, we have - t (z
1=l. Thus, we get
e
e
Z
e
e
Z (cf. step 4 in figure 1).
B Proof of Theorem 4
The proof follows the one of Theorem 5 in [28]; Theorem 4 is a generalization
for φ ≠ 1/2.
Proof 8 If yf(x) ≤ θ, then we have
y
and also
exp
Thus,
l
exp
exp
l
l
exp
y iT
where
l
exp
l
exp
exp
exp
exp
e bT =2
exp
because
With
recursively
Y
Plugging in the definition for b t we get
Y
Y
OE
/s
OE
s
OE
Y
OE
Y
Y
C RBF nets with adaptive centers
The RBF nets used in the experiments are an extension of the method
of Moody and Darken [19], since centers and variances are also adapted
(see also [3, 21]). The output of the network is computed as a linear
superposition of K basis functions,

    f(x) = Σ_{k=1}^{K} w_k g_k(x),

where w_k denotes the weights of the output layer. The
Gaussian basis functions g_k are defined as

    g_k(x) = exp( −||x − μ_k||² / (2 σ_k²) ),

where μ_k and σ_k² denote means and variances, respectively. In a first step,
the means μ_k are initialized with K-means clustering and the variances σ_k
are determined as the distance between μ_k and the closest μ_i (i ≠ k,
i ∈ {1, ..., K}). Then, in the following steps, we perform a gradient descent in
the regularized error function (weight decay)

    E = (1/(2l)) Σ_{i=1}^{l} ( y_i − f(x_i) )² + (λ/(2l)) ||w||².      (29)

Taking the derivative of equation (29) with respect to the RBF means μ_k and
variances σ_k we obtain

    ∂E/∂μ_k = (1/l) Σ_{i=1}^{l} ( f(x_i) − y_i ) w_k g_k(x_i) (x_i − μ_k) / σ_k²      (30)
Algorithm RBF-Net
Input:
    Sequence of labeled training patterns
    Number of RBF centers K
    Regularization constant λ
    Number of iterations T
Initialize:
    Run K-means clustering to find initial values for μ_k and determine σ_k
    as the distance between μ_k and the closest μ_i (i ≠ k).
Do for t = 1, ..., T:
    1.  Compute optimal output weights w = (G^T G + λ l I)^{-1} G^T y.
    2a. Compute gradients ∂E/∂μ_k and ∂E/∂σ_k as in (30) and (31) with the optimal w,
        and form a gradient vector v.
    2b. Estimate the conjugate direction v with the Fletcher-Reeves-Polak-Ribiere
        CG method [23].
    3a. Perform a line search to find the minimizing step size δ in direction v; in
        each evaluation of E recompute the optimal output weights w as in step 1.
    3b. Update μ_k and σ_k with v and δ.
Output: Optimized RBF net

Figure 11. Pseudo-code description of the RBF net algorithm, which is used as base learning
algorithm in the simulations with AdaBoost.
and

    ∂E/∂σ_k = (1/l) Σ_{i=1}^{l} ( f(x_i) − y_i ) w_k g_k(x_i) ||x_i − μ_k||² / σ_k³.     (31)

These two derivatives are employed in the minimization of equation (29)
by a conjugate gradient descent with line search, where we always compute
the optimal output weights in every evaluation of the error function during
the line search. The optimal output weights w = (w_1, ..., w_K)^T
can be computed in closed form by

    w = (G^T G + λ l I)^{-1} G^T y,

where G_ik = g_k(x_i), y denotes the output vector, and I an identity matrix.
For λ = 0 this corresponds to the calculation of a pseudo-inverse of G.
So, we simultaneously adjust the output weights and the RBF centers
and variances (see figure 11 for the pseudo-code of this algorithm). In this
way, the network fine-tunes itself to the data after the initial clustering
step; yet, of course, overfitting has to be avoided by careful tuning of
the regularization parameter, the number of centers K and the number of
iterations (cf. [3]). In our experiments we always used up to ten CG iterations.
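For reference, the following Python sketch mirrors the procedure of figure 11 under simplifying assumptions: plain gradient steps are used instead of the conjugate-gradient line search, the error function is the reconstruction of (29) above, the output weights are recomputed with the closed-form ridge solution, and class labels are assumed to be ±1; the class name and hyperparameters are illustrative.

import numpy as np
from sklearn.cluster import KMeans

class AdaptiveRBFNet:
    def __init__(self, K=10, lam=1e-2, iters=10, lr=0.1):
        self.K, self.lam, self.iters, self.lr = K, lam, iters, lr

    def _design(self, X):
        d2 = ((X[:, None, :] - self.mu[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * self.sigma ** 2))          # G_ik = g_k(x_i)

    def _weights(self, G, y):
        l = len(y)                                            # w = (G'G + lam*l*I)^-1 G'y
        return np.linalg.solve(G.T @ G + self.lam * l * np.eye(self.K), G.T @ y)

    def fit(self, X, y):
        self.mu = KMeans(n_clusters=self.K, n_init=10).fit(X).cluster_centers_
        d = ((self.mu[:, None, :] - self.mu[None, :, :]) ** 2).sum(-1)
        np.fill_diagonal(d, np.inf)
        self.sigma = np.sqrt(d.min(1))                        # distance to closest other center
        l = len(y)
        for _ in range(self.iters):
            G = self._design(X)
            self.w = self._weights(G, y)
            r = G @ self.w - y                                # residuals f(x_i) - y_i
            # gradients of (29) w.r.t. centers and widths; the weight-decay term
            # does not depend on mu or sigma
            common = (r[:, None] * G) * self.w[None, :] / l
            grad_mu = (common[:, :, None] *
                       (X[:, None, :] - self.mu[None, :, :]) /
                       self.sigma[None, :, None] ** 2).sum(0)
            d2 = ((X[:, None, :] - self.mu[None, :, :]) ** 2).sum(-1)
            grad_sigma = (common * d2 / self.sigma[None, :] ** 3).sum(0)
            self.mu -= self.lr * grad_mu                      # plain gradient steps
            self.sigma = np.maximum(self.sigma - self.lr * grad_sigma, 1e-6)
        self.w = self._weights(self._design(X), y)
        return self

    def predict(self, X):
        return np.sign(self._design(X) @ self.w)              # +/-1 class labels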
--R
Combining support vector and mathematical programming methods for induction.
A boosting algorithm for regression.
Neural Networks for Pattern Recognition.
A training algorithm for optimal margin classifiers.
Bagging predictors.
Arcing the edge.
Prediction games and arcing algorithms.
Support vector networks.
A decision-theoretic generalization of on-line learning and an application to boosting
Game theory
Additive logistic regres- sion: a statistical view of boosting
Boosting in the limit: Maximizing the margin of learned ensembles.
Optimization by simulated annealing: Quantitative studies.
Learning algorithms for classification: A comparism on handwritten digit recognistion.
Nonlinear Programming.
Improved generalization through explicit optimization of margins.
Fast learning in networks of locally-tuned processing units
Using support vector machines for time series prediction.
Using support vector machines for time series prediction.
An asymptotic analysis of AdaBoost in the binary classification case.
Numerical Recipes in C.
Boosting first-order learning
Ensemble learning methods for classifi- cation
Improved boosting algorithms using confidence-rated predictions
Boosting the margin: a new explanation for the effectiveness of voting methods.
AdaBoosting neural networks.
The Nature of Statistical Learning Theory.
Density estimation using sv machines.
| classification;support vectors;arcing;ADABOOST;soft margin;large margin |
373623 | Customer Retention via Data Mining. | ``Customer Retention'' is an increasingly pressing issue in today's ever-competitive commercial arena. This is especially relevant and important for sales and services related industries. Motivated by a real-world problem faced by a large company, we proposed a solution that integrates various techniques of data mining, such as feature selection via induction, deviation analysis, and mining multiple concept-level association rules to form an intuitive and novel approach to gauging customer loyalty and predicting their likelihood of defection. Immediate action triggered by these ``early-warnings'' resulting from data mining is often the key to eventual customer retention. | Introduction
In the last decade, the increased dependency and widespread use of
databases in almost every business, scientific and government organization
has led to an explosive growth of data. Instead of being blessed with
more information to aid decision making, the overwhelming amounts of
data have inevitably resulted in the problem of "information overloading"
but "knowledge starvation", as human analysts are unable
to keep pace in digesting the data and turning it into useful knowledge
for application purposes. This situation has motivated some scientists
and researchers in the fields of artificial intelligence, machine learning,
statistics and databases to put their expertise together to form the field
of knowledge discovery in databases (KDD). KDD seeks to intelligently
analyze voluminous amount of information in databases and extract
previously unknown and useful knowledge (nuggets) from them (Fayyad
et al., 1996).
Active research in these fields has produced a wide range of effective
knowledge discovery techniques like ID3 (Quinlan, 1986) for
classification (used in C4.5 (Quinlan, 1993)) and Apriori (Agrawal and
Srikant, 1994) for association rule mining (used in DBMiner (Kamber
et al., 1997)), to cater to various applications in data mining. This
tremendous success achieved in the research domain has also spun off
a wide repertoire of high-quality, off-the-shelf commercial data mining
software/tools, like C5.0 by RuleQuest, MineSet by Silicon Graphics
and Intelligent Miner by IBM, to name a few. Many people saw these
tools as the catalyst for the success of data mining applications. After
all, many organizations are facing problems coping with overwhelming
amounts of data in their databases and are attracted by the potential
competitive advantages from data mining applications.
The availability of real-world problems and the wealth of data from
the organizations' databases provide an excellent test-bed for us to
perform practical data mining. Our work was motivated by a real-world
problem involving a collaboration with a large company to tackle the
pressing issue of "customer retention". Such collaborations between
academia and application domains to solve real-world problems represent
a positive step towards the success of data mining applications.
The proposed approach showcases an effective application of data mining
in the sales and services related industries, and reveals the complex
and intertwined process of practical data mining. More importantly, we
demonstrate that real-world data mining is an art of combining careful
study of the domain, intelligent analysis of the problems, and skillful
use of various tools from machine learning, statistics, and databases.
One truth about data mining applications is that even when the
problem or goal is clear and focused, the mining process still remains a
complicated one, involving multiple tasks across multiple stages. In our
application, although we are clear that we want to retain customers and
the goal is to "identify the potential defectors way before they actually
defect", it is still difficult to know where to start. Without studying
the domain, it is impossible for us to go further. The problem and goal
statement specified gave no clue to "what" tasks of data mining are
involved, "which" techniques are to be applied, and "how" they are
applied. Obviously, the problem of customer retention must be further
decomposed into several sub-problems such that the knowledge derived
from a task in one phase can serve as the input to the next phase. Since
the available data mining techniques and tools are designed to be task-specific
(according to the framework given in Figure 1) rather than
problem-specific, they cannot be applied directly to solve real-world
problems.
Many challenges can only be found in real-world applications. The
changing environment can cause the data to fluctuate and make the
previously discovered patterns partially invalid. Such a phenomenon is
termed "concept drift" (Widmer, 1996). The possible solutions include
incremental methods for updating the patterns and treating such
drifts as an opportunity for "interesting" discovery by using it to cue
the search for patterns of change (Matheus et al., 1994). In practice,
we often find many organizations collecting data as a by-product of
a business process. Hence, large databases with hundreds of tables,
millions of records and multi-gigabyte sizes are quite commonplace
in many application domains.
Figure 1. A framework for classification of data mining tools (by tasks).
In our case, the company's databases
typically logged more than 45,000 transactions per day. In a few
months, this database alone can easily aggregate to a few gigabytes
of data, without even including the many other operational databases.
To aggravate the matter, most of the databases are of high dimensionality,
with a very large number of fields (attributes). This results
in an exponential increase in the size of the search space, and hence intolerably
long computation times would be required to run any machine
learning algorithm. To resolve these problems, we need to pre-process
the data set to reduce the dimensionality of the problem. Another
practical issue is that concepts can have various levels of abstraction
or taxonomies. For this reason, knowledge or patterns have various
characteristics. However, most transactional and operational databases
are usually described in terms of low-level concepts and relations, like
"freight car companies". This is cognitively distant from the high-level
business concepts that are required for decision making, like "loyal-
customers". Since the generality of knowledge is needed to determine its
"interestingness" and applicability in practical problems, some forms
of generalization (and specialization) must be catered for. Our work
here will focus mainly on those challenges of the immediate need for
customer retention. Obviously, there are also many other challenges for
practical data mining applications (Fayyad et al., 1996). They include
challenges like "understandability of patterns", "complex relationships
between fields", "missing and noisy data", "data over-fitting", so on
and so forth.
Another point that cannot be stressed enough is that a data mining
application requires substantial know-how, skills and experience
from the user. This is often not fully understood by many, thus leading
to failures in data mining applications. This work illustrates the kind of
complexity involved in solving real-world problems and further justifies
that practical data mining is an art that requires more than just directly
applying the off-the-shelf techniques and tools. The remainder of the
work is organized as follows. In Section 2, we begin with a domain
analysis and task discovery of the customer retention problem faced
by the company. We then perform a top-down problem decomposition
and list various sub-problems. This is important because each sub-problem
must map to only one specific task of data mining, so that the
existing data mining tools and techniques can effectively be applied.
In Section 3, we illustrate the use of feature selection via induction
to choose the objective "indicators" (or salient features) about customer
loyalty. With this technique, "concept drifts", i.e., changes in the definition
of a concept over time (Clearwater et al., 1989), can be captured
as they take place. This is followed by the use of deviation analysis
and forecasting to monitor these indicators for the potentially
defecting customers in Section 4. Next, in Section 5, we elaborate
on the employment of multiple-level association rule mining in
predicting customers who are likely to follow the previously identified
defecting customers and leave the company. These "early-warnings"
of the possible chain-effect will enable the marketing division to take
rectification actions or tailor special packages to retain important customers
and their potential followers before the defection takes place.
Finally, Section 6 concludes the work and suggests some implications
of this project.
Because of the confidentiality of the databases used and the sensitivity
of services provided by the company, we deliberately use "the com-
pany" throughout the paper and describe applications through some
intuitive examples as much as possible. In explaining basic concepts,
we use the "credit" database from the UC Irvine Machine Learning
Repository (Merz and Murphy, 1996) for illustrative purpose.
2. Understanding the Domain and Problems
The company is a service provider; because of its confidentiality and
sensitivity, details shall be left out.
However, in order for the readers to understand the workings of the
techniques used in this data mining application, we use an imaginary
example of international telephone satellite relaying services to explain
the problems and techniques. The scenario is as follows: there are several
major satellite relaying centers for the major regions around the world, and
each center has its own network; companies doing international business
usually need services from three centers in order to communicate with
any business partner in the world. In other words, three parties are
involved: the sender, the company, and the receiver. The sender and
the receiver are usually determined by the needs of the business, but
they can choose the company over others, or vice versa, as different
relayers provide varying services and charges with contracts of various
periods. Companies (senders and receivers) can form consortiums or
groups to enjoy discounts of various sorts offered by the company. This
work is, in essence, about how a relayer can keep the most customers using its
relaying services. The goal of customer retention is to retain customers
before they switch to other relayers.
As in many organizations, the dependency on information technology
has inevitably resulted in an explosive growth of data, far beyond the
human analyst's ability to understand and make use of the data for
competitive advantage. This is also due to the fact that the conventional
databases and spreadsheets used by these analysts are not designed for
identifying patterns in the databases, nor do they possess the
capability to select or consolidate the different sources of information
from a large number of heterogeneous databases. In
view of these inadequacies, the company involved sees "data warehousing"
and "data mining" as two intuitive solutions. Maintaining a data
warehouse separately from the transactional databases allows special
organization, access methods and implementation methods to support
the multi-dimensional views and operations typical of OLAP. In fact, an
OLAP tool can be integrated into the data warehouse to support complex
OLAP queries involving multi-dimensional data representation, visualization
and interactive viewing, while not degrading the performance
of the operational databases.
2.1. Problem identification and analysis
The first step in our analysis involves identifying opportunities for
data mining applications. This step is important because not every
problem can be solved by data mining. Some guidelines for selecting a
potential data mining application include "the potential for significant
impact", "availability of sufficient data with low noise level", "relevance
of attributes", and "presence of domain knowledge". In fact, nearly one-fifth
of the whole development time was spent on identifying the "right"
problems for application, as well as justifying the use of data mining
over the conventional approaches. The possibility that an application
can be generalized into solving other similar problems in related industries
is also taken into consideration. With these factors in mind, our
feasibility study has identified the problems of customer retention for
data mining application.
The motivation of our work comes from the fact that the problem
of customer retention is becoming an increasingly pressing issue for organizations
in the sales (e.g., departmental stores, banking, insurance,
etc.) and services (e.g., providers of Internet and/or telecommunications
services) related industries. From an economic point of view,
customer retention directly translates to a huge saving in marketing
costs, as highlighted by Coopers & Lybrand's Vince Bowey:
"A lot of companies have not figured out what
it costs them to acquire a new customer, it's
usually pretty shocking. We estimate that it
costs three to five times more money to acquire a
new customer than to keep the ones you have."
This problem is especially important for organizations that have a
small customer base where each customer represents a group of a large
number of companies. The defection of any single customer means a
significant percentage loss in the revenue of the company. Naturally,
the company has a strong interest to retain each and every one of
them. This is especially important when the company is facing keen
competition from many upcoming relayers in the neighboring countries.
The increased competitiveness means that the marketing division is
facing higher risk of customer defection, which potentially escalates
marketing costs due to defections.
Furthermore, the relaying activities are entrepot in nature, i.e., the company
acts as an intermediary center for call transfer and repackaging for conference calls.
Hence, the defection of a customer to another relayer is likely to influence
its associated business alliances to also defect, in order to maintain
their already established business relationship. Thus, the possibility of
a "snow-ball effect" from a defection further gives rise to a pressing
need for an effective method to identify the less obvious associations
between customers for the benefit of marketing and customer retention.
Finally, contrary to a popular belief, focusing on retaining customers
is not a passive policy. This is because existing customers can, and
often do, bring in new customers through their business associations
and expansion, as well as through word of mouth.
Despite the importance of the "customer retention" problem, many
organizations simply cannot do anything about it, since their customers
are free to leave without warning when they are dissatisfied with the
services or if better offers come by. Fortunately, this is not the case
in this relaying business. By offering attractive discount and rebate
schemes to the customers, especially the major senders (consortiums,
or groups), the company can usually at least secure them under some
short-term contractual agreements. This means that potential defectors
cannot just pull out abruptly. Moreover, most customers are usually
"committed" by the sheer size of their regular relaying transactions
through the company, which must be pre-scheduled some time (e.g.,
weeks or months) in advance. Hence, potential defectors will have to
take a couple of months to gradually trim-out their outgoing volumes in
relaying business before their eventual withdrawal. This gradual pulling
out process offers the company the opportunity to identify the signals
of defection and to predict the possible chain-effect with each defec-
tion. "Early warnings" like these can give the marketing department
ample lead-time to investigate the causes and take rectification actions
before defections actually take place. As was described in (Matheus
et al., 1994), the "interestingness" of a deviation can be related to the
estimated benefit achievable through available actions. In the course
of our work, it was observed that managers and executives have rarely
realized that the very knowledge that can help them alleviate these
problems lies no further than within the wealth of data already at
their disposal. In the context of our application, the company logs in
every call, its duration, the length of the relay, and other information
into various databases daily. This wealth of information, which has so
often been underestimated and under-utilized, can be invaluable for
our data mining applications.
2.2. Task identification: Customer retention
In this application, the goal is focused and clear. We are concerned
about "customer defection" and the goal is to "identify the potentially
defecting customers so that steps can be taken to retain them before
they actually defect". At first glimpse, it is difficult to start. The key
to finding a solution is to iteratively decompose it into some solvable
sub-problems. The problem analysis and task decomposition for our
application is briefly summarized in Figure 2.
As we can see in the figure, the main problem of customer retention
is decomposed into three sub-problems or sub-goals. In the first sub-
problem, we need to identify a list of "objective indicators" that are
representative of customer profiles. The task is to select pertinent indicators
from many possible ones. We need to employ a reliable method
that can select good indicators from data sets sampled from various
databases. The selection of indicators is recast into a problem of feature
selection for classification (Liu and Motoda, 1998). Attributes are identified
as relevant if they are influential to the classification of
customer loyalty and the likelihood of defection. As predictive accuracy
is the main concern here, a wrapper model of feature selection is
adopted (John et al., 1994). Updating the "objective indicators" simply
means running an induction algorithm on the data sets again.
Figure 2. Task identification for "customer retention" in the relaying business.

Thereafter,
we can capture concept drifts by frequently updating these objective
indicators.
The list of objective indicators then serves as the input to the
next sub-problem: identifying the list of potential defectors. This is
achieved by using deviation analysis to compare the actual performance
of a customer with that forecasted from the customer's historical data for
each of the objective indicators.
The work of customer retention might seem complete at this stage;
however, we pursue it further. For each potential defector discovered in the
second phase, we look for the associated business partners who are
likely to follow suit in order to maintain their established relaying business.
Such predictions can be obtained through the mining of multiple-level
association rules from the organization and association databases.
If they can be convinced to stay with the company, the potential
defectors may have to stay as well.
In general, our work illustrates how the study and decomposition
of customer retention and the integration of various techniques of data
mining can give rise to an effective solution to a complicated real-world
problem. The details of our implementation will be discussed in the
following.
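Although the deviation-analysis phase is only detailed later in the paper, its role in the decomposition can be illustrated with a small sketch: for each customer and indicator, a forecast is derived from the historical series (here a simple linear trend, which is only an assumption, since the paper's own forecasting model is not specified in this excerpt), and customers whose actual values fall well below the forecast are flagged. All names and thresholds are illustrative.

import numpy as np

def flag_potential_defectors(history, actual, threshold=0.25):
    """history: dict customer -> 1-D array of past indicator values
       actual:  dict customer -> current indicator value"""
    flagged = []
    for cust, series in history.items():
        t = np.arange(len(series))
        slope, intercept = np.polyfit(t, series, 1)       # linear trend forecast
        forecast = slope * len(series) + intercept
        deviation = (actual[cust] - forecast) / max(abs(forecast), 1e-9)
        if deviation < -threshold:                        # e.g. volume 25% below forecast
            flagged.append((cust, deviation))
    return flagged

# example: a customer whose relaying volume drops well below its trend is flagged
hist = {'custA': np.array([100, 105, 110, 115]), 'custB': np.array([80, 82, 81, 83])}
now  = {'custA': 70.0, 'custB': 84.0}
print(flag_potential_defectors(hist, now))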
3. Identifying Concept Drifts
The key to solving the problem of customer retention is to identify the
list of "potential defectors" and predict the consequences following each
potential defection even before they actually take place. Intuitively, this
gives rise to the need for us to first identify a set of relevant attributes or
"indicators" that are representative for the target concept of "customer
loyalty and their likelihood of defection". The knowledge found can
then be cross-validated against the existing knowledge, and employed
to capture concept drifts.
3.1. The conventional approach
Because of the clear importance of this task, most organizations have
had to rely on the judgments of their human experts to devise a set
of "subjective indicators". This has several shortcomings. To begin
with, the human's analytical and pattern recognition abilities are extremely
weak in identifying factors that are relevant to the classification
outcomes. Therefore, the set of subjective indicators specified by the
human experts will often involve uncertainty or incompleteness to some
extent. Nevertheless, the list of subjective indicators is still important,
because it can help us determine which of the many databases contain
important attributes and should be used. Secondly, in the context of a
dynamic environment over a period of time, the set of indicators becomes
susceptible to concept drifts. Human experts may not be able to
detect such subtle changes, especially if they have taken place gradually.
Because of such dynamism, a business may be at risk of monitoring the
outdated and irrelevant indicators, or of missing important indicators.
There are other issues as well. For instance, the comprehensive logging
of daily operations in the company has caused the dimensionality
of databases to be very high (ranging from dozens to hundreds of at-
tributes), although many of the attributes are irrelevant or redundant
to the target concept. Usually, the raw data sets are too large for the
user to monitor effectively and efficiently. Hence, it is necessary for
us to first identify a set of relevant indicators. This is a problem of
feature selection (Blum and Langley, 1997). A comprehensive survey
of various methods can be found in (Dash and Liu, 1997). Since our
problem requires the set of objective indicators to be highly accu-
rate, C4.5 (Quinlan, 1993) - a decision tree induction (Quinlan, 1986)
method for classification was chosen due to its proven good perfor-
mance. The underlying idea is that attributes used in a decision tree
that gives high accuracy are relevant and meaningful indicators.
3.2. Decision tree induction for classification
Classification is one of the most important and frequently seen tasks in
data mining: given a large set of training data of the form {<A_1, A_2, ..., A_n; C>},
its objective is to learn an accurate model of how the attribute-values
(the A_i's) can determine the class-label C. "Decision trees" is one possible
model (Quinlan, 1986) from which a set of disjunctive if-then classification
rules can be derived. Classification rules having high predictive
accuracy (or confidence) are employed for various tasks. Firstly, the
model can be used to perform classification for future data having
unknown class outcome - prediction. For example, a bank manager can
check a future application against the classification model obtained
from historical data to determine whether the applicant should be
granted credit - a screening process. Secondly, since the attributes
appearing in the classification rules are influential to the eventual outcome
of the classification, the user can gain a better understanding of and
insight into the characteristics of each target class. This is especially
useful in some real-world applications where the users seek to achieve
specific classification outcome.
3.3. Using decision tree induction to identify objective
indicators
Our actual data, sampled from the transactional databases residing
in Oracle, has more than 40 attributes and 60,000 periodical records.
Because of its confidentiality, we use the "credit" data to illustrate
the idea of classification using decision trees. The data is partially shown
in Table I; the last column shows the class values. C4.5 is applied to
the data to derive updated classification rules about customers of
the following form:

    IF Jobless = ... AND Bought = ... AND Saving = ... THEN Granted = Yes
The attributes that appear in the classification rules are objective
indicators as they are found in the data and considered influential to the
target concept "Granted". For instance, from the above classification
rule, the user can conclude that attributes "Jobless", "Bought" and
"Saving" are influential and relevant to the target class of "Granted",
while other attributes such as "Married", "Age", and "Sex" are not.
Table I. Training data set for classification in credit-screening (partial).

    Jobless  Bought  Sex     ...  Age  Savings  Granted
    ...      jewel   female  ...  26   $60K     Yes
Periodical applications of this method allow the Marketing Department
to objectively identify the most recent set of influencing indicators
in order to capture possible concept drifts. The set of objective indicators
is then compared with the set of subjective indicators identified
by the domain experts. As a result of the cross-validation process, the
eventual set of merged indicators is more updated and reliable for gauging
the loyalty of customers and their likelihood of defecting. Monitors
are then placed on these "loyalty-indicators" in the data warehouse
so that if any customer shows significant deviations beyond a certain
minimum deviation threshold δ_Min, an exception report of defection
will be triggered. This is described in the next section.
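The selection of objective indicators can be prototyped outside the production C4.5 setup. The sketch below only illustrates the underlying idea, that attributes which discriminate the class well are candidate indicators, by ranking attributes with a plain information-gain score on a toy credit table; all attribute values are hypothetical and C4.5 itself is not reproduced.

```python
from collections import Counter
from math import log2

# Toy records in the spirit of Table I; values are hypothetical.
records = [
    {"Jobless": "no",  "Bought": "jewel", "Saving": "high", "Granted": "Yes"},
    {"Jobless": "no",  "Bought": "car",   "Saving": "low",  "Granted": "Yes"},
    {"Jobless": "yes", "Bought": "jewel", "Saving": "low",  "Granted": "No"},
    {"Jobless": "yes", "Bought": "none",  "Saving": "low",  "Granted": "No"},
    {"Jobless": "no",  "Bought": "none",  "Saving": "high", "Granted": "Yes"},
]

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def information_gain(rows, attr, target="Granted"):
    base = entropy([r[target] for r in rows])
    remainder = 0.0
    for value in {r[attr] for r in rows}:
        subset = [r[target] for r in rows if r[attr] == value]
        remainder += len(subset) / len(rows) * entropy(subset)
    return base - remainder

candidates = ["Jobless", "Bought", "Saving"]
for attr in sorted(candidates, key=lambda a: -information_gain(records, a)):
    print(attr, round(information_gain(records, attr), 3))
```

Attributes with a persistently high score across periodic runs would be reported as objective indicators and cross-validated against the experts' subjective list.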
4. Predicting the Potential Defectors
Having identified objective indicators, we now need to identify those
customers who show signs of defecting according to these indicators
and then predict the potentially defecting customers. This is possible
because the customers' contractual agreements with the company
and their sheer size of relaying commitments established through the
company effectively prolong a defecting process.
4.1. The conventional approach
An intuitive way of identifying the potential defectors is to monitor
performances of customers over periods. Similarly, a marketing analyst
in the company will prepare a periodic (for example, weekly, monthly, or
yearly) report showing the percentage change of every major customer's
relaying volume in the most recent period relative to that of the previous month
and year. By doing so, the gauge of the loyalty of a customer (sender
or receiver) is tagged to the relative volume change of the customers.
Sometimes, this method does manage to identify potential defectors
who pull out abruptly. However, in most cases, such simplistic analysis
can be improved.
The flaw is mainly that the comparisons made on the volume performances
should normally be relative. Even when a sender is maintaining
a steady volume with the company, it could still be defecting. For ex-
ample, say sender S 1 is a fast-growing company who decides to change
to another new relayer. Hence, S 1 diverts its increased volume into the
new relayer, while maintaining its existing volumes with the company.
When it has eventually re-established its relaying business over at the
new relayer, it will pull out completely and suddenly. Another situation
involves an economic boom during which all the senders in the Asia
Service-Route are growing at a rate of 10%; a particular sender in the same
route showing no increase in volume should then arouse suspicion.
Besides, such a comparison relies too heavily on volume performance
as an indication of defection. In fact, there are many other indicators
that can also reflect the customer's loyalty, depending on appli-
cations. The use of multiple indicators can produce more interesting
findings. In fact, the quality and performance of the calls that are
despatched to receivers, as reflected in some of the attributes of the Relaying
Performance database, can also reflect the loyalty of customers.
There are still many other issues such as the seasonality observed in
the relaying business, incorporation of the trends in the external en-
vironments, and the profiles of the subject under study. Solving such
complicated issues requires some predictive modeling methods that can
extrapolate the future performance from the historic performances with
referencing norms. From these defined norms, significant changes can
then be derived. Our approach further compares the magnitude of every
deviation to that of the other subjects having similar profiles. Only then
can we ascertain whether the deviation actually means a defection.
We use a trend-seasonal forecasting model to predict future performance
for every customer, based on past performance in the various
performance databases. With the predicted norms to serve as the refer-
ences, we can then employ deviation analysis (Piatetsky-Shapiro and
Matheus, 1994) to identify "trimming patterns" among the customers.
Without these predictions and analyses, human analysts can barely
observe any phenomenon of gradual deviations at the initial stage of
defection.
4.2. Deviation analysis
Deviation analysis is the discovery of significant changes or deviations
of some pre-defined measures from its normative value over a time
period in a data set (Piatetsky-Shapiro and Matheus, 1994). In most
applications, the measured normative value is expressed as the expected
value (expectation) of some time-series, or as a forecasted value
calculated by applying some mathematical model, like the "seasonal
model", that describes the series. In our work, the deviation δ_t for time t
is given by:

    δ_t = (A_t - E_t) / E_t

where A_t is the actual value of the indicator at time t, and E_t is the expected
value of the indicator, obtained over a time period from the time-series.
If the analysis detects any deviation δ_t exceeding a certain user-specified
"minimum deviation threshold δ_Min", i.e., |δ_t| >= δ_Min for any of the
pre-defined measures in the temporal database, it suggests that a significant
deviation has occurred. Some exception reports are then generated.
Since significant deviations from the norms are unexpected, they should
be "interesting" to the user.
Such statistical analysis method is widely employed in data mining
to discover a few really important and relevant deviations among a
multitude of potentially interesting changes in the temporal databases.
Without such a method, most of the changes are normally "drowned
out" by the mass of data (Matheus et al., 1994) and will remain un-
noticed. Even if human analysts were able to detect the more abrupt
pattern changes in the time-series, it would be extremely difficult to
monitor such a large number of deviations over a long period of time.
Nevertheless, finding these patterns is interesting in discovering higher-level
relationships.
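A minimal sketch of this deviation test, assuming the relative deviation definition given above and a hypothetical 10% threshold:

```python
def deviations(actual, expected):
    """Relative deviation of each period's actual value from its forecast norm."""
    return [(a - e) / e for a, e in zip(actual, expected)]

def flag_significant(actual, expected, delta_min=0.10):
    """Return the periods whose |deviation| reaches the minimum deviation threshold."""
    return [(t, round(d, 3)) for t, d in enumerate(deviations(actual, expected))
            if abs(d) >= delta_min]

# Hypothetical monthly relaying volumes for one customer.
expected = [100, 110, 120, 130]
actual   = [ 98, 108,  99, 104]
print(flag_significant(actual, expected))   # periods 2 and 3 deviate by more than 10%
```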
4.3. Forecasting
The measured normative value E t of a time-series is often calculated
by applying some mathematical models. This section will briefly introduce
the concept of time-series forecasting. Forecasting is an area
of predictive modelling that involves the building of an appropriate
mathematical model from the historical data, followed by applying
the model to forecasting future measures. Most management decisions
today depend on information from forecasting. This is especially important
for big organizations. Given the large number of time-series
to be forecasted periodically, computer-based quantitative modelling is
the only realistic alternative.
Developing a Trend-Seasonal Forecasting Model
In general, there are two common classes of forecasting models used in
Management Science. They are time-series models and causal models.
In the former, a series of future "performances" are predicted based on
a period of historical behaviours, whereas in the latter, it is predicted
based on other known and quantifiable factors that will affect "perfor-
mances". In our work, we adopt the former because of the wealth of
available historical data in databases, which can serve as the basis for
the training and adjustment of the model. The causal model has many
complications and difficulties that are still under research.
The particular forecasting model adopted here is a type of multiplicative
trend-seasonal model. A multiplicative seasonal model means
that the expected measure in any season/month t within a year is given
by A × S_t, where A is the base value of the current estimate and S_t is the
seasonal index for period t. A seasonal index defines the ratio of the actual
value of the time series (week, month, or quarter) to the average for the
year. Hence, a value for S t above 1 means that the expected measure in
that period exceeds the base value A, and vice versa. A seasonal model
is adopted because the performance of a relaying business (e.g., the
import and export volumes) follows a definite pattern that repeats itself
cyclically over years. For example, peak volumes are always expected
before the seasons of Christmas and New Year. In addition, since the
relaying business also exhibits a constant but steady growth of business
volumes and customers over years, a trend component is incorporated
into the multiplicative seasonal model. Combining these components
into a single model, we have a multiplicative trend-seasonal model,
which is used to predict the various measures. The model can generally
be stated as the expected deseasonalized forecast for the period (the base value
adjusted by the trend) multiplied by the seasonal adjustment:

    F_t = (A + B × t) × S_t

where F_t is the forecast for month t, t ∈ [1..12], A is the base value of the current
estimate, B is the slope of the trend line, and S_t is the seasonal index for month t.
To forecast, the user should specify three types of data sets: the
warm-up data, training data, and forecast data. First, the warm-up
data set, which comprises a selected range of historical data, is used to
compute the initial estimates of base value A, slope of the trend line
B and seasonality indices S_t for each month t. This provides the initial
unadjusted forecasting model. Next, the training data set is selected
from the next period of data that is not used in the warm-up data set.
This step uses current estimates of A, slope of the trend line B, and
seasonality indices S t to extrapolate the forecasted measures F t . The
difference between the computed forecast F t and the actual measures is
then used to adjust the estimates of A, B and the seasonality constants
of S t based on "exponential smoothing" (to be discussed next). This
step is crucial as it adjusts the above three factors in the forecasting
model proportionally according to the fluctuations observed in the actual
measures from the predicted forecast measures. Last, a range of
future periods after the training data are selected to form the forecast
data set. The predicted measures in these periods are extrapolated
using trained base-value A, slope-value B, and seasonal indices S t .
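As a rough illustration, and assuming the multiplicative form F_t = (A + B·t)·S_t with hypothetical parameter values, the forecasting step might look like:

```python
def forecast(A, B, S, t):
    """Multiplicative trend-seasonal forecast for month t (1..12):
    deseasonalized level-plus-trend, scaled by the seasonal index."""
    return (A + B * t) * S[t - 1]

# Hypothetical trained estimates: base value, monthly trend slope, 12 seasonal indices.
A, B = 10_000.0, 150.0
S = [0.9, 0.85, 0.95, 1.0, 1.0, 1.05, 1.1, 1.05, 1.0, 1.05, 1.2, 1.3]
print([round(forecast(A, B, S, t)) for t in (1, 11, 12)])   # peak months forecast higher
```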
Exponential Smoothing
The method of exponential smoothing works like an auto-pilot in which
it is designed to continuously use the forecast error in one period to
correct and improve the forecast of the next period. On the basis of
comparing the forecast F t with the actual measure in that period, the
adjustment method can compute new estimates for base A t , slope of the
trend line B t and seasonality indices S t by adjusting three smoothing
constants α, β, and γ, respectively.
In the following example, we will only make use of the base value A
and its smoothing constant α for illustration. From the historical data,
we obtain an estimate of A = 10,000, which is then used as the forecast
for the actual outcome in 1995. Due to an economic boom in 1995, the
actual performance is 11,000. Thus, we had a forecast error of
11,000 - 10,000 = 1,000. To account for random fluctuation, we adjust the
estimate of A for the 1996 forecast by the fraction of the forecast error
that we attribute to the actual shift in the value of A. We can specify
this fraction (in the range 0 to 1) in the form of a smoothing constant.
For instance, if α is set to 0.1, we are actually attributing 10% of the
current forecast error to an actual shift in the value of A and 90% to
randomness. In general, the closer the smoothing constant is to 1, the
larger the fraction of the forecast error we are attributing to an actual
shift.
Usually, these smoothing constants are left to the control of the
end-user in a dynamic environment, although empirical experiments
have shown that a value between 0.10 and 0.30 for all the smoothing
constants often results in reliable forecasts. However, if the user expects
the level of the estimate to change permanently in the immediate
future because of some special circumstances, then a larger value of
a smoothing constant (like 0.7) should be used for a short period of
time. Once the computed level of the forecasting model has changed
in accordance with these special circumstances, the user should then
switch back to a smaller value of smoothing constant.
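A minimal sketch of the smoothing update for the base value alone, reproducing the 1995/1996 example above (the β and γ updates for the trend and seasonality constants are analogous and omitted):

```python
def smooth_base(A, actual, forecast, alpha=0.1):
    """Attribute a fraction alpha of the forecast error to a real shift in the base value."""
    return A + alpha * (actual - forecast)

A = 10_000.0                                  # estimate used as the 1995 forecast
A_1996 = smooth_base(A, actual=11_000.0, forecast=A, alpha=0.1)
print(A_1996)                                 # 10100.0: 10% of the 1,000 error absorbed into A
```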
4.4. Using deviation analysis and forecasting to discover
potentially defecting customers
In our implementation, the normative values E t for each of the five
to seven "indicators" are first developed through a trend-seasonal
forecasting model (Levin et al., 1992), based on the customer's historical
performance in the temporal databases. With an annual size
of more than 50,000 records available for each indicator to train the
forecasting model, accuracy of the predicted normative values can be
increased. This should also be credited to additional factors taken into
consideration in our forecasting model:

- the use of seasonal indices adjusts the forecast according to the
  annual seasonal pattern in the time-series, and
- the use of exponential smoothing assigns more weight to the recent
  data, thus taking into account current circumstances, like the recent
  economic downturn in South-East Asia.
With the normative values of every indicator forecasted for every
customer, deviation analysis is performed to detect those customers
who show significant deviations, i.e., |δ_t| >= δ_Min. These customers are
deemed to be potentially defecting and this warrants a further "in-
terestingness validation" (Matheus et al., 1994). In other words, their
deviations are further compared with those ffi SR of the (aggregated)
customers operating in the same Service-Route. By doing so, we take
into account the trends in the external environment and profiles of the
subject under consideration. This will ensure that a "real" deviation is
exclusive only to the specific subject and not some general phenomenon
experienced by other subjects too in the same Service-Route. For ex-
ample, the Asian boom in the middle of the last decade generally boosted
the volumes of those in the Asia Service-Route. Similarly, the recent
Asian economic crisis has caused an overall reduction in the volumes of
those senders and receivers in the Asia Service-Route. An illustration
of one such analysis is shown in Figure 3.
Figure 3. An illustration of deviation analysis: (left) a SQL-like query interface for "monitor" setting (e.g., Select distinct Sender From Indicator I_1 Where ... And ...); (right) an "interestingness" evaluation of the deviation analysis, listing each sender's deviation against that of its Service-Route.
In the analysis, several senders showed significant
deviations δ on indicator I_1 which satisfied the δ_Min of -10%. These are further compared
with the average deviation δ_SR of the aggregated customers in
their respective Service-Routes. In S_1's case, the general population in
its Service-Route performed reasonably well (a positive deviation of +1%), suggesting
that S 1 's deviation is unexpected and thus interesting. In S 4 's case,
the general population in SR 5 performs equally badly (a deviation of
-10%), suggesting that S 4 's deviation should be expected and thus not
interesting. As mentioned above, we can usually relate S 4 's kind of
deviation to some regional events like the current economy turmoil in
Asia that affects all the relaying operations in the Asia's Service-Routes.
If no such explanation can be found, then it would mean that all the
customers in the Service-Route are declining.
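The interestingness validation can be sketched as follows; the route names, the deviations, and the margin separating a customer from its Service-Route average are all hypothetical choices:

```python
def interesting_deviations(customer_dev, route_dev, delta_min=-0.10, margin=0.05):
    """Keep only customers whose deviation is significant and is not explained by the
    aggregated behaviour of their Service-Route."""
    flagged = {}
    for cust, (dev, route) in customer_dev.items():
        if dev <= delta_min and dev <= route_dev[route] - margin:
            flagged[cust] = (dev, route_dev[route])
    return flagged

customer_dev = {"S1": (-0.12, "SR2"), "S4": (-0.11, "SR5")}   # hypothetical figures
route_dev    = {"SR2": +0.01, "SR5": -0.10}
print(interesting_deviations(customer_dev, route_dev))        # only S1 is interesting
```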
If consistent deviations are also observed across the set of indicators
for "deviating" customers like S 1 and C 2 , then a periodic exception
report is produced to alert the domain experts on these possible "de-
fectors". Domain knowledge and insights are then applied to verify the
findings for each of these cases and the suspected potential defectors
will be monitored closely for the subsequent periods. Persistent deviations
are strong signs of likely defection. Besides performing deviation
analysis on the Customer concept, similar analysis can also be applied
to investigate and identify upcoming or weakening Markets (continents
and countries) and Service-Routes for the purpose of marketing.
5. Avoiding the Chain-Effect
Many organizations would be content if they can predict the potentially
defecting customers. Nevertheless, this work takes one step further. We
ask who else will likely follow suit for each of the potential defectors.
Such association knowledge is especially important to a relaying busi-
ness. This is because the choice of a particular relayer linking different
senders and receivers via different Service-Routes and Markets is usually
dictated by a few major players. Since they carry very large relaying
volumes, they have great influence over the smaller companies. Hence,
the defection of a major sender will encourage similar behavior in their
associated business partners who will attempt to preserve established
relationships. This can inflict a severe dent to the financial health of a
relayer. Hence, there is much incentive for the Marketing Department
to have a full picture of the consequences from an identified potential
defection. If we wait until a "chain-effect" becomes observable to the
human analysts, it would be too late. In short, a preventive measure
should also be taken to take care of the followers when a potential
defector is detected as they can also influence the major players to
change their stands.
5.1. The conventional approach
The conventional approaches include information exchanges between
relayers, senders, and receivers. They offer only subjective and often
unreliable prediction of the association relationships among the cus-
tomers. From the earlier data analysis, we have identified that the
Transactions database, having some 120,000 records monthly, contains
attributes incoming messages and outgoing messages for every relaying
transaction. Table II depicts the design of the Transactions database.
Table
II. A database of Message-Transactions for association rule mining.
Date-Time Msg-Id In-Msg Sender Out-Msg Receiver .
970501-1210 AB0012 M1 HongKong M4 Amsterdam .
970601-0115 AO9912 M7 Frankfurt M2 Kuala Lumpur .
From such transactional records, we can mine for association rules (Agrawal
et al., 1993) which represent the transactional relationships between
messages, senders and receivers, written in the notation of an association rule
(for example, an association between a particular sender and a particular receiver of a message).
Although the mining of association rules at the message level will
give a good idea of the association relationships between senders and
receivers, knowledge of this level does not provide much business value
for our application. This is because knowledge at too low a level (over-
specific) will end up looking like the raw data and having little general
meaning. Since most concepts in a real-world context involve multiple
levels of abstractions, it causes many problems for application. Hierarchical
concepts are also present in our application domain. For example,
our analysis of concepts Customer and Market reveals some taxonomies
as follows:
    {groups - consortiums - senders} ∈ Customer
    {continents - countries - relayers} ∈ Market
As an example, we will elaborate on the concept of Customer. The
taxonomy is shown in Figure 4. A group is an association of consortiums
that forms for the purpose of negotiating a better volume
rebate/discount or qualifying for a better charging scheme. Most consortiums
will therefore form groups with their associated business
partners. Hence, a group consists of many consortiums, and each consortium
owns several individual senders. With this hierarchy, we can
deduce a trivial association rule, which is "a consortium is associated
with some other consortiums in the same group." Therefore, the defection
of one of them may affect their partners in the group. This is
because the remaining may fail to reach the minimum volume quota
in order to enjoy the rebate or discount. Since the association rules
at a low level, say at the sender level, are too specific to have much
application value, we mine for generalized association rules (Srikant
and Agrawal, 1996), or generate rules that are as general as possible
by taking the existing taxonomies into account. A similar approach
to finding multiple-level association rules (Han and Fu, 1996) is
employed here.
Figure 4. A taxonomy of the "Customer" concept in a relaying business: the group level contains groups G_1 and G_2; each group contains three consortiums (1.1, 1.2, 1.3 under G_1 and 2.1, 2.2, 2.3 under G_2); and each consortium in turn owns three individual senders at the sender level (1.1.1 through 2.3.3).
5.2. Multiple-Level Association Rules
Many databases in the real-world are transaction-oriented and do not
contain class-label. The most popular example is the supermarket's
bar-coded transactional data. Generally, the task of mining association
rules over a transactional database can be formally stated as follows:
Let I = {i_1, i_2, ..., i_m} be the set of items in the database. Each transaction
T in the database D has a unique identifier and contains a set of
items called an itemset. An itemset with k items is called a k-itemset.
The support of an itemset is the percentage of transactions in D that
contain the itemset. An association rule is a conditional implication
among itemsets, A ==> B, where the itemsets A, B ⊆ I. The support of
the association rule is given as the percentage of the transactions that
contain both A and B, and the confidence is given as the conditional
probability that a transaction contains B, given that it contains A.
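A compact sketch of these two measures on toy message transactions (party names taken from Table II; the counts are hypothetical):

```python
def support(itemset, transactions):
    return sum(itemset <= t for t in transactions) / len(transactions)

def confidence(A, B, transactions):
    return support(A | B, transactions) / support(A, transactions)

# Each transaction is the set of parties involved in one relayed message.
transactions = [
    {"HongKong", "Amsterdam"},
    {"HongKong", "Amsterdam"},
    {"Frankfurt", "KualaLumpur"},
    {"HongKong", "KualaLumpur"},
]
A, B = {"HongKong"}, {"Amsterdam"}
print(support(A | B, transactions), confidence(A, B, transactions))   # 0.5 and 2/3
```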
Predicting the followers of a potential defector
The Transactions database contains the transactional associations
between senders and receivers with respect to messages. We can derive
association rules between individual senders and receivers (that is, for a given
message, rules that associate one particular sender with one particular receiver).
Associations at this level may be too specific to be of any value to a
business. We therefore generalize the associations to higher taxonomy
levels, in accordance to the hierarchical taxonomy of Customer (like
the one in Figure 4). For example, group G_1 includes consortiums
C_1.1, C_1.2, and C_1.3, while group G_2 contains consortiums C_2.1, C_2.2, and C_2.3.
Employing the algorithms presented in (Han and Fu, 1996), association
rules at the consortium and group levels are generated, for example:

    C_1.3 ==> C_2.1   (confidence 50%)
    G_1   ==> G_2     (confidence 70%)

The first association rule is interpreted as "if consortium C_1.3 defects,
so may C_2.1 (50% confidence)". The second association rule says
"if group G_1 defects, so may G_2 (70% confidence)". The following example
illustrates the "chain-effect" of a defection based on the taxonomy of
Customer. If consortium C_1.3 defects, consortiums C_1.1 and C_1.2 within
the same group G_1 are likely to defect too because of their established
business transactions. So will C_2.1, due to its association with C_1.3 found
in the above association rule. This will further affect the consortiums
C_2.2 and C_2.3 within the same group G_2. Similarly, should group
G_1 defect, group G_2 is likely to follow suit. Eventually, all the consortiums in
these two groups will defect together to another relayer. It is imperative
to detect this kind of chain reaction before it is too late.
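The chain-effect itself can be sketched as a closure computation over the taxonomy and the mined rules; the rule set below contains only the single cross-group rule discussed above and is otherwise hypothetical:

```python
# Taxonomy of Figure 4: consortiums grouped into G1 and G2 (sender level omitted).
group_of = {"C1.1": "G1", "C1.2": "G1", "C1.3": "G1",
            "C2.1": "G2", "C2.2": "G2", "C2.3": "G2"}
# Mined cross-group association rules with their confidences.
rules = {"C1.3": [("C2.1", 0.5)]}

def chain_effect(start, min_conf=0.5):
    """Close the set of likely co-defectors under group membership and association rules."""
    affected, frontier = {start}, [start]
    while frontier:
        c = frontier.pop()
        peers = [d for d, grp in group_of.items() if grp == group_of[c]]
        assoc = [d for d, conf in rules.get(c, []) if conf >= min_conf]
        for d in peers + assoc:
            if d not in affected:
                affected.add(d)
                frontier.append(d)
    return affected

print(sorted(chain_effect("C1.3")))   # the defection spreads to both groups
```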
The mining of multiple-level association rules enables the company
to regularly predict possible chain reaction of a defection, thus giving
it a good chance to take pertinent actions before a major customer
starts leaving for a new relayer and influences its associated business
partners to leave as well. The knowledge of associations also
allows the marketing department to customize attractive schemes for
the customers' alliances identified, in order to attract more volumes
from them. It should be highlighted here that association rules may
sound relatively easy for human analysts to detect. However, in
a database containing a few millions of transactional records, these
associations are simply "drowned out".
6. Conclusions
In the course of our work, we have identified some interesting objective
indicators among a large number of attributes. The finding has verified
our earlier conjecture on the limitations of human capabilities. In
addition, preliminary experiments on the historical data sets have successfully
identified some already-defected customers, way before they
showed prominent signs of defection. This work is significant because
our approach can be generalized into solving similar problems in the
sales and services related industries, like Telecommunications, Internet
Service Providers, Insurance, Cargo Transshipment, etc. For instance,
a popular strategy used by many companies in the services industry is
using attractive promotions and discounts to "lure" new customers into
short-term services under them. Even departmental stores in the sales
industry come up with their own VIP smart cards in a bid to retain their
customers. We would like to highlight that the information from the
logs and databases can potentially be turned into valuable knowledge
for competitive advantages. For example, the customers' particulars
and their profiles (like mobile-phone or Internet usage patterns) could
be mined for predicting a list of potential defectors among them. Since
most customers are bound to the services of a company for at least a
period of time (usually around a year), special offers can be made to
those who show signs of dissatisfaction.
Many ideas presented here can in fact be modified to suit various
applications of similar needs. For instance, although the third sub-task
in our work is made possible by the availability of the transactional
associations, unique to a business of hierarchical structure, there are
many other kinds of associations practitioners can look for in different
problem domains. Spatial associations can be identified and applied in
some property-related problems while sequential associations can be
found from a sales-transactional database and applied to predicting
future purchases in E-business. These different associations in different
problem domains can help infer valuable knowledge.
One of the goals of this work is to show that the maturity of data
mining has reached a point where large-scale applications to practical
problems are desirable and feasible. This work will hopefully create
some sort of "chain-effect" in motivating the strategic use of data mining
in business applications where conventional approaches fall short.
The success of practical applications serves to remind the executives
and managers that understanding the underlying concept of the data
mining methods remains the key to a successful data mining applica-
tion. There is no single data mining tool that can fit every kind
of problem. Neither is there any short-cut solution to a complicated
problem. A detailed analysis, a good design and a systematic development
are necessary for a successful application. This work further
demonstrates that data mining is very much an art, in the context of
practical applications.
Acknowledgements
We would like to thank Farhad Hussain and Manoranjan Dash for
helping us finalize this version of the paper, and the company involved
in the project for making this application possible, although it is unfortunate
that the identity of the company cannot be disclosed. We are also
indebted to the anonymous reviewers and the editor for their detailed
constructive suggestions and comments.
References
Advances in Knowledge Discovery and Data Mining. AAAI Press.
Quantitative Approaches to Management (8th Edition).
Feature Selection for Knowledge Discovery and Data Mining.
C4.5: Programs for Machine Learning.
Induction of Decision Trees.
Incremental Batch Learning.
Attribute-Oriented Induction in Data Mining.
Mining Quantitative Association Rules in Large Relational Tables.
From Data Mining to Knowledge Discovery.
Selection of Relevant Features and Examples in Machine Learning.
Database Mining.
Fast Algorithms for Mining Association Rules in Large Databases.
Keywords: data mining; deviation analysis; customer retention; multiple-level association rules; feature selection
Cryptography in Quadratic Function Fields

Abstract. We describe several cryptographic schemes in quadratic function fields of odd characteristic. In both the real and the imaginary representation of such a field, we present a Diffie-Hellman-like key exchange protocol as well as a public-key cryptosystem and a signature scheme of ElGamal type. Several of these schemes are improvements of systems previously found in the literature, while others are new. All systems are based on an appropriate discrete logarithm problem. In the imaginary setting, this is the discrete logarithm problem in the ideal class group of the field, or equivalently, in the Jacobian of the curve defining the function field. In the real case, the problem in question is the task of computing distances in the set of reduced principal ideals, which is a monoid under a suitable operation. Currently, the best general algorithms for solving both discrete logarithm problems are exponential (subexponential only in fields of high genus), resulting in a possibly higher level of security than that of conventional discrete logarithm based schemes.

1 Introduction
Since the introduction of the well-known Diffie-Hellman key exchange protocol [16], many cryptographic
schemes based on discrete logarithms in a variety of groups (and even semi-groups) have been developed.
Among them, the signature scheme due to ElGamal [17], now the basis of the U.S. Digital Signature Standard
[28], is notable. ElGamal also presented a discrete logarithm based public-key cryptosystem in [17].
Any finite group or semi-group G with a fast operation and a sufficiently difficult discrete logarithm problem
(DLP) lends itself to the use of discrete logarithm based cryptography. Diffie and Hellman as well as ElGamal
used
, the multiplicative group of a finite prime field. Buchmann et al. based a key exchange protocol
on discrete logarithms in the ideal class group of an imaginary quadratic number field [10, 9]. The first
example of a non-group underlying a discrete logarithm based system was the set R of reduced principal
ideals of a real quadratic number field, which admits a structure first explored by Shanks [35] and termed
infrastructure by him. A key exchange protocol using elements of R as keys was introduced in [11] and
Research supported by NSF grant DMS-9631647
implemented in [33]. A signature scheme using the same set was briefly mentioned in [8]. These ideas were
subsequently adapted to real quadratic function fields over finite fields, where the set of reduced principal
ideals exhibits an analogous infrastructure. The first discrete logarithm based system in real quadratic
function fields of odd characteristic was the key exchange protocol of [34], followed by a signature scheme
in [32], and both a key exchange and a signature algorithm for even characteristic in [27]. Other schemes
for real quadratic function fields and number fields, such as an oblivious transfer protocol, were discussed in
[7]. The function field schemes are faster, simpler, and easier to implement than the corresponding number
field systems. They use finite field arithmetic, thereby eliminating the problem of rational approximations,
and their underlying DLP can currently only be solved in exponential time, whereas for the corresponding
problem in number fields, a subexponential algorithm is known. Thus, these fields seem to represent a
promising setting for cryptography.
This paper considers a variety of discrete logarithm based cryptographic schemes in quadratic function fields
of odd characteristic. We present Diffie-Hellman-like key exchange protocols as well as ElGamal-like signature
schemes and public key systems in both the real and the imaginary model of such a field. Implementations
of hyperelliptic systems (see [24]) using the ideal class group of an imaginary quadratic function field are
discussed. For the real setting of a quadratic function field, we simplify the key exchange algorithm of [34].
We also improve on the signature schemes of [32] and [27] by shortening the signatures. Finally, we provide
a more rigorous and meaningful complexity analysis of these systems than [34, 32, 27] and investigate their
security and its relationship to the relevant discrete logarithm problems.
The fields we are considering are function fields k(C) of an elliptic or hyperelliptic curve C of genus g over
a finite field k of odd characteristic. If we write C in the form y^2 = D(t), where D(t) is a squarefree polynomial
with coefficients in k, then deg(D) = 2g+1 (imaginary case) or 2g+2 (real case). While it is simple to
convert an imaginary representation to a real one (for example, by replacing D(t) by D(t^-1)t^(2g+2), noting
that k(t) = k(t^-1)), the reverse is only possible if D(t) has a root a ∈ k (corresponding to a ramified rational
prime divisor of K/k(t)); we then replace D(t) by D(a + t^-1)t^(2g+2) [29]. Hence the real setting is more
general. A preliminary investigation [30] suggests that the arithmetic underlying our cryptographic schemes
has approximately the same complexity and shows roughly equal practical performance in both the real and
the imaginary model. A choice of setting for conducting cryptography (real versus imaginary) would depend
on the performance issue as well as the question of how difficult the DLP is in either model. Currently, the
best known general algorithms for solving this problem are exponential in both cases.
In the next section, we summarize the necessary basics about quadratic function fields. All the required
algorithms and their complexities are stated in section 3. We present our cryptographic schemes in section
4 and analyze their security in section 5.
2 Quadratic Function Fields
For an introduction to function fields, we refer the reader to [39]. Quadratic function fields are discussed
in considerable detail in [6]. Let k be a finite field of odd characteristic with q elements. A quadratic
function field is a quadratic extension K of the rational function field k(t) over k in the variable t. More
specifically, K = k(t)(√D), where D = D(t) is a polynomial in t
with coefficients in k which we may assume to be squarefree. K is (a) real (quadratic function field) if
degree deg(D) of D is even and the leading coefficient sgn(D) of D is a square in k. K is (an) imaginary
(quadratic function field) otherwise; that is, K is imaginary if deg(D) is odd or deg(D) is even and sgn(D)
is not a square in k. In the latter case, K is real quadratic over a quadratic extension of k, so we will
henceforth exclude this case. Elements in K have the form ff k(t). The conjugate of ff
is
If g denotes the genus of K, then K is an elliptic function field if a hyperelliptic function field if
? 1. Then imaginary. If K is real, then we can
"extract" a fixed square root
D of D in the field k((1=t)) of Puisseux series over k, so K k((1=t)). In
this case, every nonzero element ff 2 K has a representation
and am 6= 0. We set
Denote by k[t] the ring of polynomials with coefficients in k in the variable t and let O = k[t] be the integral
closure of k[t] in K. Then O is a k[t]-module of rank 2 with basis f1;
Dg. An (integral O-)ideal a is a
subset of O such that for any ff; fi 2 a and ' 2 O, ff a and 'ff 2 a. A fractional (O-)ideal a is a subset
of K such that da is an integral ideal for some nonzero d 2 k[t]. Every fractional ideal a is an O-submodule
of K. If the O-rank of a is 1, i.e. there exists ff 2 K such that a = f'ff j ' 2 Og, then a is principal and ff
is a generator of a; write a = (ff).
Henceforth, all ideals (fractional and integral) are assumed to be nonzero, so the term "ideal" will always
be synonymous with "nonzero ideal". Then every integral ideal a is a k[t]-module of rank 2 with a k[t]-basis
Dg where We may assume
that S and Q are monic and, after subtracting a suitable multiple of Q from P , that deg(P
S, Q, and P are unique. a is primitive if primitive with Q monic and deg(P
then the pair (Q; P ) is the standard representation of a (see [30]; [36, 37] also use the term adapted basis),
and a is said to be in standard form. In practice, we will require the degree inequality deg(P
we will not insist on Q being monic. A primitive ideal a reduced if deg(Q) g, the genus of K.
Hence every reduced ideal can be uniquely represented by a pair of polynomials
divides g. This "small" representation makes reduced ideals very suitable
for computation.
On the set I of (nonzero) fractional ideals of K, a multiplication is defined as follows. If a and b are fractional
ideals, then the product ab consists of all finite sums of products of the form αβ with α ∈ a and β ∈ b.
Under this multiplication, I is an infinite Abelian group with identity O. The set P of (nonzero) fractional
principal ideals is an infinite subgroup of I of finite index h', the ideal class number of K. The factor group
C = I/P is the ideal class group of K. Note also that the set of integral ideals is a sub-monoid of I. Two
fractional ideals a and b are equivalent if they lie in the same ideal class, i.e. a = (θ)b for some θ ∈ K*.
The element θ is a relative generator of a with respect to b. Write a ~ b. Every equivalence class of ideals
contains at least one and at most finitely many reduced ideals. If K is imaginary, then each class has a
unique reduced representative [6]; however, if K is real, then there can be many reduced representatives in
each ideal class, in fact, as many as O(q g ) reduced ideals.
In the imaginary case, we base our cryptographic schemes on the arithmetic in the ideal class group C of K.
Each ideal class is represented by its reduced representative. The product of two reduced ideals is generally
not reduced; however, one can compute the reduced representative in the class of the product ideal quickly.
Thus, the set of reduced ideals is a monoid under the following operation *: given two reduced ideals a and
b, let a * b be the reduced ideal in the class of ab. The underlying discrete logarithm problem is the DLP in
the class group C: given reduced ideals d and g with d ~ g^x, find x.
Unfortunately, this approach fails in the real quadratic setting, due to the fact that there are many reduced
representatives in each ideal class. Here, we restrict ourselves to the finite subset R of P of reduced principal
ideals. More exactly, we define the distance δ(a) of a reduced principal ideal a to be the degree of a generator
of minimal nonnegative degree. For n ∈ N_0, we call a the reduced principal ideal below n if δ(a) <= n and
n - δ(a) is minimal. Then R is a monoid under the following operation: given two reduced principal ideals a and b, let
a * b be the reduced principal ideal below δ(a) + δ(b). Here, the underlying DLP is the following: given reduced principal
ideals d and g so that d is the reduced principal ideal below xδ(g), find x (mod R), where R is the maximal
distance or the regulator of K. We will see that this problem is polynomially equivalent to the problem of
finding the distance of a reduced principal ideal.
In both the imaginary and the real case, we require efficient algorithms for the following tasks:
- Given two reduced ideals in standard form, compute a standard representation of the product ideal.
- Given this product ideal, compute a reduced representative in its class.
In the real setting, we need to solve the additional problem:
- Given n ∈ N, compute the reduced principal ideal below n.
3 Algorithms
In both the real and the imaginary models, the composition operation , its implementation, and its complexity
have previously been studied in considerable detail. For the imaginary setting, we refer the reader
to [12], [29], and [30]. The real case is discussed in [36], [34], [32], and again [30]. A detailed complexity
analysis with explicit O constants can be found in [30]. To make this paper somewhat self-contained, we
restate the composition procedures here, but we only sketch proofs of correctness or performance.
Algorithms pertaining to just the imaginary setting have the prefix "I" in their name. Analogously, algorithms
that only apply to the real case begin with the letter "R". We use the complexity model of [30]; that is,
all complexity estimates will be stated in terms of elementary field operations of quadratic complexity,
such as multiplication and inversion of field elements. Here, we assume standard complexity estimates for
polynomial arithmetic as described for example on pp. 109f. of [13]; in particular, multiplication of two
polynomials of respective degrees m and n (m n) requires O(mn) field operations, division with remainder
uses O(n(m field operations, and computation of extended gcd's takes O(m 2 ) field operations (see
also [30] for the last result).
Our first algorithm computes the product of two primitive ideals. It is valid in both the real and the
imaginary setting and the output will be in standard form.
Algorithm MULT (ideal multiplication, real and imaginary case)
Input: (Q_a, P_a) and (Q_b, P_b), where a = (Q_a, P_a) and b = (Q_b, P_b) are two reduced ideals (the ideals are principal
in the real case).
Output: (Q_c, P_c, S), where c = (Q_c, P_c) is a primitive ideal in standard form (principal in the real case) and
(S)c = ab. In the imaginary case, S need not be output.
Algorithm:
1. S <- gcd(Q_a, Q_b, P_a + P_b), together with X, Y, Z ∈ k[t] such that S = X Q_a + Y Q_b + Z (P_a + P_b).
2. Q_c <- Q_a Q_b / S^2.
3. P_c <- (X Q_a P_b + Y Q_b P_a + Z (P_a P_b + D)) / S (mod Q_c), with deg(P_c) < deg(Q_c).
Proposition 3.1 The parameters c and S computed by Algorithm MULT satisfy (S)c = ab, deg(Q_c) <= 2g, and deg(S) <= g,
and the algorithm performs O(g^2) field operations.
Proof: The correctness of the algorithm is proved in Section II:2 of [36] for the real case and follows from [12]
for the imaginary case. The degree bounds follow from the fact that a and b are reduced and in standard
form. For the complexity result, see Proposition 5 of [30]. 2
Henceforth, we need to treat the imaginary and real settings separately. We begin with the imaginary
situation and first describe the composition operation which computes the unique reduced representative
in the class of the product ideal ab, where a and b are two reduced ideals. The procedure uses two different
types of reduction steps. The first step is only used once at the beginning of the reduction process. If this
does not produce a reduced ideal, the second step, which is computationally more efficient than the first
step, is used subsequently, until a reduced ideal is obtained.
Algorithm I-RED-STEP1 (initial reduction step, imaginary case)
Input: (Q, P), where a = (Q, P) is a primitive ideal in standard form.
Output: (Q_+, P_+), where a_+ = (Q_+, P_+) is an ideal equivalent to a in standard form.
Algorithm: Q_+ <- (D - P^2)/Q;  P_+ <- (-P) mod Q_+, with deg(P_+) < deg(Q_+).
Algorithm I-RED-STEP2 (subsequent reduction step, imaginary case)
Input: The output (Q, P) of I-RED-STEP1 or of a previous application of I-RED-STEP2.
Output: (Q_+, P_+), where a_+ = (Q_+, P_+) is an ideal equivalent to a in standard form.
Algorithm: Q_+ <- (D - P^2)/Q;  P_+ <- (-P) mod Q_+ (so again deg(P_+) < deg(Q_+)).
We observe that for both reduction steps, deg(Q_+) < deg(Q) whenever deg(Q) > g, so repeated reduction eventually yields a reduced ideal.
Algorithm I-COMPOSITION (ideal composition, imaginary case)
Input: (Q_a, P_a) and (Q_b, P_b), where a = (Q_a, P_a) and b = (Q_b, P_b) are two reduced ideals in standard form.
Output: (Q_c, P_c), where c = (Q_c, P_c) is the reduced ideal equivalent to ab in standard form.
Algorithm:
1. (Q_c, P_c) <- MULT(Q_a, P_a, Q_b, P_b).
2. If deg(Q_c) > g then
   2.1 (Q_c, P_c) <- I-RED-STEP1(Q_c, P_c),
   2.2 while deg(Q_c) > g do
       2.2.1 (Q_c, P_c) <- I-RED-STEP2(Q_c, P_c).
Proposition 3.2 The ideal c computed by Algorithm I-COMPOSITION is the reduced ideal equivalent to
ab in standard form. Furthermore, the algorithm performs O(g 2 ) field operations.
Proof: For the correctness of the algorithm, see [12]. By Theorem 1 of [30], the complexity of the procedure
is O(g^2).
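The composition-plus-reduction operation of this section is, in essence, Cantor's algorithm on the Jacobian. The sketch below is not the paper's Algorithms MULT and I-COMPOSITION verbatim, but the standard composition they are based on, written with sympy polynomial arithmetic over a toy curve y^2 = x^5 + 1 over F_7 (genus 2); all concrete choices are hypothetical.

```python
from sympy import symbols, Poly

x = symbols("x")
p, genus = 7, 2                        # toy parameters (hypothetical choice)
D = Poly(x**5 + 1, x, modulus=p)       # squarefree of degree 2*genus + 1: imaginary case

def xgcd(a, b):
    """Extended Euclidean algorithm for Polys over GF(p): returns (g, s, t) with s*a + t*b = g."""
    r0, r1 = a, b
    s0, s1 = Poly(1, x, modulus=p), Poly(0, x, modulus=p)
    t0, t1 = Poly(0, x, modulus=p), Poly(1, x, modulus=p)
    while not r1.is_zero:
        q, r = r0.div(r1)
        r0, r1 = r1, r
        s0, s1 = s1, s0 - q * s1
        t0, t1 = t1, t0 - q * t1
    return r0, s0, t0

def compose(a, b):
    """Reduced ideal equivalent to the product of two reduced ideals given as (Q, P) pairs."""
    q1, p1 = a
    q2, p2 = b
    d1, e1, e2 = xgcd(q1, q2)                    # d1 = e1*q1 + e2*q2
    d, c1, c2 = xgcd(d1, p1 + p2)                # d  = c1*d1 + c2*(p1 + p2)
    s1, s2, s3 = c1 * e1, c1 * e2, c2
    q = (q1 * q2).quo(d ** 2)
    v = (s1 * q1 * p2 + s2 * q2 * p1 + s3 * (p1 * p2 + D)).quo(d).rem(q)
    while q.degree() > genus:                    # reduction steps
        q = (D - v ** 2).quo(q)
        v = (-v).rem(q)
    q = q.monic()
    return q, v.rem(q)

g_ideal = (Poly(x, x, modulus=p), Poly(1, x, modulus=p))   # reduced ideal: x divides D - 1
print(compose(g_ideal, g_ideal))                 # -> (Poly(x**2, ...), Poly(1, ...))
```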
For our cryptographic schemes, we require an algorithm for "exponentiation" of reduced ideals. This method
is based on the standard repated squaring and multiplying technique used for ordinary exponentiation (see
for example Algorithm 1.2.3, p. 9, of [13]).
Algorithm I-EXP (exponentiation, imaginary case)
Input: (Q a ; P a ; n) where a = (Q a ; P a ) is a reduced ideal in standard form and n 2 N 0 .
Output: (Q_b, P_b), where b = (Q_b, P_b) is the reduced ideal equivalent to a^n in standard form.
Algorithm:
1. If n = 0 then
   1.1 (Q_b, P_b) <- (1, 0)   (the ideal O),
   else
   1.2 compute the binary representation n = Σ_{i=0}^{l} b_i 2^(l-i) with b_0 = 1,
   1.3 (Q_b, P_b) <- (Q_a, P_a),
   1.4 for i <- 1 to l do
       1.4.1 (Q_b, P_b) <- I-COMPOSITION(Q_b, P_b, Q_b, P_b),
       1.4.2 if b_i = 1 then (Q_b, P_b) <- I-COMPOSITION(Q_b, P_b, Q_a, P_a).
Proposition 3.3 The ideal computed by Algorithm I-EXP is the reduced ideal equivalent to a^n, and the
algorithm performs O(max{1, g^2 log n}) field operations.
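The square-and-multiply skeleton of I-EXP (and later R-EXP) is independent of the ideal arithmetic; a generic sketch, checked here against ordinary modular exponentiation as a stand-in for ideal composition:

```python
def power(compose, identity, base, n):
    """Square-and-multiply, processing the exponent's bits from the most significant end,
    exactly as I-EXP does with ideal composition in place of multiplication."""
    result = identity
    for bit in format(n, "b") if n > 0 else "":
        result = compose(result, result)       # doubling step
        if bit == "1":
            result = compose(result, base)     # multiply step
    return result

# Sanity check with modular multiplication standing in for ideal composition.
mod_mul = lambda a, b: (a * b) % 1009
assert power(mod_mul, 1, 5, 123) == pow(5, 123, 1009)
```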
We now proceed with the real setting. For brevity, we set d = ⌊√D⌋,
i.e. d is the polynomial part of a
(fixed) square root of D as defined in Section 2. As in the imaginary model, we will have two reduction
steps; the first one is to be applied immediately after ideal multiplication, while the second, more efficient
one is for subsequent use. Here the output ideal will not be in standard form, even if the input ideal is.
Recall that the reduced principal ideals form a finite set R = {r_1 = O, r_2, ..., r_m}. The reduction steps can
also be used to move from any ideal r_i ∈ R to the next reduced principal ideal r_(i+1) (1 <= i < m); applying
reduction to r_m yields r_1, so the movement through R is periodic.
If {Q_i, P_i + √D} represents a basis of a reduced ideal r_i, then both reduction steps produce a reduced
representation of the next ideal r_(i+1), that is, a representation (Q_(i+1), P_(i+1)) with deg(Q_(i+1)) <= g; in
fact, d and P_(i+1) agree in their two highest coefficients [36].
In the real setting, all the reduced ideals produced by our algorithms are in reduced form. If a user prefers
ideals in standard form, he or she can easily convert the reduced representation (Q; P ) to a standard representation
by replacing P by P (mod Q), with deg(P) < deg(Q).
Algorithm R-RED-STEP1 (initial reduction step, real case)
Input: (Q, P), where a = (Q, P) is a primitive principal ideal.
Output: (Q_+, P_+), where a_+ = (Q_+, P_+) is a primitive principal ideal equivalent to a.
Algorithm:
1. a <- ⌊(d + P)/Q⌋,
2. P_+ <- d - ((d + P) mod Q);  Q_+ <- (D - P_+^2)/Q.
Algorithm R-RED-STEP2 (subsequent reduction step, real case)
Input: The output (Q, P) of R-RED-STEP1 or of a previous application of R-RED-STEP2.
Output: (Q_+, P_+), where a_+ = (Q_+, P_+) is a primitive principal ideal equivalent to a.
Algorithm:
1. a <- ⌊(d + P)/Q⌋,
2. P_+ <- d - ((d + P) mod Q);  Q_+ <- (D - P_+^2)/Q.
For both reduction steps, we have deg(Q_+) < deg(Q) if the input ideal is not reduced;
in this case, we again have deg(Q_+) <= deg(Q) - 2. If the input ideal is reduced, then the output ideal is
also reduced and in reduced representation; in particular deg(Q_+) <= g.
Let a 1 ; a be a sequence of primitive principal ideals where a i+1 is obtained by applying one of the
reduction steps to a i (i 2 N). For i 0, we associate with each ideal a
K. If we set
Y
then a is the conjugate of ' i . Since ff i ff
and hence for i 2
Our next algorithm shows how to obtain from any primitive principal ideal a reduced one. If the input ideal
is not reduced, then we simply apply reduction steps until we obtain a reduced ideal (Q; P ), i.e. deg(Q) g.
Since each reduction step reduces the degree of Q by at least 2, this must eventually happen.
Algorithm R-REDUCE (ideal reduction, real case)
Input: (Q a ; P a ) where a = (Q a ; P a ) is a primitive principal ideal.
Output: (Q_b, P_b, ε), where b = (Q_b, P_b) is a reduced principal ideal in reduced form and ε is the degree of a
relative generator of b with respect to a.
Algorithm:
1.
2. If deg(Q b ) ? g, then
2.1 Q / Q a .
2.2
2.3 While deg(Q b ) ? g do
2.3.1
2.3.2 ffl
2.4 a /
Proposition 3.4 The ideal b computed by Algorithm R-REDUCE is a reduced principal ideal in reduced
form, and ε is the degree of a relative generator of b with respect to a.
Proof: For the proof that b is a reduced principal ideal in reduced form, see [36]. Suppose the reduction
steps in the algorithm generate ideals a
i is minimal with deg(Q i ) g, so a i+1 is the first reduced ideal in the sequence. Then we have
at the end of the while loop, where ff At the end of the algorithm,
which is correct by (3.1). 2
be a sequence of reduced principal ideals, where as before, r i+1 is obtained by applying a
reduction step to r i (i 2 N). The relative distance from r i to r 1 is
. Then the relative distance ffi i;1 is a nonnegative function on the set R of reduced principal
ideals which strictly increases with i. Since deg(a
reduced representation of r i , (3.1) implies for
In particular, if r is a generator of minimal degree of r i . In this case, ffi simply
called the distance of r i and is denoted by ffi (r Hence, in general, we have
reduction steps, starting with r yield all of R.
is the regulator of K.
Let 0 <= k < R. The reduced principal ideal below k is the unique reduced principal ideal r_i with δ(r_i) <= k < δ(r_(i+1)). If a and b are reduced principal ideals,
then the ideal a * b is the reduced principal ideal below δ(a) + δ(b). We point out
that the computation of a * b does not require knowledge of δ(a) or δ(b).
We will oftentimes need to advance from a reduced principal ideal a certain specified length in R. For
example, if we multiply two reduced principal ideals a and b using MULT and reduce the result using
R-REDUCE, then we still need to perform reduction steps until we actually reach the ideal a b below
The next five algorithms give as output a reduced principal ideal together with its 'error' ffl in distance from
the input; that is, the difference between the distances of the input and the output ideals. This error will
always be an integer between \Gammag and 0. Note, however, that the ideal distances themselves are never used.
Algorithm R-ADVANCE (advancement in R, real case)
Input: (Q_a, P_a, k), where a = (Q_a, P_a) is a reduced principal ideal in reduced form and k ∈ N_0.
Output: (Q_b, P_b, ε), where b = (Q_b, P_b) is the reduced principal ideal below δ(a) + k, in reduced form, and ε = δ(b, a) - k.
Algorithm:
1.
2. If ffi k, then
2.1
2.2 While ffi k do
2.2.1
2.2.2
2.3 ffl
Proposition 3.5 The quantities b and ε computed by Algorithm R-ADVANCE are, respectively, the reduced
principal ideal below δ(a) + k and the value of δ(b, a) - k. Furthermore, the algorithm performs O(g · max{k, g})
field operations.
Proof: In each step, the value of ffi is the relative distance from the ideal with basis (Q to a by (3.2).
So the final ideal b has maximal distance such that ffi (b; a) k and is thus the reduced principal ideal below
Now the degrees of P a , Q a , and d are all bounded by g + 1, so step 2.1 requires O(g 2 ) field operations. Since
by (3.2), each reduction step advances the value of ffi by at least 1, the loop in step 2.2 is executed at most
k times (provided k ? 0). Each reduction step in step 2.2.1 performs O(g deg(a)) operations. By (3.2), the
sum of the degrees of all the a values is O(ffi(b; O(k). So the total number of operations performed in
step 2 is O(gk). 2
We note that one could slightly improve algorithm R-ADVANCE by replacing the reduction step in step
2.2.1 by a call of R-RED-STEP2 and adding the appropriate inputs to the algorithm. We avoided this here
in order to keep the description of the algorithm simpler and more transparent.
Algorithm R-COMPOSITION (ideal composition, real case)
Input: (Q_a, P_a) and (Q_b, P_b), where a = (Q_a, P_a) and b = (Q_b, P_b) are two reduced principal ideals.
Output: (Q_c, P_c, ε), where c = (Q_c, P_c) = a * b is in reduced form and ε = δ(c) - δ(a) - δ(b).
Algorithm:
1.
2.
3.
Proposition 3.6 Algorithm R-COMPOSITION computes the reduced principal ideal a * b below δ(a) + δ(b)
and the quantity ε = δ(a * b) - δ(a) - δ(b). Furthermore, the algorithm performs O(g^2) field operations.
Proof: Let ~ c be the ideal generated by step 2. Then computes the ideal
below ffi (~ c) which is a b, and the correct value of ffl. Since 0
(see [36]), step 3 performs at most 2g reduction steps by (3.2). In fact, according to [30], the algorithm
performs 21g 2 +O(g) field operations. 2
The complexity results in the proofs of Propositions 3.2 and 3.6 show that composition in the real and the
imaginary model perform at essentially the same speed. A computer implementation by the authors of [30]
confirms that the same is true for the performance of both composition operations in practice.
Using repeated applications of Algorithm R-COMPOSITION in combination with R-ADVANCE, we can adapt
the binary exponentiation technique mentioned earlier to compute, for a reduced principal ideal a and an
integer n ∈ N_0, the reduced ideal below nδ(a).
Algorithm R-EXP (exponentiation, real case)
Input: (Q a ; P a ; n) where a = (Q a ; P a ) is a reduced principal ideal in reduced form and n 2 N 0 .
Output: (Q_b, P_b, ε), where b = (Q_b, P_b) is the reduced principal ideal below nδ(a), in reduced form, and ε = δ(b) - nδ(a).
Algorithm:
1. ffl / 0.
2. If
2.1
else
2.2 Compute the binary representation
2.3
2.4 For i / 1 to l do
2.4.1
2.4.2
2.4.3 If b
2.4.3.1
2.4.3.2
Proposition 3.7 The ideal computed by Algorithm R-EXP is the reduced principal ideal below nδ(a). Furthermore,
the algorithm performs O(max{1, g^2 log n}) field operations.
Proof: We first observe that (1; d) is the reduced representation of O which is the reduced principal ideal
below nffi(a) for Assume now that n ? 0 and set s
to show that at the end of the i-th iteration of the for loop, b is the reduced
principal ideal below s i ffi (a) and ffl
is correct. Now consider the (i 1)-st iteration of the for loop. Step 2.4.1 produces the reduced principal
ideal b below 2ffi(b i ) and 2.4.2 generates the reduced principal ideal ~ b below
and
which is correct. Suppose b
the reduced principal ideal " b below
Finally, step 2.4.3.2 generates the
reduced principal ideal b i+1 below
and
Now \Gammag ffi; ffl 0 for the values of ffi and ffl throughout the algorithm, so each R-ADVANCE call advances
a distance of O(g). Hence the complexity result follows from Propositions 3.6 and 3.5. 2
Using the previous algorithm, we can now generate the reduced principal ideal below any nonnegative integer
k. Here, we make use of the fact that if r_1 = O = (1, d) (in reduced form), then r_2 = (D - d^2, d) with distance
δ(r_2) = g + 1. If k > g, we first compute the reduced principal ideal below nδ(r_2), where n = ⌊k/(g+1)⌋,
using R-EXP on the base ideal r_2 and the exponent n. Then we apply reduction
steps until we reach the reduced principal ideal below k.
Algorithm R-BELOW (generates an ideal of specific distance, real case)
Input: k ∈ N_0.
Output: (Q, P, ε), where r = (Q, P) is the reduced principal ideal below k and ε = δ(r) - k.
Algorithm:
1. n /
2. (Q a ; P a
3. (Q;
Proposition 3.8 Algorithm R-BELOW computes the reduced principal ideal r below k and the quantity
ε = δ(r) - k. Furthermore, the algorithm performs O(g^2 log k) field operations.
Proof: We have
4 Cryptographic Schemes
We now present three cryptographic schemes for both real and imaginary quadratic function fields, namely
a key exchange protocol, a public-key cryptosystem, and a signature scheme. Each system uses the "exponentiation"
method corresponding to the composition operation. The field K should be chosen so that q g is large. More
details on the choice of the field will be given in the next section. For simplicity, we specify the generating
polynomial to be monic.
We begin with the imaginary case. All our schemes require the following precomputation.
I-PRECOMPUTATION (precomputation, imaginary case)
1. Generate an odd prime power q,
2. generate a random squarefree monic polynomial D 2 F q [t] of odd degree,
3. generate a random ideal
4. publicize (q; D; Q; P ).
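Step 2 of this precomputation asks for a random squarefree monic polynomial D. The sketch below shows one way this could be realized: draw random monic polynomials and keep the first one whose gcd with its derivative is a nonzero constant. For simplicity it works over a prime field F_p rather than a general odd prime power q, and all class and method names here are assumptions for illustration only.

import java.security.SecureRandom;
import java.util.Arrays;

// Polynomials over F_p are coefficient arrays c[0] + c[1]*t + ... with entries in [0, p).
final class SquarefreeSketch {
    static long[] randomSquarefreeMonic(int degree, long p, SecureRandom rnd) {
        long[] d;
        do { d = randomMonic(degree, p, rnd); } while (!isSquarefree(d, p));
        return d;
    }

    static long[] randomMonic(int degree, long p, SecureRandom rnd) {
        long[] c = new long[degree + 1];
        for (int i = 0; i < degree; i++) c[i] = Math.floorMod(rnd.nextLong(), p);
        c[degree] = 1;                                       // monic of exact degree
        return c;
    }

    static boolean isSquarefree(long[] d, long p) {
        long[] g = gcd(d, derivative(d, p), p);
        return g.length == 1 && g[0] != 0;                   // gcd is a nonzero constant
    }

    static long[] derivative(long[] a, long p) {
        if (a.length <= 1) return new long[] { 0 };
        long[] d = new long[a.length - 1];
        for (int i = 1; i < a.length; i++) d[i - 1] = (a[i] * (i % p)) % p;
        return trim(d);
    }

    static long[] gcd(long[] a, long[] b, long p) {          // Euclidean algorithm
        while (!(b.length == 1 && b[0] == 0)) {
            long[] r = mod(a, b, p);
            a = b; b = r;
        }
        return a;
    }

    static long[] mod(long[] a, long[] b, long p) {          // remainder of a divided by b
        long[] r = a.clone();
        long invLead = modPow(b[b.length - 1], p - 2, p);    // Fermat inverse, p prime
        for (int i = r.length - 1; i >= b.length - 1; i--) {
            long q = (r[i] * invLead) % p;
            if (q == 0) continue;
            for (int j = 0; j < b.length; j++) {
                int k = i - (b.length - 1) + j;
                r[k] = Math.floorMod(r[k] - q * b[j], p);
            }
        }
        return trim(Arrays.copyOf(r, Math.max(1, b.length - 1)));
    }

    private static long[] trim(long[] a) {
        int n = a.length;
        while (n > 1 && a[n - 1] == 0) n--;
        return Arrays.copyOf(a, n);
    }

    private static long modPow(long base, long e, long p) {
        long r = 1 % p; base %= p;
        while (e > 0) { if ((e & 1) == 1) r = (r * base) % p; base = (base * base) % p; e >>= 1; }
        return r;
    }
}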
KEY EXCHANGE PROTOCOL (imaginary case)
Precomputation: Alice and Bob jointly obtain (q; D; Q; P ) by performing I-PRECOMPUTATION.
Protocol:
1. Alice
1.1 secretly generates an integer a, 0 < a < q^⌊g/2⌋,
1.2 computes (Q_a, P_a) ← I-EXP(Q, P, a),
1.3 transmits (Q a ; P a ) to Bob.
2. Bob
2.1 secretly generates an integer b, 0 < b < q^⌊g/2⌋,
2.2 computes (Q_b, P_b) ← I-EXP(Q, P, b),
2.3 transmits (Q b ; P b ) to Alice.
3. Alice computes (Q_k, P_k) ← I-EXP(Q_b, P_b, a).
4. Bob computes (Q_k, P_k) ← I-EXP(Q_a, P_a, b).
Both parties have now computed the unique reduced ideal equivalent to r^{ab} in standard form. They can use the polynomials (Q_k, P_k) (or any previously agreed upon portion thereof) as their key. Since the transmitted ideals (Q_a, P_a) and (Q_b, P_b) are reduced and in standard form, both parties transmit approximately 2g log q bits of information.
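To make the flow of the protocol explicit, here is a hedged sketch in which a GroupElement with a power method stands in for a reduced ideal acted on by I-EXP over the published base ideal; the names are illustrative assumptions, not the paper's interface.

import java.math.BigInteger;
import java.security.SecureRandom;

// Sketch of the Diffie-Hellman-style flow only: each party publishes
// base.power(secret) and raises the other party's public value to its own
// secret, so both sides obtain the reduced ideal equivalent to r^(a*b).
final class KeyExchangeSketch {
    interface GroupElement { GroupElement power(BigInteger n); }   // stand-in for I-EXP on an ideal

    static BigInteger pickSecret(BigInteger bound, SecureRandom rnd) {
        // secret exponent a with 0 < a < bound, where bound plays the role of q^(floor(g/2))
        BigInteger a;
        do { a = new BigInteger(bound.bitLength(), rnd); }
        while (a.signum() == 0 || a.compareTo(bound) >= 0);
        return a;
    }

    static GroupElement publicValue(GroupElement base, BigInteger secret) {
        return base.power(secret);
    }

    static GroupElement sharedKey(GroupElement otherPublic, BigInteger mySecret) {
        return otherPublic.power(mySecret);
    }
}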
PUBLIC-KEY CRYPTOSYSTEM (imaginary case)
Key Generation:
All participants jointly obtain (q; D; Q; P ) by performing I-PRECOMPUTATION.
Each participant
1. secretly generates an integer a, 0 < a < q^⌊g/2⌋,
2. computes (Q_a, P_a) ← I-EXP(Q, P, a),
3. makes (Q a ; P a ) the public key and a the secret key.
Encryption: To encrypt a message M , the sender (with secret key s)
1. looks up the recipient's public
2. computes (Q
3. repeat
3.1 generates the bit string x ∈ N obtained by concatenating the coefficients in F_q of the polynomial Q_k,
3.2 removes the first block m with m < x from M,
3.3 sends the ciphertext m ⊕ x (the bitwise "exclusive or" of m and x),
until all of M is encrypted.
Decryption: To decrypt a ciphertext C, the recipient (with secret key r)
1. looks up the sender's public key (Q
2. computes (Q
3. repeat
3.1 generates the bit string x 2 N obtained by concatenating the coefficients in F q of the polynomial
3.2 removes the first block c with c ! x from C,
3.3 computes m / c \Phi x,
until all of C is decrypted,
4. concatenates all the blocks m to obtain the plaintext M .
Once again, both parties compute the same reduced ideal (Q_k, P_k) and thus the same polynomial Q_k and bit string x. Since deg(Q_k) ≤ g, each block c of ciphertext is approximately g log q bits long.
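The encryption and decryption loops above reduce to XOR-ing successive blocks of the message with a keystream block x derived from the coefficients of Q_k. A minimal sketch of that step is given below; how x is produced from the shared reduced ideal is abstracted into a parameter, and the names are illustrative.

// Sketch of the XOR step shared by encryption and decryption: applying the same
// keystream block twice recovers the original message block.
final class XorBlockSketch {
    static byte[] xor(byte[] block, byte[] keystream) {
        if (block.length > keystream.length) {
            throw new IllegalArgumentException("block must not be longer than the keystream");
        }
        byte[] out = new byte[block.length];
        for (int i = 0; i < block.length; i++) {
            out[i] = (byte) (block[i] ^ keystream[i]);
        }
        return out;
    }
}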
Our next scheme requires a collision-resistant one-way hash function hash that takes as input a message M
and the polynomials Q and P of a reduced ideal (in standard form) and produces positive integer values not
exceeding q^⌊g/2⌋. The inputs of this hash function can also be thought of as bit strings, if we concatenate
M , Q, and P . The idea of using a hash function as described above was first presented in [27], where it was
used for a signature scheme based on a real quadratic function field of characteristic 2.
SIGNATURE SCHEME (imaginary case)
Precomputation:
All participants jointly
1. obtain (q; D; Q; P ) by performing I-PRECOMPUTATION.
2. agree on a cryptographically secure hash function hash : N × F_q[t]_g × F_q[t]_g → {1, ..., q^⌊g/2⌋}
(here, F_q[t]_m denotes the set of polynomials in F_q[t] of degree at most m).
Each participant
1. secretly generates an integer a, 0 < a < q^⌊g/2⌋,
2. computes (Q_a, P_a) ← I-EXP(Q, P, a),
3. makes (Q a ; P a ) the public key and a the secret key.
Signature Generation: To sign a message M, the signer (with secret key a)
1. secretly generates an integer r,
2. computes (Q_r, P_r) ← I-EXP(Q, P, r),
3. computes m ← hash(M, Q_r, P_r),
4. computes s ← r - m·a,
5. sends the signature (Q_r, P_r, s) along with the message M.
Signature Verification: To verify the signature (Q s) to the message M , the verifier
2. computes (Q
3. looks up the senders public key (Q a ; P a ),
4. computes (Q
5. computes (Q r
6. accepts the signature if and only if Q_r' = Q_r and P_r' = P_r.
To show that the verification procedure is correct, we first observe that if ( ~
P) is any reduced ideal, then
P ) is the reduced representative of the ideal class that is the inverse in the class group C of the ideal
class of ( ~
is clear that ( ~
reduced and in standard form, and multiplying ( ~
using algorithm MULT gives as result the ideal O = (1; 0).
Now we have (Q
To forge a signature, an adversary needs to generate a reduced ideal and an integer s such
that (Q; P It is necessary to use the hash function to prevent the following
attack. Suppose we replaced by a message block m where . Then an
adversary can simply pick a random positive integer s and compute the reduced ideal (Q r ; P r ) equivalent to
Such a forged signature (Q would always be accepted by the verifier. The hash
function forces the signer to generate the reduced ideal (Q r ; P r ) before computing s, rather than making it
possible to choose s first and then generate a "fitting" reduced ideal (Q r ; P r ).
A signature consists of the polynomials Q_r and P_r, each of degree at most g, and the integer s, so the transmission of a signature requires approximately 3g log q bits of information. Signatures can be somewhat shortened by imposing a smaller upper bound on a and m.
We now continue with cryptographic schemes in the real setting. Some of the schemes require a participant to
generate a reduced principal ideal together with its distance. The easiest way to achieve this is to generate a
random nonnegative integer a of desired size and compute (Q a ; P a
The schemes also require the precomputation of d = ⌊√D⌋. This can be done using a Puiseux diagram (see [23]). Once again, there is a precomputation common to all schemes.
R-PRECOMPUTATION (precomputation, real case)
1. Generate an odd prime power q,
2. generate a random squarefree monic polynomial D 2 F q [t] of even degree,
3. compute d ← ⌊√D⌋,
4. publicize (q; D; d).
The protocol given below is a slight improvement over the versions given in [34] and [27] in that it eliminates
the need for including a reduced ideal in the set of public parameters.
KEY EXCHANGE PROTOCOL (real case)
Precomputation: Alice and Bob jointly obtain (q, D, d) by performing R-PRECOMPUTATION.
Protocol:
1. Alice
1.1 secretly generates a reduced principal ideal a = (Q a ; P a ) with distance
1.2 transmits (Q a ; P a ) to Bob.
2. Bob
2.1 secretly generates a reduced principal ideal
2.2 transmits (Q b ; P b ) to Alice.
3. Alice computes (Q
4. Bob computes (Q
Both parties compute the reduced principal ideal (Q (a)ffi(b). They can use Q k and P k (or
any previously agreed upon portion thereof) as their key. As in the imaginary case, both parties transfer
approximately 2g log q bits of information.
PUBLIC KEY CRYPTOSYSTEM (real case)
Key Generation:
All participants jointly obtain (q, D, d) by performing R-PRECOMPUTATION.
Each participant
1. secretly generates a reduced principal ideal a = (Q a ; P a ) with distance
2. makes (Q a ; P a ) the public key and ffi (a) the secret key.
Encryption: To encrypt a message M , the sender (with secret key ffi (s))
1. looks up the recipient's public
2. computes (Q
3. repeat
3.1 generates the bit string x 2 N obtained by concatenating the coefficients in F q of the polynomial
3.2 removes the first block m with
3.3 sends the ciphertext m \Phi x (the "exclusive or"of m and x),
until all of M is encrypted.
Decryption: To decrypt a ciphertext C, the recipient (with secret key ffi (r))
1. looks up the sender's public key (Q
2. computes (Q
3. repeat
3.1 generates the bit string x 2 N obtained by concatenating the coefficients in F q of the polynomial
3.2 removes the first block c with c ! x from C,
3.3 computes m / c \Phi x,
until all of C is decrypted,
4. concatenates all the blocks m to obtain the plaintext M .
Both parties compute the reduced principal ideal (Q thus the same polynomial Q k
and bit string x. Again each block c of ciphertext is approximately g log q bits long.
Our next scheme is an improvement over the signature scheme of [27] in that it generates shorter signatures.
Once again, we use a cryptographically secure hash function similar to the one used in the corresponding
scheme in imaginary fields (the signature scheme of [32] failed to take this into consideration).
Precomputation:
All participants jointly
1. obtain (q, D, d) by performing R-PRECOMPUTATION.
2. agree on a cryptographically secure hash function hash : N \Theta F q [t] g \Theta F q [t] g.
Each participant
1. generates a reduced principal ideal a = (Q a ; P a ) with distance
2. makes (Q a ; P a ) the public key and ffi (a) the secret key.
Signature Generation: To sign a message M , the signer (with secret key ffi (a))
1. secretly generates a reduced principal ideal r with distance
2. computes m /
3. computes s /
4. sends the signature (Q along with the message M .
Signature Verification: To verify the signature (Q s) to the message M , the verifier
2. computes (Q
3. looks up the sender's public key (Q a ; P a ),
4. computes (Q
5. if s 0 then
5.1 computes (Q
5.2 computes (Q r
5.3 accepts the signature if and only if Q r
else
5.4 computes (Q
5.6 accepts the signature if and only if Q b 0
Once again, we check the verification procedure. Write
first that s 0. Then we have
r
0 is the reduced principal ideal below
Now assume that s ! 0. Then
0 is the reduced principal ideal below
To forge a signature (Q r ; P r ; s), an opponent must generate a reduced principal ideal and an
integer s with the following properties. If s 0, then r is the reduced principal ideal below s+mffi(a). If s ! 0,
then b is the reduced principal ideal below s. The similarities in these two cases are best seen as follows.
Let (s; ffi ) be as in step 2 of the verification procedure. If s 0, write
so must be such that r has distance at most 2g below
\Gammag must be such that b has distance within g of r s.
so the transmission of a signature again requires at most
3g log q bits of information. The signatures in [27] are between 3g log q and 4g log q bits long. Once again,
signatures could be shortened by imposing tighter bounds on ffi (a) and m.
Before we discuss possible attacks on our cryptographic schemes, we explore the size of our underlying sets.
We need to ensure that the class group C (in the imaginary setting) and the set R of reduced principal ideals
(in the real setting) are sufficiently large.
The order of C is simply the ideal class number h_0. Consider now the real case and let m = |R|. Then m lies within a factor of g of the regulator R of the field, so the size of R is determined by the size of R (in the elliptic case, we even have m = R). Let h be the order of the Jacobian J of the curve C defining the function field K. Then h = h_0 in the imaginary case and h = R·h_0 in the real case (see [40]). It is well-known (see for example Theorem V.1.15, p. 166, of [39]) that h = L(1), where L(t) is the L-polynomial of K|k. Here, L(t) = (1 - α_1 t)(1 - α_2 t) · · · (1 - α_{2g} t), where each α_i is an algebraic integer of absolute value √q by the Hasse-Weil Theorem (see Theorem V.2.1, p. 169, of [39]). It follows that (√q - 1)^{2g} ≤ h ≤ (√q + 1)^{2g}, so h is roughly q^g.
This means that in an imaginary field, there are approximately q^g ideal classes in C. Analyzing the size of
R in the real case is slightly more complicated. To ensure a large regulator, we need to make h 0 as small
as possible. A strong heuristic argument ([34], also Section 3.4.1, pp. 107-111, of [37]), analogous to the Cohen-Lenstra heuristics in real quadratic number fields [14, 15], shows that the probability that the order of the odd part of the ideal class group exceeds x is 1/(2x) + O((log x)/x^2). In fact, in the elliptic case, there
is very strong numerical evidence that ideal class groups behave according to this heuristic [19, 20], and it
can be proved that for sufficiently large q, the probability that h At the same time, it is
easy to find real fields whose ideal class number is odd; for example, by a result of Zhang [41], it suffices to
choose D to be irreducible over k or the product of two odd-degree irreducibles in k[t]. Hence under these
choices, h 0 is small with high probability, and there are close to q g reduced principal ideals in K.
Hence to foil an exhaustive search attack, we should ensure that q g is sufficiently large. Considerations for
good choices of q and g are discussed below, but we point out that within these considerations, users can
take advantage of the following trade-off. For small q and large g, our complexity analysis in section 3 results
in very good performance in terms of field operations, but field arithmetic will dominate our computation
times. If g is small and q is large, then field arithmetic will be very fast, but the number of field operations
performed by our algorithms will increase. Thus, one could select q and g in such a way as to optimize
performance, while ensuring a sufficient level of security in our systems.
We now explore the possibility of breaking our schemes. We begin with the imaginary model. Here, the
relevant problem underlying all three schemes is the DLP in the class group C of K: given two ideals g and d with d equivalent to g^x for some x ∈ {0, 1, ..., |C| - 1}, find the discrete logarithm x. It is obvious that for any of
the schemes, there is a polynomial-time reduction from an algorithm for solving the DLP to an algorithm
for breaking the system. Since no other way of compromising any of the schemes is known, we focus our
attention on the difficulty of the DLP in C.
The ideal class group C is isomorphic to the Jacobian J , and the DLPs in C and J are polynomially
equivalent. We first observe that in certain cases, the DLP in the Jacobian of an elliptic or hyperelliptic
curve is reducible to the DLP in a finite field, in which case using (hyper)elliptic function fields represents no
advantage over using finite fields for the implementation of discrete logarithm based cryptosystems. More
exactly, the curve C should not be supersingular [25], and the largest prime divisor of h_0 should not divide q^k - 1 for any small k for which the DLP in F_{q^k} is feasible [18]. It is currently unknown whether such
reductions are possible in situations other than those cited above.
In [4], a probabilistic algorithm for computing discrete logarithms in J in the case where q is a prime is
given. This technique is subexponential of complexity exp(c √(log q^g · log log q^g)), where c > 0 is a constant, provided log q is small compared to g. The algorithm may be generalizable to odd prime
powers q, but seems infeasible in practice for sufficiently large parameters, and one could foil an attack based
on this method by choosing q to be large and the genus g to be small.
In general, a technique analogous to that of Pohlig-Hellman [31] can be used to compute discrete logarithms
in C. The complexity of this method is essentially of order √p, where p is the largest prime factor of h_0. This attack requires that h_0 be known. A technique described in [24] can be used to compute h_0 by generating the coefficients of the L-polynomial L(t); this method works particularly well for small g. Another algorithm for computing (among other quantities) h_0 given in [3] is polynomial in the size of q and exponential in g. Hence, while it might be feasible to determine h_0, particularly for small g, the Pohlig-Hellman attack is infeasible unless h_0 is smooth, i.e. has only small prime factors.
In the real setting, there are two problems that are relevant to possible attacks on our cryptoschemes. The
distance problem (DP) in R requires the computation of the distance of a reduced principal ideal. The DLP
in R is the problem of finding x (mod R), given reduced principal ideals g and d where d is the reduced
principal ideal below xffi(g). Both problems are equally difficult; hence, the problem of breaking any of our
schemes is polynomial-time reducible to either problem.
Proposition 5.1 There is a polynomial-time reduction from the DLP to the DP and vice versa.
Proof: Suppose first that we can solve any instance of the DLP. Let r be a reduced principal ideal. We wish
to find ffi (r). Let ffi g. Let r
0 be the ideal below (y
Suppose the call R-ADVANCE(Q r sequence a
reduced principal ideals which we store in memory. Then (3g 1. Now by (5.2):
hence
g. Using the
use our DLP algorithm to determine from the ideal r d) with distance
and the reduced principal ideal r
below yffi 2 the discrete logarithm y 2. Of these of our DLP
procedure, one will give a correct answer for y might give a wrong or meaningless
answer or no answer at all). We now have
us
can check which one of these candidates is the correct one by using the following simple technique. If l is a
candidate for ffi (r), compute (Q;
Assume now that we know how to compute distances and let g and d be reduced principal ideals such that
d is the reduced principal ideal below xffi(g) for some x 2 N 0 . Our task is to find x (mod R). First compute
bounds uniquely determine the integer x. If ffi (g) g, then O, in which case
Clearly, our three systems are broken if there is a fast algorithm for the DLP or the DP. The difficulty of
the DLP was already discussed in [34], so we briefly repeat the arguments here. It was shown in [37, 38]
that there is a simple bijection from to the set f0; of multiples
of P (except P itself) that maps r 1 onto 0 and r i onto iP for 2 i m. Here, P is a point on a certain
elliptic curve over F q . Consequently, there is a polynomial-time reduction from the DLP in R to the DLP
in the group of points on the elliptic curve: given two points P and Q on the curve with Q = xP, find the integer x. If the characteristic of K is not equal to 3, then there is also a polynomial-time reduction in the opposite direction, so
the DLP for elliptic real function fields of characteristic not equal to 3 is polynomially equivalent to the DLP
for elliptic curves over a finite field. Since the best known algorithm for computing discrete logarithms on an
elliptic curve over a finite field F q has complexity of order p q, provided the curve is not supersingular, we
require at this point exponential time to compute discrete logarithms in the set of reduced principal ideals
of an elliptic function field.
For hyperelliptic real fields, there is no equivalence of the type discussed above. Here, the best known
general algorithms for computing both discrete logarithms and the regulator R of the field are of complexity O(q^{(2g-1)/5}) (see Theorem 2.2.33, p. 78, of [37]). If log q ≤ 2g, i.e. log q is small compared to g, then discrete logarithms, including the regulator, can be computed probabilistically in subexponential time exp(c √(log q^g · log log q^g)), where c > 0 is a constant ([26], Theorem 6.3.2, p. 203, of [37]). The algorithm does
not appear feasible in practice; nevertheless, to be safe, one might again wish to choose q to be large relative
to g. The computations in [34] show that the elliptic case performed best computationally; for a 50
digit prime q, a call of R-EXP with a 50 digit exponent required 3.76 seconds on a Silicon Graphics Challenge
workstation, and further optimization of this implementation will undoubtedly produce faster running times.
A Pohlig-Hellman-like technique for computing discrete logarithms in a real quadratic function field of
characteristic 2 described in considerable detail in [27] can easily be adapted to work in real fields of odd
characteristic. The algorithm requires knowledge of the regulator R, and as usual, its running time is
essentially the square root of the largest prime factor of R. Once again, this method does not pose a threat
to our cryptographic schemes at this time if q g is sufficiently large (100 decimal digits seems more than
sufficient with current computer technology) and R is not smooth.
Thus, if the parameters are chosen with some care, the fastest currently known methods for breaking our
schemes are all exponential. This is in contrast to systems based on discrete logarithms in finite fields where
the DLP is subexponential [2], as well as the corresponding systems in quadratic number fields (both real and
imaginary), where the relevant DLPs can also be solved in subexponential time [22, 1]. Thus, our systems
might well be more secure. Our real key exchange protocol is also significantly faster than the corresponding
scheme in real quadratic number fields [11, 33] (see our computations in [34]), although we have no data
available as to how our systems would perform relative to elliptic curve systems such as [5].
Unfortunately, in some instances, more information needs to be transmitted than in the original Diffie-Hellman
and ElGamal systems. Let l be the size of the underlying set, i.e. l = p for the original Diffie-Hellman and ElGamal schemes over a finite field F_p and l ≈ q^g for our schemes. Diffie-Hellman keys
require log l bits of transmission, whereas our keys are twice as long. Similarly, ElGamal signatures have
size 2 log l, while our signatures are up to 3 log l bits long. However, as mentioned before, they can be made
shorter, say 2 log l bits as well, if we reduce our upper bound on our parameters from q^{g/2} to q^{g/4}. Even
with these smaller quantities, we consider the schemes secure.
--R
Ein Algorithmus zur Berechnung der Klassenzahl und des Regulators reellquadratischer Ord- nungen
A subexponential algorithm for discrete logarithms over all finite fields.
Counting rational points on curves and Abelian varieties over finite fields.
A subexponential algorithm for discrete logarithms over the rational subgroup of the Jacobians of large genus hyperelliptic curves over finite fields.
Quadratische Korper im Gebiete der hoheren Kongruenzen I
Cryptographic protocols based on real-quadratic A-fields (ex- tended abstract)
Cryptographic protocols based on discrete logarithms in real-quadratic orders
Computing in the Jacobian of a hyperelliptic curve.
A Course in Computational Algebraic Number Theory.
Heuristics on class groups.
Heuristics on class groups of number fields.
New directions in cryptography.
A public-key cryptosystem and a and a signature scheme based on discrete logarithms
A remark concerning m-divisibility and the discrete logarithm in the divisor class group of curves
A special case of Cohen-Lenstra heuristics in function fields
Class group frequencies of real quadratic function fields: the degree 4 case.
A rigorous subexponential algorithm for computation of class group.
Theorie der Algebraischen Funktionen einer Veranderlichen.
Reducing elliptic curve logarithms to logarithms in a finite field.
Computing discrete logarithms in real quadratic congruence function fields of large genus.
Discrete Logarithm based cryptosystems in quadratic function fields of characteristic 2.
National Institute for Standards and Technology
Real and Imaginary Quadratic Representations of Hyperelliptic Function Fields.
Comparing real and imaginary arithmetics for divisor class groups of hyperelliptic curves.
An improved algorithm for computing logarithms over GF (p) and its cryptographic significance.
Cryptography in real quadratic congruence function fields.
A key exchange protocol using real quadratic fields.
The infrastructure of a real quadratic field and its applications.
Baby step-Giant step-Verfahren in reell-quadratischen Kongruenzfunktionenkorpern mit Charakteristik ungleich 2
Algorithmen in reell-quadratischen Kongruenzfunktionenkorpern
Equivalences between elliptic curves and real quadratic congruence function fields.
Algebraic Function Fields and Codes.
Artins Theorie der quadratischen Kongruenzfunktionenkorper und ihre Anwendung auf die Berechnung der Einheiten- und Klassengruppen
Ambiguous classes and 2-rank of class groups of quadratic function fields
--TR
A key-exchange system based on imaginary quadratic fields
Hyperelliptic cryptosystems
A key exchange system based on real quadratic fields
A remark concerning <italic>m</italic>-divisibility and the discrete logarithm in the divisor class group of curves
A course in computational algebraic number theory
Key-Exchange in Real Quadratic Congruence Function Fields
Discrete Logarithm Based Cryptosystems in Quadratic Function Fields of Characteristic 2
Computing discrete logarithms in real quadratic congruence function fields of large genus
Real and imaginary quadratic representations of hyperelliptic function fields
Class group frequencies of real quadratic function fields
Cryptographic Protocols Based on Discrete Logarithms in Real-quadratic Orders
Cryptographic Protocols Based on Real-Quadratic A-fields
A subexponential algorithm for discrete logarithms over the rational subgroup of the jacobians of large genus hyperelliptic curves over finite fields
Counting Rational Points on Curves and Abelian Varieties over Finite Fields
Comparing Real and Imaginary Arithmetics for Divisor Class Groups of Hyperelliptic Curves
--CTR
Ian F. Blake , Theo Garefalakis, On the complexity of the discrete logarithm and Diffie-Hellman problems, Journal of Complexity, v.20 n.2-3, p.148-170, April/June 2004 | ElGamal signature scheme;quadratic function field;public key cryptosystem;diffie-hellman key exchange protocol;discrete logarithm |
375420 | Slicing concurrent Java programs. | Program slicing is an important approach to testing, understanding and maintaining programs. The paper presents a slicing algorithm for concurrent Java programs. Because the execution process of concurrent programs is unpredictable, there are many problems to be solved when slicing. To slice concurrent Java programs, we present concurrent control flow graphs and concurrent program dependence graphs to represent concurrent Java programs. Based on these models, we design an efficient static slicing algorithm for concurrent Java programs. The algorithm may compute more precise slices than the previous approaches we know of. | Introduction
Java is a new object-oriented programming language
and has achieved widespread acceptance because it emphasizes
portability. Java has multithreading capabilities
for concurrent programming. To provide synchronization
between asynchronously running threads, the
Java language and runtime system uses monitors. Because
of the nondeterministic behaviors of concurrent
Java programs, predicting, understanding, and debugging
a concurrent Java program is more difficult than a
sequential object-oriented program. As concurrent Java
applications are going to be accumulated, the development
of techniques and tools to support understanding,
debugging, testing, and maintenance of concurrent Java
software will become an important issue.
Program slicing, originally introduced by Weiser [19],
is a decomposition technique which extracts the elements
of a program related to a particular computation.
A program slice consists of those parts of a program that
may directly or indirectly affect the values computed at
some program point of interest, referred to as a slicing
criterion. The task to compute program slices is
called program slicing. To understand the basic idea of
program slicing, consider a simple example in Figure 1
which shows: (a) a program fragment and (b) its slice
with respect to the slicing criterion (Total,14). The slice
has only those statements in the program that might
affect the value of variable Total at line 14. The lines
represented by small rectangles are statements that have
been sliced away.
Program slicing has been studied primarily in the
context of procedural programming languages (for a detailed
survey, see [17]). In such languages, slicing is typically
performed by using a control flow graph or a dependence
graph [6, 11, 8, 15]. Program slicing has many
[Figure 1 shows two panels: (a) a small program fragment whose recoverable statements include 3 Total := 0.0;, 5 if X <= 1 then, 6 Sum := Y;, 7 else, 9 read(Z);, and 14 end, and (b) the slice of that fragment on the criterion (Total,14); statements sliced away are drawn as small rectangles.]
Figure
1: A program fragment and its slice on criterion
(Total,14).
applications in software engineering activities such as
program understanding [7], debugging [1], testing [10],
maintenance [9], and reverse engineering [2].
As object-oriented software becomes popular, re-
cently, researchers have applied program slicing to
object-oriented programs to handle various object-oriented
features such as classes and objects, class in-
heritance, polymorphism, dynamic binding [4, 5, 12, 13,
14, 18], and concurrency [20]. However, existing slicing
algorithms for object-oriented programs can not be applied
to concurrent Java programs straightforwardly to
obtain correct slices due to specific features of Java concurrency
model. In order to slice concurrent Java programs
correctly, we must extend these slicing techniques
for adapting concurrent Java programs.
In this paper we present the multithreaded dependence
graph for concurrent Java programs on which
[Figure 2 is a Java listing, only partially preserved here. It defines four classes: Producer (class entry ce1; fields cubbyhole and number; constructor entry me4 for public Producer(CubbyHole c, int number); run() entry te8; s9 int i=0 followed by a while (i<10) loop that puts values into the cubbyhole), Consumer (fields cubbyhole and number; run() entry te25; s27 int i=0; s28 while (i<10) loop that gets values from the cubbyhole), CubbyHole (class entry ce36; fields seq and available; method entry me39 for synchronized int get() with s40 while (available == false), the wait() call s41, s43 available = false, s44 notify(), and s45 return seq; method entry me47 for synchronized put(int value) with s48 while (available == true), the wait() call s49, s51 seq = value, s52 available = true, and s53 notify()), and ProducerConsumerTest (class entry ce56; main() entry me57, which creates new CubbyHole(), new Producer(c, 1), and new Consumer(c, 1) and starts the two threads).]
Figure
2: A concurrent Java program.
static slices of the programs can be computed efficiently.
The multithreaded dependence graph of a concurrent
Java program is composed of a collection of thread dependence
graphs each representing a single thread in the
program, and some special kinds of dependence arcs to
represent thread interactions between different threads.
Once a concurrent Java program is represented by its
multithreaded dependence graph, the slices of the program
can be computed by solving a vertex reachability
problem in the graph.
The rest of the paper is organized as follows. Section
briefly introduces the concurrency model of Java. Section
3 discusses some related work. Section 4 presents
the multithreaded dependence graph for concurrent
Java programs. Section 5 shows how to compute static
slices based on the graph. Concluding remarks are given
in Section 7.
Concurrency Model in Java
Java supports concurrent programming with threads
through the language and the runtime system. A
thread, which is a single sequential flow of control within a program, is similar to a sequential program in the sense that each thread also has a beginning, an execution sequence, and an end, and at any given time during the runtime of the thread there is a single point of execution. However, a thread itself is not a program; it cannot run on its own. Rather, it runs within a program. Programs that have multiple asynchronous threads are typically called multithreaded programs. Java provides a Thread class library that defines a set of operations on a thread, such as start(), stop(), join(), suspend(), and resume().
Java uses shared memory to support communication
among threads. Objects shared by two or more threads
are called condition variables, and the access on them
must be synchronized. The Java language and runtime
system support thread synchronization through the use
of monitors. In general, a monitor is associated with
a specific data item (a condition variable) and functions
as a lock on that data. When a thread holds the
monitor for some data item, other threads are locked
out and cannot inspect or modify the data. The code
segments within a program that access the same data
from within separate, concurrent threads are known as
critical sections. In the Java language, you may mark
critical sections in your program with the synchronized
keyword. Java provides some methods of the Object class, such as wait(), notify(), and notifyAll(), to support synchronization among different threads. Using these operations and mechanisms, threads can cooperate to complete a valid method sequence on a shared object.
Figure
2 shows a simple concurrent Java program
that implements the Producer-Consumer problem. The
program creates two threads Producer and Consumer.
The Producer generates an integer between 0 and 9
and stores it in a CubbyHole object. The Consumer
consumes all integers from the CubbyHole as quickly
as they become available. Threads Producer and
Consumer in this example share data through a common
CubbyHole object. However, to execute the program correctly, the following condition must be satisfied: the Producer cannot put a new integer into the CubbyHole unless the previously put integer has been extracted by the Consumer, and the Consumer must wait for the Producer to put a new integer when the CubbyHole is empty.
In order to satisfy the above condition, the activities
of the Producer and Consumer must be synchronized in
two ways. First, the two threads must not simultaneously
access the CubbyHole. A Java thread can handle
this through the use of monitor to lock an object as
described previously. Second, the two threads must do
some simple cooperation. That is, the Producer must
have some way to inform the Consumer that the value is
ready and the Consumer must have some way to inform
the Producer that the value has been extracted. This
can be done by using a collection of methods: wait()
for helping threads wait for a condition, and notify()
and notifyAll() for notifying other threads of when
that condition changes.
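To connect this discussion with the example of Figure 2, the following is a minimal sketch of the CubbyHole monitor that the Producer and Consumer threads share, written in the standard wait/notify style. Since the original listing is only partially preserved, the exact signatures and statement numbering of the figure are not reproduced here, and the bodies below should be read as an assumed reconstruction of the usual pattern rather than as the paper's listing.

// Sketch of the shared CubbyHole monitor: get() blocks while no value is
// available, put() blocks while one still is, and each side notifies the other.
class CubbyHole {
    private int seq;
    private boolean available = false;

    public synchronized int get() {
        while (available == false) {
            try { wait(); } catch (InterruptedException e) { }   // wait for the Producer to put a value
        }
        available = false;
        notify();                                                // wake a Producer blocked in put()
        return seq;
    }

    public synchronized void put(int value) {
        while (available == true) {
            try { wait(); } catch (InterruptedException e) { }   // wait for the Consumer to take the value
        }
        seq = value;
        available = true;
        notify();                                                // wake a Consumer blocked in get()
    }
}

In Figure 2, the Producer's run() method calls put() ten times and the Consumer's run() method calls get() ten times, each within a while (i<10) loop.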
3 Program Slicing for Object-Oriented
Programs
In this section, we review some related work on program
slicing which directly or indirectly influence our
work on slicing concurrent Java programs, and explain
why these slicing algorithms can not be applied to concurrent
Java programs straightforwardly.
Larsen and Harrold [13] proposed a static slicing algorithm
for sequential object-oriented programs. They
extended the system dependence graph (SDG) [11] which
was first proposed to handle interprocedural slicing
of sequential procedural programs to the case of sequential
object-oriented programs. Their SDG can be
used to represent many object-oriented features such
as classes and objects, polymorphism, and dynamic
binding. Since the SDG they compute for sequential
object-oriented programs belong to a class of SDGs defined
in [11], they can use the two-pass slicing algorithm
in [11] to compute slices of sequential object-oriented
programs. Chan and Yang [4] adopted a similar
way to extend the SDG for sequential procedural
programs [11] to sequential object-oriented programs,
and use the extended SDG for computing static slices
of sequential object-oriented programs. On the other
hand, Krishnaswamy [12] proposed another approach
to slicing sequential object-oriented programs. He uses
a dependence-based representation called the object-oriented
program dependency graph to represent sequential
object-oriented programs and compute polymorphic
slices of sequential object-oriented programs based on
the graph. Chen et al. [5] also extended the program dependence
graph to the object-oriented dependency graph
for modeling sequential object-oriented programs. Although
these representations can be used to represent
many features of sequential object-oriented programs,
they lack the ability to represent concurrency. There-
fore, the slicing algorithms based on these representations
can not compute static slices of a concurrent Java
program correctly.
Slicing object-oriented programs with concurrency issues
has also been considered. Zhao et al. [20] presented
a dependence-based representation called the system
dependence net to represent concurrent object-oriented
programs (especially Compositional C++ (CC++) programs
[3]). In CC++, synchronization between different
threads is realized by using a single-assignment vari-
able. Threads that share access to a single-assignment
variable can use that variable as a synchronization ele-
ment. Their system dependence net is a straightforward
extension of the SDG of Larsen and Harrold [13], and
therefore can be used to represent many object-oriented
features in a CC++ program. To handle concurrency
issues in CC++, they used an approach proposed by Cheng [6], which was originally used for representing concurrent procedural programs with single procedures.
However, their approach, when applied to concurrent
Java programs, has some problems due to the following
reason. The concurrency models of CC++ and Java
are essentially different. While Java supports monitors
and some low-level thread synchronization primi-
tives, CC++ uses a single-assignment variable mechanism
to realize thread synchronization. This difference
leads to different sets of concurrency constructs in both
languages, and therefore requires different techniques to
handle.
4 A Dependence Model for Concurrent
Java Programs
Generally, a concurrent Java program has a number
of threads each having its own control flow and
data flow. These information flows are not independent
because inter-thread synchronizations among multiple
control flows and inter-thread communications
among multiple data flows may exist in the program.
To represent concurrent Java programs, we present
a dependence-based representation called the multi-threaded
dependence graph. The multithreaded dependence
graph of a concurrent Java program is composed
of a collection of thread dependence graphs each representing
a single thread in the program, and some special
kinds of dependence arcs to represent thread interactions
between different threads. In this section, we show
how to construct the thread dependence graph for a single
thread and the multithreaded dependence graph for
a complete concurrent Java program.
4.1 Thread Dependence Graphs for Single
Threads
The thread dependence graph (TDG) is used to represent
a single thread in a concurrent Java program.
It is similar to the SDG presented by Larsen and Harrold
[13] for modeling a sequential object-oriented pro-
gram. Since execution behavior of a thread in a concurrent
Java program is similar to that of a sequential
object-oriented program. We can use the technique presented
by Larsen and Harrold for constructing the SDG
of sequential object-oriented programs to construct the
thread dependence graph. The detailed information for
building the SDG of a sequential object-oriented program
can be found in [13]. In the following we briefly
describe our construction method.
The TDG of a thread is an arc-classified digraph that
consists of a number of method dependence graphs each
representing a method that contributes to the implementation
of the thread, and some special kinds of dependence
arcs to represent direct dependencies between
a call and the called method and transitive interprocedural
data dependencies in the thread. Each TDG has
a unique vertex called thread entry vertex to represent
the entry into the thread.
The method dependence graph of a method is an arc-
classified digraph whose vertices represent statements
or control predicates of conditional branch statements
in the method, and arcs represent two types of depen-
dencies: control dependence and data dependence. Control
dependence represents control conditions on which
the execution of a statement or expression depends in
the method. Data dependence represents the data flow
between statements in the method. For each method dependence
graph, there is a unique vertex called method
entry vertex to represent the entry into the method. For
example, me39 and me47 in Figure 3 are method entry
vertices for methods get() and put().
[Figure 3 is a graph drawing of the two TDGs. Its legend distinguishes control dependence, data dependence, call dependence, and parameter dependence arcs, and lists the parameter vertices, e.g. f1_in: cubbyhole=cubbyhole_in, f1_out: cubbyhole_out=cubbyhole, f2_in: this.number=this.number_in, f2_out: this.number_out=this.number, f3_in: c=c_in, f5_in: seq=seq_in, f6_in: available=available_in, f6_out: available_out=available, f7_in: value=value_in, a1_in: value_in=i, a2_in: c_in=c, a4_out: cubbyhole=cubbyhole_out, a5_in: this.number_in=this.number, a5_out: this.number=this.number_out.]
Figure
3: The TDGs for threads Producer and Consumer.
In order to model parameter passing between methods
in a thread, each method dependence graph also
includes formal parameter vertices and actual parameter
vertices. At each method entry there is a formal-in
vertex for each formal parameter of the method and a
formal-out vertex for each formal parameter that may
be modified by the method. At each call site in the
method, a call vertex is created for connecting the called
method, and there is an actual-in vertex for each actual
parameter and an actual-out vertex for each actual parameter
that may be modified by the called method.
Each formal parameter vertex is control-dependent on
the method entry vertex, and each actual parameter
vertex is control-dependent on the call vertex.
Some special kinds of dependence arcs are created for
combining method dependence graphs for all methods
in a thread to form the whole TDG of the thread.
ffl A call dependence arc represents call relationships
between a call method and the called method, and
is created from the call site of a method to the
entry vertex of the called method.
ffl A parameter-in dependence arc represents parameter
passing between actual parameters and formal
input parameter (only if the formal parameter is
at all used by the called method).
ffl A parameter-out dependence arc represents parameter
passing between formal output parameters
and actual parameters (only if the formal parameter
is at all defined by the called method). In
addition, for methods, parameter-out dependence
arcs also represent the data flow of the return value
between the method exit and the call site.
Figure
3 shows two TDGs for threads Producer and
Consumer. Each TDG has an entry vertex that corresponds
to the first statement in its run() method. For
example, in Figure 3 the entry vertex of the TDG for
thread Producer is te8, and the entry vertex of the
TDG for thread Consumer is te25.
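As a data structure, a TDG (and, in the next subsection, the MDG) can be encoded as a directed graph whose arcs carry their dependence kind. The sketch below shows one such encoding; it is an illustration chosen here, not the paper's implementation.

import java.util.ArrayList;
import java.util.List;

// One possible encoding of the dependence graphs: labelled vertices for statements,
// predicates, entry vertices and parameter vertices, and arcs tagged by kind.
final class DependenceGraphSketch {
    enum ArcKind {
        CONTROL, DATA, CALL, PARAM_IN, PARAM_OUT,   // intra-thread kinds (TDG)
        SYNCHRONIZATION, COMMUNICATION, START       // inter-thread kinds (MDG)
    }

    static final class Vertex {
        final String label;                         // e.g. "te8", "s41", "me47"
        final List<Arc> in = new ArrayList<>();
        final List<Arc> out = new ArrayList<>();
        Vertex(String label) { this.label = label; }
    }

    static final class Arc {
        final Vertex from, to;
        final ArcKind kind;
        Arc(Vertex from, Vertex to, ArcKind kind) { this.from = from; this.to = to; this.kind = kind; }
    }

    static Arc connect(Vertex from, Vertex to, ArcKind kind) {
        Arc a = new Arc(from, to, kind);
        from.out.add(a);
        to.in.add(a);
        return a;
    }
}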
4.2 Multithreaded Dependence Graphs for
Concurrent Java Programs
The multithreaded dependence graph (MDG) of a concurrent
Java program is an arc-classified digraph which
consists of a collection of TDGs each representing a single
thread, and some special kinds of dependence arcs
to model thread interactions between different threads
in the program. There is an entry vertex for the MDG
representing the start entry into the program, and a
method dependence graph constructed for the main()
method.
To capture the synchronization between thread synchronization
statements and communication between
shared objects in different threads, we define some special
kinds of dependence arcs in the MDG.
4.2.1 Synchronization Dependencies
We use synchronization dependence to capture dependence
relationships between different threads due to
inter-thread synchronization.
ffl Informally, a statement u in one thread is
[Figure 4 is a graph drawing of the whole MDG. In addition to the arc kinds and parameter vertices of Figure 3, its legend shows synchronization dependence arcs and communication dependence arcs between the threads Producer and Consumer, and it marks the slicing criterion (s45, seq); the vertices belonging to the backward slice for that criterion are shaded.]
Figure
4: The MDG of a concurrent Java program in Figure 2.
synchronization-dependent on a statement v in another
thread if the start or termination of the execution
of u directly determinates the start or termination
of the execution of v through an inter-thread
synchronization.
In Java synchronization dependencies among different
threads may be caused in several ways. We show
how to create synchronization dependence arc for each
of them.
(1) Wait-notify Relationships
A synchronization can be realized by using wait()
and notify()/notifyAll() method calls in different threads. In such a case, a synchronization dependence arc is created from a vertex u to a vertex v if u denotes a notify() or notifyAll() call in thread t1 and v denotes a wait() call in thread t2 on some shared object o, where t1 and t2 are different threads. A special case is that more than one thread is waiting for the notification from a thread t. In that case, we create a synchronization dependence arc from the vertex denoting the notify() call of t to each vertex denoting a wait() call in the other threads.
For example, in the program of Figure 2, methods put() and get() use Java Object's notify() and wait() methods to coordinate their activities. This means that there exist synchronization dependencies between the wait() call in Producer and the notify() call in Consumer, and between the notify() call in Producer and the wait() call in Consumer. So we can create synchronization dependence arcs between s53 and s41, and between s44 and s49, as shown in Figure 4.
(2) Stop-join Relationships
Another case that may cause inter-thread synchronization
is the stop-join relationship, that is, a thread
calling the join() method of another thread may proceed
only after this target thread terminates. For such a
case, a synchronization dependence arc is created from
a vertex u to a vertex v if u denotes the last statement
in thread t1 and v denotes a join() call in thread t2,
where threads t1 and t2 are different.
4.2.2 Communication Dependencies
We use communication dependence to capture dependence
relationships between different threads due to
inter-thread communication.
ffl Informally a statement u in one thread is directly
communication-dependent on a statement v in another
thread if the value of a variable computed
at u has direct influence on the value of a variable
computed at v through an inter-thread communication
Java uses shared memory to support communication
among threads. Communications may occur when two
parallel executed threads exchange their data via shared
variables. In such a case, a communication dependence
arc is created from a vertex u to a vertex v if u denotes a
statement s 1 in thread t1 and v denotes a statement s 2
in thread t2 for some thread object o, where s 1 and s 2
shares a common variable and t1 and t2 are different. A
special case is that there is more than one thread waiting
for the notification from some thread t, and there is an
attribute a shared by these threads as a communication
element. In such a case, we create communication dependence
arcs from each statement containing variable
a of the threads to the statement containing variable a
in thread t respectively.
For example, in the program of Figure 2, methods put() and get() use Java Object's notify() and wait() methods to coordinate their activities. In this way, each seq placed in the CubbyHole by the Producer is extracted once and only once by the Consumer. By analyzing the source code, we know that there exists an inter-thread communication between statement s51 in thread Producer and statement s45 in Consumer, which share variable seq. Similarly, inter-thread communications may also occur between statements s52 and s40 and between s43 and s48 due to the shared variable available. As a result, communication dependence arcs can be created from s52 to s40, s51 to s45, and s43 to s48, as shown in Figure 4.
4.2.3 Constructing the MDG
In Java, any program begins execution with main()
method. The thread of execution of the main method
is the only thread that is running when the program is
started. Execution of all other threads is started by calling
their start() methods, which begins execution with
their corresponding run() methods. The construction
of the MDG for a complete concurrent Java program can
be done by combining the TDGs for all threads in the
program at synchronization and communication points
by adding synchronization and communication dependence
arcs between these points. For this purpose, we
create an entry vertex for the MDG that represents the
entry into the program, and construct a method dependence
graph for the main() method. Moreover, a start
arc is created from each start() method call in the
main() method to the corresponding thread entry ver-
tex. Finally, synchronization and communication dependence
arcs are created between statements related
to thread interaction in different threads. Note that in
this paper, since we focus on concurrency issues in Java,
many sequential object-oriented features that may also
exist in a concurrent Java program are not discussed.
However, how to represent these features in sequential
object-oriented programs using dependence graphs has
already been discussed by some researchers [4, 5, 12, 13].
Their techniques can be directly integrated into our
MDG for concurrent Java programs. Figure 4 shows
the MDG for the program in Figure 2. It consists of two
TDGs for threads Producer and Consumer, and some
additional synchronization and communication dependence
arc to model synchronization and communication
between Producer and Consumer.
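As a deliberately simplified illustration of this combination step, the helper below records the three kinds of inter-thread arcs named above once the analysis has identified the relevant vertex pairs; the names and the string-based encoding are assumptions for illustration, not the paper's implementation.

import java.util.ArrayList;
import java.util.List;

// The MDG is the union of the per-thread TDGs and the method dependence graph of
// main(), plus start, synchronization and communication arcs between threads.
// Vertices are referred to here only by their labels (e.g. "s53", "s41").
final class MdgConstructionSketch {
    static final class Arc {
        final String from, to, kind;
        Arc(String from, String to, String kind) { this.from = from; this.to = to; this.kind = kind; }
    }

    static List<Arc> interThreadArcs(List<String[]> startCallToThreadEntry,
                                     List<String[]> notifyToWait,
                                     List<String[]> sharedVarDefToUse) {
        List<Arc> arcs = new ArrayList<>();
        for (String[] p : startCallToThreadEntry) arcs.add(new Arc(p[0], p[1], "start"));
        for (String[] p : notifyToWait)           arcs.add(new Arc(p[0], p[1], "synchronization"));
        for (String[] p : sharedVarDefToUse)      arcs.add(new Arc(p[0], p[1], "communication"));
        return arcs;
    }
}

For the program of Figure 2, the synchronization pairs would include (s53, s41) and (s44, s49), and the communication pairs (s52, s40), (s51, s45), and (s43, s48).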
5 Slicing Concurrent Java Programs
In this section, we show how to compute static slices
of a concurrent Java program. We focus on two types
of slicing problems: slicing a single thread based on the
TDG and slicing the whole program based on the MDG.
5.1 Slicing a Single Thread
In addition to slicing a complete concurrent Java program, we sometimes need a slice of a single thread of the program in order to analyze that thread independently, in which case thread interactions can be ignored. Such a slice can be computed based on its TDG. We define
some slicing notions of a single thread as follows.
A static slicing criterion for a thread of a concurrent
Java program is a tuple (s; v), where s is a statement
in the thread and v is a variable used at s, or a method
call called at s. A static slice of a thread on a given
static slicing criterion (s; v) consists of all statements in
the thread that possibly affect the value of the variable
v at s or the value returned by the method call v at s.
As we mentioned previously, the TDG for a thread is
similar to the SDG for a sequential object-oriented program
[13]. Therefore, we can use the two-pass slicing
algorithm in [11] to compute static slices of the thread
based on its TDG. In the first phase, the algorithm
traverses backward along all arcs except parameter-out
arcs, and set marks to those vertices reached in the
TDG. In the second phase, the algorithm traverses backward
from all vertices having marks during the first
phase along all arcs except call and parameter-in arcs,
and sets marks to reached vertices in the TDG. The
slice is the union of the vertices in the TDG that have
marked during the first and second phases.
Similarly, we can also apply the forward slicing algorithm
[11] to the TDG to compute forward slices of a
thread.
5.2 Slicing a Complete Program
Slicing a complete concurrent Java program can also use the two-pass slicing algorithm proposed in [11], with the MDG as the base for computing static slices of the program. This is because the MDG for a concurrent Java program can also be regarded as an extension of the SDG for a sequential object-oriented program [13]. Whereas slicing a single thread does not involve thread interactions, slicing a complete concurrent Java program definitely involves thread interactions due to synchronization and communication dependencies between different threads. In the following, we first define some notions for static slicing of a concurrent Java program, and then give our slicing algorithm, which is based on [11].
A static slicing criterion for a concurrent Java program
is a tuple (s; v), where s is a statement in the
program and v is a variable used at s, or a method
call called at s. A static slice of a concurrent Java program
on a given static slicing criterion (s; v) consists of
all statements in the program that possibly affect the
value of the variable v at s or the value returned by the
method call v at s.
The two-pass slicing algorithm based on MDG can
be described as follows. In the first step, the algorithm
traverses backward along all arcs except parameter-out
arcs, and set marks to those vertices reached in the
MDG. In the second step, the algorithm traverses backward
from all vertices having marks during the first step
along all arcs except call and parameter-in arcs, and sets
marks to reached vertices in the MDG. The slice is the union of the vertices of the MDG that have marks after the first and second steps. Figure 4 shows a backward slice
which is represented in shaded vertices and computed
with respect to the slicing criterion (s45, seq).
Similarly, we can also apply the forward slicing algorithm
[11] to the MDG to compute forward slices of
concurrent Java programs.
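To make the two-pass traversal concrete, the sketch below implements the phase rules described above on a small arc-labelled graph: the first pass walks backwards over every arc except parameter-out arcs, and the second pass restarts from everything marked so far and walks backwards over every arc except call and parameter-in arcs. The encoding and names are illustrative, not the paper's implementation.

import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Illustrative two-pass backward slicing over an arc-labelled dependence graph.
final class TwoPassSlicerSketch {
    enum Kind { CONTROL, DATA, CALL, PARAM_IN, PARAM_OUT, SYNC, COMM, START }

    static final class Arc {
        final String from, to; final Kind kind;
        Arc(String f, String t, Kind k) { from = f; to = t; kind = k; }
    }

    // arcs indexed by their target vertex, so that we can walk backwards
    private final Map<String, List<Arc>> incoming = new HashMap<>();

    void addArc(String from, String to, Kind kind) {
        incoming.computeIfAbsent(to, k -> new ArrayList<>()).add(new Arc(from, to, kind));
    }

    Set<String> backwardSlice(String criterionVertex) {
        // pass 1: do not ascend parameter-out arcs
        Set<String> marked = reach(Set.of(criterionVertex), Set.of(Kind.PARAM_OUT));
        // pass 2: from everything marked so far, do not ascend call or parameter-in arcs
        return reach(marked, Set.of(Kind.CALL, Kind.PARAM_IN));
    }

    private Set<String> reach(Set<String> start, Set<Kind> skip) {
        Set<String> marked = new HashSet<>(start);
        Deque<String> work = new ArrayDeque<>(start);
        while (!work.isEmpty()) {
            String v = work.pop();
            for (Arc a : incoming.getOrDefault(v, List.of())) {
                if (skip.contains(a.kind)) continue;
                if (marked.add(a.from)) work.push(a.from);
            }
        }
        return marked;
    }
}

A forward slice can be computed analogously by indexing arcs by their source vertex and walking over outgoing arcs.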
In addition to computing static slices, the MDG is
also useful for computing dynamic slices of a concurrent
Java program. For example, we can use a slicing
algorithm, similar to [1, 6], to compute dynamic slices
of a concurrent Java program based on the corresponding
static slices and execution history information of the
program.
Program slicing is useful in program understanding.
For example, when we attempt to understand the behavior
of a concurrent Java program, we usually want
to know which statements might affect a statement of
interest, and which statements might be affected by the
execution of a statement of interest in the program.
These requirements can be satisfied by slicing the program using the MDG-based backward and forward slicing algorithms introduced in this paper.
6 Cost of Constructing the MDG
The size of the MDG is critical for applying it to
the practical development environment for concurrent
Java programs. In this section we try to predict the
size of the MDG based on the work of Larsen and Harrold
[13] who give an estimate of the size of the SDG
for a sequential object-oriented program. Since each
TDG in an MDG is similar to the SDG of a sequential
object-oriented program, we can apply their approximation
here to estimate the size of the TDG for a single
thread in a concurrent Java program. The whole cost
of the MDG for the program can be got by combining
the sizes of all TDGs in the program.
Table
1 lists the variables that contribute to the size
of a TDG. We give a bound on the number of parameter vertices for any method (ParamVertices(m)), and use this bound to compute an upper bound on the size of a method (Size(m)). Based on Size(m) and the number of methods (Methods) in a single thread, we can compute the upper bound Size(TDG) on the number of vertices in a TDG, including all classes that contribute to the size of the thread:
Size(TDG) ≤ Methods * (Vertices + (CallSites * (2*ParamVertices)) + 2*ParamVertices).
Based on the above result of a single thread, we can
compute the upper bound on the number of vertices
Size(MDG) in an MDG for a complete concurrent Java
program including all threads.
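The combined bound itself is not spelled out at this point; assuming the construction of Section 4.2.3 (one TDG per thread plus the method dependence graph built for main(), with the inter-thread arcs adding no new vertices), the natural estimate is the following, where Size_main is a name used here only for the size of main()'s method dependence graph and Threads is the number of threads:

Size(MDG) ≤ Size_main + Σ_{i=1}^{Threads} Size(TDG_i).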
Note that Size(TDG) and Size(MDG) give only a
rough upper bound on the number of vertices in a TDG
and an MDG. In practice we believe that a TDG and
an MDG may be considerably more space efficient.
7 Concluding Remarks
In this paper we presented the multithreaded dependence
graph (MDG) on which static slices of concurrent
Java programs can be computed efficiently. The MDG
of a concurrent Java program consists of a collection
of thread dependence graphs each representing a single
thread in the program, and some special kinds of dependence
arcs to represent thread interactions. Once a
concurrent Java program is represented by its MDG, the
slices of the program can be computed by solving a vertex
reachability problem in the graph. Although here we
presented the approach in terms of Java, we believe that many aspects of our approach are more widely applicable and could be applied to slicing of programs in other languages with a monitor-like synchronization primitive, e.g., Ada95's protected types. Moreover, in order to develop a practical
slicing algorithm for concurrent Java programs, some
specific features in Java such as interfaces and packages
must be considered. In [22], we presented a technique
for constructing a dependence graph to represent interfaces
and packages in a sequential Java program. Such
a technique can be integrated directly into the MDG for
representing interfaces and packages in concurrent Java
programs.
The slicing technique introduced in this paper can
only handle the problem of statement slicing. For large-scale
software systems developed in Java, statement
slicing may not be efficient because the system usually
contains numerous components. For such a case,
a new slicing technique called architectural slicing [21]
can be used to perform slicing at the architectural level
of the system. In contrast to statement slicing, architectural
slicing can provide knowledge about the high-level
structure of a software system [21]. We are considering
integrating architectural slicing into our statement
slicing framework to support slicing of large-scale software
systems developed in Java not only at the statement
level but also at the architectural level. We believe
that this approach can be helpful in understanding
Table 1. Parameters which contribute to the size of a TDG.
Vertices: Largest number of statements in a single method
Arcs: Largest number of arcs in a single method
Params: Largest number of formal parameters for any method
ClassVar: Largest number of class variables in a class
ObjectVar: Largest number of instance variables in a class
CallSites: Largest number of call sites in any method
TreeDepth: Depth of the inheritance tree, which determines the number of possible indirect call destinations
Methods: Number of methods
large-scale software systems developed in Java.
We are currently implementing a slicing tool using JavaCC
[16], a Java parser generator developed by Sun Microsystems,
to compute static slices of a concurrent Java
program based on its MDG.
Acknowledgements
The author would like to thank the anonymous referees
for their valuable suggestions and comments on
earlier drafts of the paper.
--R
"Debugging with Dynamic Slicing and Backtracking,"
"Program and Interface Slicing for Reverse Engineering,"
"The Compositional C++ Language Definition,"
"A Program Slicing System for Object-Oriented Programs,"
"Slic- ing Object-Oriented Programs,"
"Slicing Concurrent Programs - A Graph-Theoretical Approach,"
"Un- derstanding Function Behaviors through Program Slic- ing,"
"The Program Dependence Graph and Its Use in Optimization,"
"Using Program Slicing in Software Maintenance,"
"Program Slicing-Based Regression Testing Techniques,"
"Interprocedural Slicing Using Dependence Graphs,"
"Program Slicing: An Application of Object-Oriented Program Dependency Graphs,"
"Slicing Object-Oriented Software,"
"Debugging of Object-Oriented Software,"
"The Program Dependence Graph in a Software Development Envi- ronment,"
http://www.
"A Survey of Program Slicing Techniques,"
"Slic- ing Class Hierarchies in C++,"
"Program Slicing,"
"Static Slicing of Concurrent Object-Oriented Programs,"
" Applying Slicing Technique to Software Ar- chitectures,"
"Applying Program Dependence Analysis to Java Software,"
--TR
--CTR
Jens Krinke, Context-sensitive slicing of concurrent programs, ACM SIGSOFT Software Engineering Notes, v.28 n.5, September
Zhenqiang Chen , Baowen Xu , Jianjun Zhao, An overview of methods for dependence analysis of concurrent programs, ACM SIGPLAN Notices, v.37 n.8, August 2002
Zhenqiang Chen , Baowen Xu , Huiming Yu, Detecting concurrently executed pairs of statements using an adapted MHP algorithm, ACM SIGAda Ada Letters, v.XXI n.4, December 2001
Durga P. Mohapatra , Rajeev Kumar , Rajib Mall , D. S. Kumar , Mayank Bhasin, Distributed dynamic slicing of Java programs, Journal of Systems and Software, v.79 n.12, p.1661-1678, December, 2006
Mangala Gowri Nanda , S. Ramesh, Interprocedural slicing of multithreaded programs with applications to Java, ACM Transactions on Programming Languages and Systems (TOPLAS), v.28 n.6, p.1088-1144, November 2006
Baowen Xu , Zhenqiang Chen, Dependence analysis for recursive java programs, ACM SIGPLAN Notices, v.36 n.12, December 2001
Baowen Xu , Ju Qian , Xiaofang Zhang , Zhongqiang Wu , Lin Chen, A brief survey of program slicing, ACM SIGSOFT Software Engineering Notes, v.30 n.2, March 2005 | program slicing;concurrent program dependence graph;concurrent control flow graph |
375508 | Efficient Detection of Vacuity in Temporal Model Checking. | The ability to generate a counter-example is an important feature of model checking tools, because a counter-example provides information to the user in the case that the formula being checked is found to be non-valid. In this paper, we turn our attention to providing similar feedback to the user in the case that the formula is found to be valid, because valid formulas can hide real problems in the model. For instance, propositional logic formulas containing implications can suffer from antecedent failure, in which the formula is trivially valid because the pre-condition of the implication is not satisfiable. We call this vacuity, and extend the definition to cover other kinds of trivial validity. For non-vacuously valid formulas, we define an interesting witness as a non-trivial example of the validity of the formula. We formalize the notions of vacuity and interesting witness, and show how to detect vacuity and generate interesting witnesses in temporal model checking. Finally, we provide a practical solution for a useful subset of ACTL formulas. | Introduction
The ability to generate a counter-example is an important feature of model checking
tools, because a counter-example provides information to the user in the case that the
formula being checked is found to be non-valid. In this paper, we turn our attention
to providing similar feedback to the user in the case that the formula is found to be
valid. At first glance, such a goal may seem strange, because proving formulas valid is
the supposed goal of model checking. However, additional information regarding valid
formulas is indeed important, because a valid formula may hide real problems in the
model.
Several years of experience in practical formal verification of hardware at IBM
[BB+96] have shown us that during the first formal verification runs of a new hardware
design, typically 20% of formulas are found to be trivially valid, and that trivial validity
always points to a real problem in either the design or its specification or environment.
Of the formulas which are found to be non-trivially valid, examination of a non-trivial
example trace discovers a problem for approximately 10% of the formulas.
The problem of a trivially valid formula was first noted by Beatty and Bryant [BB94],
who termed it antecedent failure. Antecedent failure means that a formula is trivially
valid because the pre-condition (antecedent) of the formula is not satisfiable in the
model. If the validity of a formula is trivial, this must be indicated to the user. If it is not,
the usefulness of formal verification is compromised, since a trivially valid formula is
not intentionally part of a specification (and therefore indicates a problem in the design
or an error in the specification). For instance, consider the following formula:
AG(request → AX ack) (1)
In a model M in which a request is never made, i.e., M |= AG(¬request), Formula 1
is trivially valid.
Antecedent failure is an intuitively easy concept to grasp. However, the fact that it
depends on the use of a particular operator is disturbing. We would like to capture the
same problem in the equivalent formula:
AG(¬request ∨ AX ack) (2)
Because we are concerned with temporal logic, we would also like the notion of a
trivially valid formula to include a temporal aspect. For instance, consider the following formula:
AG(p → AX(q → AX r)) (3)
If p never occurs, then M |= AG(¬p), and thus the validity of Formula 3 in M is trivial by antecedent
failure. However, we would also like the notion of a trivially valid formula to cover the
case that, while q may occur, and thus M |= EF q, q never occurs at a next state of p,
and thus M |= Formula 3 trivially.
In addition, the notion of a trivially valid formula should capture other potential
problems, such as that illustrated by the following formula:
AG(request → A[¬data valid U write enable]) (4)
As with the previous examples, Formula 4 is trivially valid in a model in which a
request is never made. However, even in a model M in which M |= EF request, it is possible
that the validity of Formula 4 is trivial. If M |= AG(request → write enable),
then there are no states in which it is required that the sub-formula ¬data valid hold;
in other words, there is "nothing left for the model checker to check". In such a model,
the validity of Formula 4 is trivial. In this paper, we extend and formalize the notion
of trivial validity to these and other cases. We use the term vacuity for the extended
definition, and call a formula which suffers from vacuity a vacuously valid formula.
Trivial validity is usually an indication of a problem in the model (rather than
the specification). A related problem is a formula which is non-vacuously valid, but
which does not express the property that was intended by the user. In other words,
we would like to provide a way to discover errors in the formula, even when the
formula is non-vacuously valid. We confront this problem by formalizing the notion
of an interesting witness: a trace which shows a non-trivial example of the validity of
a formula. Examining a positive example provides some confidence that the formal
specification accurately reflects the intent of the user, one of the weak links in the
practical application of formal verification to hardware design.
As an example, consider Formula 3. An interesting witness to Formula 3 is a path
on which p occurs at some state s i , q occurs at state s i+1 , and r occurs at state s i+2 .
Note that simply negating the original formula will not provide a non-trivial example.
If we negate Formula 3, we get:
EF(p ∧ EX(q ∧ EX ¬r)) (5)
Obviously, since Formula 5 is the negation of Formula 3, Formula 5 is false if
Formula 3 is true. However, because Formula 5 is an existential formula, there is no
trace which can show it is false, and the counter-example mechanisms of [HBK93] and
of SMV [McM93, CG+95] will not generate a trace.
Negating the single operand of the AG operator in Formula 3 as follows:
AG(¬(p → AX(q → AX r))) (6)
will also not guarantee an interesting witness. For instance, a valid counter-example to
Formula 6 is a path to a state in which p does not occur. Once again, this is a trivial
positive example of the truth of the original Formula 3.
Our motivation is temporal model checking. However, the notions of vacuity and
interesting witness are not limited to temporal logics. Therefore, we will first define our
terms in general, and only then discuss vacuity detection and generation of interesting
witnesses in temporal model checking. Finally, we will show a practical solution for a
useful subset of ACTL formulas under temporal model checking.
The remainder of this paper is organized as follows. In Section 2 we define some
important temporal logics. In Section 3 we formalize the notion of vacuity, and show
how to efficiently detect vacuity using a model checker. In Section 4 we formalize the
notion of interesting witness and show how to generate interesting witnesses using a
model checker. In Section 5 we provide a practical solution for a useful subset of ACTL.
In Section 6 we compare our work with a previous version of our theory, and with related
work. In Section 7 we conclude.
2 Preliminaries
CTL* is a logic with the following syntax:
1. Every atomic proposition is a formula.
2. If f and g are formulas, then so are ¬f and f ∧ g.
3. If f is a formula, then Ef is also a formula.
4. If f and g are formulas, then fUg and Xf are also formulas.
Additional operators, such as ∨, →, F, G, and the path quantifier A, can be viewed as abbreviations of the above.
The semantics of a CTL* formula is defined with respect to a Kripke structure K.
A Kripke structure is a quadruple (S, S0, R, L), where S is a finite set of states, S0 ⊆ S
is a set of initial states, R ⊆ S × S is the transition relation, and L is the valuation,
a function mapping each state to the set of atomic propositions true in that state. We
require that there is at least one transition from every state.
A path π of a Kripke structure K is an infinite sequence of states π = π0, π1, π2, ...
such that R(πi, πi+1) holds for every i. Given a path π, we will denote by π+i the path
starting from the i-th state in π. More formally: π+i = πi, πi+1, πi+2, ....
The semantics of CTL* is then as follows:
- (K, π) |= p iff p ∈ L(π0), where p is an atomic proposition.
- (K, π) |= ¬f iff not (K, π) |= f.
- (K, π) |= f ∧ g iff (K, π) |= f and (K, π) |= g.
- (K, π) |= Ef iff there is a path π' starting from π0 such that (K, π') |= f.
- (K, π) |= Xf iff (K, π+1) |= f.
- (K, π) |= fUg iff there is an n ≥ 0 such that (K, π+n) |= g, and for all i such that 0 ≤ i < n, (K, π+i) |= f.
We say that K |= f iff for every path π in K, such that π0 ∈ S0, we have (K, π) |= f.
A CTL* formula is in normal form when the operator ¬ modifies only atomic
propositions.
ACTL* is a subset of CTL* in which the only path quantifier is A (when the formula
is in normal form).
CTL [CE81] is a subset of CTL* in which each temporal operator (U or X)
must be immediately preceded by a path quantifier (A or E).
ACTL [GL91] is a subset of CTL in which the only path quantifier is A (when the
formula is in normal form).
LTL [Pnu81] is a subset of CTL* in which there are no path quantifiers.
3 Vacuity
The intuitive notion of vacuity derives from that of propositional antecedent failure.
Propositional antecedent failure means that a formula is trivially valid because some pre-condition
is not satisfiable, where a pre-condition is the left-hand-side of an implication.
Another way to think of it is to say that the right-hand-side of the implication does not
affect the validity of the formula. This gives an intuitive extension of vacuity to any
operator: vacuity occurs when one of the operands does not affect the validity of the
formula. We use the notation '[ 0 ] to denote the formula obtained from ' by
replacing sub-formula with 0 .
Definition 1 (Affect). A sub-formula ψ of formula φ affects φ in model M if there is
a formula ψ' such that the truth values of φ and φ[ψ ← ψ'] are different in M.
Definition 2 (Vacuity). A formula φ is vacuous in model M if there is a sub-formula ψ
of φ such that ψ does not affect φ in M.
These definitions capture the intuitive notion of vacuity in a manner which is independent
of a particular logic. However, they are not very useful when it comes to vacuity
detection, because there are an infinite number of cases to check. In the remainder of
this section we will show sufficient conditions on logics such that only a finite and small
number of cases are required. We will first show that it is enough to check only a subset
of the sub-formulas. Then, we will define logics with polarity for which it is enough to
check the replacement of a sub-formula by either true or false.
3.1 Vacuity with Respect to a Minimal Set of Sub-formulas
In this section, we will show that vacuity can be checked by examining only a subset
of the sub-formulas. These will be the sub-formulas which are minimal with respect
to the sub-formula pre-order (denoted -). We assume that each sub-formula is unique.
That is, we consider two separate occurrences of the same sub-formula to be different
sub-formulas.
Lemma 3. If - ', and / does not affect ' in model M , then - does not affect
in M .
Proof. Assume / does not affect ' in M , but - does affect '. Then there is a formula
- 0 such that the truth value of '[- 0 in M is different than that of ' in M . Since
- / we get that
But the truth value of '[- 0 in M is different than
the truth value of ' in M . Thus the truth value of '[/ / in M is different than the
truth value of ' in M , which means that / affects the value of ' in M , contradicting
our assumption. ut
For the sequel, we will need the following definitions:
Definition 4 (Vacuity With Respect to a Sub-formula). Let ξ be a sub-formula of φ
(denoted ξ ≤ φ). Formula φ is ξ-vacuous in model M if ξ does not affect φ
in M.
Definition 5 (Vacuity With Respect to a Set of Sub-formulas). Let S be a set of
sub-formulas of formula φ (S ⊆ {ξ | ξ ≤ φ}). Formula φ is S-vacuous in model M if
there exists ξ ∈ S such that φ is ξ-vacuous in M.
Definition 6 (Minimal Sub-formulas). Let S be a set of sub-formulas. We define the
minimal sub-formulas of S as:
min(S) = {ψ ∈ S | there is no ξ ∈ S such that ξ ≠ ψ and ξ ≤ ψ}.
Theorem 7. ' is S-vacuous iff ' is min(S)-vacuous.
Proof. - (=)) If ' is S-vacuous in M , there is a - 2 S that does not affect '. Since
S is finite and - is a pre-order, there is a - 0 2 min(S) such that - 0 -. Using
Lemma 3, since - does not affect ' in M , - 0 does not affect ' in M either. This
means that ' is min(S)-vacuous in M .
- ((=) If ' is min(S)-vacuous in M , there is a - 2 min(S) ' S that does not
affect ' in M , and therefore ' is S-vacuous in M .
ut
It follows immediately that to check vacuity of ' it is enough to check for vacuity
with respect to only the minimal sub-formulas of '. We will now show that for logics
with polarity, it is enough to check the replacement of a sub-formula by either true or
false.
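A minimal sketch of computing min(S), assuming formulas are represented as nested tuples such as ('AG', ('AND', 'p', ('AX', 'q'))) and that the sub-formula pre-order is structural containment; both the representation and the helper below are illustrative assumptions, not part of the paper.

def is_subformula(xi, psi):
    # xi <= psi in the sub-formula pre-order: xi occurs inside psi.
    if xi == psi:
        return True
    if not isinstance(psi, tuple):
        return False
    return any(is_subformula(xi, operand) for operand in psi[1:])

def minimal_subformulas(S):
    # Keep psi only if no other member of S is a sub-formula of psi.
    return {psi for psi in S
            if not any(xi != psi and is_subformula(xi, psi) for xi in S)}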
3.2 Logics with Polarity
In this section we will define logics with polarity. First, we will need a notation with
which to denote all models M in which a formula φ is valid. We use the following
notation: [[φ]] = {M | M |= φ}.
We use the notation [[φ]]^c to denote the complement of [[φ]]. We will now define what
we mean by the polarity of an operand, then define operators with polarity, and finally
define logics with polarity.
Definition 8 (Polarity of an Operand). If σ is an n-ary operator in a logic, we
say that the i-th operand of σ has positive (negative) polarity if for every fixing of
the other operands φ1, ..., φi−1, φi+1, ..., φn, and two formulas ψ1, ψ2 such that [[ψ1]] ⊆ [[ψ2]],
we have that [[σ(φ1, ..., ψ1, ..., φn)]] ⊆ [[σ(φ1, ..., ψ2, ..., φn)]] (respectively, [[σ(φ1, ..., ψ2, ..., φn)]] ⊆ [[σ(φ1, ..., ψ1, ..., φn)]]).
We say that an operator has polarity if every one of its operands has some polarity
(positive or negative).
Definition 9 (Logic with Polarity). A logic with polarity is a logic in which every
operator has polarity.
For example, the standard Boolean logic with operators ∧, ∨, ¬ is a logic with
polarity, since for every two formulas ψ1, ψ2 such that [[ψ1]] ⊆ [[ψ2]], and every formula φ, we have
[[ψ1 ∧ φ]] ⊆ [[ψ2 ∧ φ]], [[ψ1 ∨ φ]] ⊆ [[ψ2 ∨ φ]], and [[¬ψ2]] ⊆ [[¬ψ1]].
This immediately implies that the operands of ∧ and ∨ have positive polarity, and the
single operand of ¬ has negative polarity.
An example of a logic which is not a logic with polarity is the standard Boolean
logic with the addition of the exclusive-or operator: a ⊕ b = (a ∧ ¬b) ∨ (¬a ∧ b).
If we set a = true, we get: true ⊕ b = ¬b.
But if we set a = false, we get: false ⊕ b = b.
In the first case the polarity of the second operand is negative, and in the second positive.
This means that ⊕ does not have polarity.
Claim 10. CTL* is a logic with polarity.
Proof. First, note that the set of models that a CTL formula satisfies is a subset
of f(K; -)j- is a path in the structure Kg. As we have already shown, the standard
Boolean operators - and : all have polarity.
We now show that the single operand of the path quantifier E has a positive polarity.
Given there is a path - 0
in K that starts at the same state that - does, and (K; - 0 ) This implies that
We proceed to prove that both operands of the U operator have positive polarity.
1. Let us fix the second operand of the U operator to be some . Given ' 1 and ' 2 ,
then there is an integer n, s.t.
- for all
Which proves that (K; -)
2. Let us fix the first operand of U to be some /, Given ' 1 and ' 2 , where [[' 1
if (K; -) there is an integer n, s.t.
- for all
which proves that (K; -)
Finally, we show that the single operand of the operator X has positive polarity.
Given
the assumption (K; - +1 ) meaning that (K; -) which concludes the
proof. ut
3.3 Vacuity Detection in Logics with Polarity
In this section we will show that in logics with polarity it is enough to check the
replacement of a sub-formula by either true or false.
First we define the polarity of a sub-formula, then we will present the main result of
this section.
Definition 11 (Polarity of a Sub-formula). Given a formula φ, we define the polarity
of sub-formulas of φ recursively:
- φ has positive polarity.
- If ψ = σ(ψ1, ..., ψn) is of positive (negative) polarity, then ψi has positive
polarity if the i-th operand of σ has a positive (negative) polarity, and ψi has negative
polarity otherwise.
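The recursive definition above can be sketched as follows; the nested-tuple formula representation, the use of object identity to distinguish occurrences, and the assumption that only the NOT operator has a negative operand are illustrative choices, not prescribed by the paper.

def polarity(phi, target, current='+'):
    # Returns '+' or '-' for the occurrence `target` inside `phi`
    # (occurrences are compared by identity, since two occurrences of
    # the same sub-formula are treated as distinct), or None if the
    # occurrence is not found in this branch.
    if phi is target:
        return current
    if not isinstance(phi, tuple):
        return None
    op = phi[0]
    flipped = '-' if current == '+' else '+'
    child_polarity = flipped if op == 'NOT' else current
    for operand in phi[1:]:
        result = polarity(operand, target, child_polarity)
        if result is not None:
            return result
    return None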
Lemma 12. In a logic with polarity, if - ', and- is witha positive(negative) polarity,
then if [[- 0
Proof. The proof proceeds by induction on the size of the formula '.
(base case:) ' is an atom, so - has positive polarity, and '[- 0
Therefore, if
(induction step:) then we have the same as in the
base case. Otherwise, we know that there is one i such that - i . There are two
cases:
1. The i-th operand of oe is of positive polarity. In this case, the polarity of - in
i is as it is in ', therefore, according to the induction hypothesis,
and since the i-th operand of oe is of positive polarity, then by
Definition 8 we have
2. The i-th operand of oe is of negative polarity. In this case, the polarity of - in
i is the opposite of it's polarity in '. Therefore, by the induction hypothesis
and since the i-th operand of oe is of negative polarity, we
have
ut
In [BB+97] we defined a subset of ACTL, and a set of important sub-formulas,
and proved that in order to detect vacuity with respect to this set it is enough to show
that M |= φ[ψ ← false], where ψ is the minimal sub-formula of all the important
sub-formulas (see Section 5). In [KV99], Kupferman and Vardi expand on this result by
showing that for CTL*, a formula φ is vacuous iff there is some minimal sub-formula
ψ of φ such that M satisfies φ[ψ ← true] iff M satisfies φ[ψ ← false]. We will now
prove a very similar result that holds for all logics with polarity. The proof is practically
the same as the one in [KV99]; we give it here for the sake of completeness.
We define the semantics of true and false as follows: [[true]] = {M | M is a model}
and [[false]] = ∅.
Theorem 13. Let ψ be a sub-formula of formula φ in a logic with polarity. Then, for
every model M the following are equivalent:
1. ψ does not affect φ in M.
2. M |= φ iff M |= φ[ψ ← X],
where X = false if M |= φ and ψ is of positive polarity, or M ⊭ φ and ψ is of
negative polarity. Otherwise X = true.
Proof.
does not affect ' in M . This means that for every / 0 , and specifically for
which concludes this
part of the proof.
- ((=) Note that for every / 0 we have:
Two cases:
1. If / is of positive polarity, then using Lemma 12, we get
then by the assumption M but by the
containment above, this implies that for every
meaning that / does not affect ' in M .
then by the assumption M 6j= '[/ / true]. By the same
argument as above we get that for every meaning that
/ does not affect ' in M .
2. If / is of negative polarity, then using Lemma 12, we get
then by the assumption M but by the containment
above, this implies that for every meaning that
/ does not affect ' in M .
then by the assumption M 6j= '[/ / false], but by the
containment above, this implies that for every
meaning that / does not affect ' in M .
ut
In Section 3.1 we showed that it is enough to check vacuity for ' with respect to a
subset of the sub-formulas '. In this section we showed that for logics with polarity, it
is enough to check the replacement of a sub-formula by either true or false. We now
combine these two results in the following corollary:
Corollary 14. In a logic with polarity, for a formula φ, a set S of sub-formulas of φ,
and every model M, the following are equivalent:
- φ is S-vacuous in M.
- There is ψ ∈ min(S) such that M |= φ iff M |= φ[ψ ← X],
where X = false if M |= φ (M ⊭ φ) and ψ is of positive (negative) polarity.
Otherwise, X = true.
This corollary gives us the ability to check vacuity of a formula in a logic with
polarity by checking a relatively small number of other formulas, each of them of
size not greater than that of φ. To be more precise, for S-vacuity, we need to:
1. Check φ.
2. For each sub-formula ψ ∈ min(S), check the new formula φ[ψ ← X]. The value
of X is either true or false, according to whether φ is valid or not, and the polarity
of ψ.
Formula φ is S-vacuous iff at least one of these formulas has the same truth value
as that of φ.
Since CTL* is a logic with polarity, we have shown the result of [KV99]: we can
use a CTL* model checker to check vacuity in complexity O(|φ| · C_M(|φ|)), where
C_M(n) is the complexity of checking a formula of size n in model M.
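The check just described can be phrased as the following sketch, which treats the model checker purely as a black box; model_check, substitute, and polarity are assumed interfaces rather than the API of any particular tool.

def s_vacuity(model, phi, min_S, model_check, substitute, polarity):
    # Returns the (possibly empty) list of minimal sub-formulas that do
    # not affect phi in the model; phi is S-vacuous iff the list is
    # non-empty.  One call checks phi itself, plus one call per member
    # of min(S), each on a formula no larger than phi.
    phi_holds = model_check(model, phi)
    unaffecting = []
    for psi in min_S:
        positive = polarity(phi, psi) == '+'
        X = False if phi_holds == positive else True
        if model_check(model, substitute(phi, psi, X)) == phi_holds:
            unaffecting.append(psi)
    return unaffecting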
4 Interesting Witnesses
The definition of vacuity refines the traditional distinction between valid and non-valid
formulas with respect to a model M . We now classify formulas as either non-valid,
vacuously valid, or non-vacuously valid. We would like to make the same refinement
in the method we use to distinguish between the classes. Traditionally, we show that
a formula is valid by means of a proof, and that a formula is non-valid by means of a
counter-example. We will now define an interesting witness, which is the means we will
use to show that a formula is non-vacuously valid. In this section we assume that the
formula in question ' is valid in model M .
To make our definitions clear, we use the model checking problem of propositional logic
as an example: the logic is the standard propositional logic on n Boolean variables. A
model M is some non-empty subset of the set of assignments to the n variables. We say
that M |= φ iff φ is true under all assignments in M. For example, if M is the set of all
assignments, then M |= φ iff φ is a tautology.
4.1 Pre-order and Counter-Examples
Before defining an interesting witness, we first formalize the notion of a counter-
example. We will require two things from a counter-example to a formula.
1. That its existence proves the non-validity of the formula.
2. That it is small.
The second requirement is natural, since the smaller the counter-example is, the
more useful it is to the user. Our approach is to define a pre-order on the set of models,
such that non-validity on a smaller model always proves non-validity of a larger one.
Then, we will require that a counter-example be a model which is minimal with respect
to this pre-order.
Definition 15 (The natural pre-order of a logic) Given a logic L, we define the
natural pre-order of the logic OE L on the set of models. M 0 OE L M iff for all ' 2 L we
have that M
The natural pre-order of propositional logic is containment.
Proof.
propositional formula that is valid for all assignments
in M , will also be valid for all assignments in M 0 .
, then there is at least one assignment
the n variables that is in M 0 and not in M . We define the following propositional
This formula is true on any assignment that is not equal to ff, and false on ff itself.
Therefore we have that M 0 6j=
ut
We can now define a counter-example:
Definition 17 (Counter-Example) In logic L, a model C is a counter-example to ' in
model M , if it satisfies the following conditions:
1. C OE L M .
2. C 6j= '.
3. C is minimal w.r.t. OE L among the models that satisfy properties 1 and 2.
It follows immediately from the definition that:
Claim 18. M ⊭ φ iff there exists a counter-example to φ in M.
We now return to our example of propositional logic, and show that counter-examples
are as we expected:
Claim 19. In propositional logic, if C is a counter-example to formula φ in model M,
then C is a model with one assignment.
Proof. If C is a counter-example, then C ⊭ φ; therefore there is an assignment α ∈ C
such that φ(α) is false. A = {α} is a model, A ⊆ C and A ⊭ φ. Since C is minimal
w.r.t. ⊆, we get that C = A, since there is no model A′ such that A′ ⊊ A (we required
that models are non-empty). □
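For the propositional example, satisfaction and counter-example extraction can be sketched as follows; the representation of assignments and the eval_formula helper are assumptions made only for illustration.

def holds(model, phi, eval_formula):
    # M |= phi iff phi is true under every assignment in M.
    return all(eval_formula(phi, assignment) for assignment in model)

def counter_example(model, phi, eval_formula):
    # A minimal refuting sub-model is a single falsifying assignment,
    # as Claim 19 states.  Returns None when M |= phi.
    for assignment in model:
        if not eval_formula(phi, assignment):
            return [assignment]
    return None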
4.2 Pre-orders and Counter-Examples in temporal logic
We have previously shown that for the case of propositional logic, Definition 17 captures
our intuitive notion of what a counter-example is. Since the motivation of this paper
is temporal logic, we would like to examine more closely the properties of a counter-example
in some important temporal logics.
- LTL: in [Pnu85] Pnueli has proved that the natural pre-order for LTL according to
Definition 15 is:
is a computation path in Mg.
In LTL, if M 6j= ', then there is a computation path - in M , such that f-g 6j= '.
Using the same arguments as in the proof of claim 19, we can show that a counter-example
to ' will always be a model with one computation path in it.
- CTL and CTL : Milner in [Mil71] has proved that for CTL and CTL , the natural
pre-order is:
. This means that CTL and CTL have only
trivial counter-examples that are the model itself. Indeed, the formula EF (p) cannot
be shown false by any model that has less behavior than the original, since we might
have removed states where p was true. Note that even if we did have some method
of specifying larger models as counter-examples, CTL would still be problematic.
The formula EF (p) -AG(q), cannot be proved false using neither a larger model,
nor a smaller one.
Using the same proof as in [Mil71], it can be shown that for
ACTL and ACTL , the natural pre-order is:
. For these logics, it is difficult to characterize
counter-examples. A model M always simulates a computation path - in it (- OE ACTL
meaning that computation paths may serve as counter-examples. For instance,
a counter-example to the formula AG(p) is a path on which :p holds at some state.
However, there are formulas and models for which a path cannot serve as a counter-
example. The formula AX(p)-AX(:p) cannot have a path as a counter-example,
since on any deterministic path it will be evaluated to be true. A counter-example
for this formula must be more complex. Buccafurri, Eiter, Gottlob and Leone have
addressed this problem in detail in [BEGL99].
4.3 Interesting Witnesses
In the case of a non-valid formula, a standard model checker provides a counter-example
to the user. If the formula is valid, and using our vacuity checking procedure we can
prove it non-vacuous, we would like to provide an interesting witness to the user, which
is an analog of a counter-example - it proves non-vacuity, while a counter-example
proves non-validity.
(Interesting Witness With Respect to a Sub-formula). In logic L, a
model W is a -interesting witness to ' in M , if it satisfies the following conditions:
1. W OE L M .
2. W j= ', and ' is not -vacuous in W .
3. W is minimal w.r.t OE L among the models that satisfy properties 1 and 2.
We now get an analogous claim to claim 18:
there exists a /-interesting witness W to ' in M iff ' is not
/-vacuous in M .
Proof.
' is not /-vacuous in W ,
there is a / 0 such that W 6j= '[/ / / 0 ]. Again, since W OE L M , M 6j= '[/ / / 0 ].
This means that ' is not /-vacuous in M .
((=) The set of models that are smaller than M , and ' is not /-vacuous in them
is non-empty, since M is such a model. Therefore any one of the minimal elements
in this set is a /-interesting witness to ' in M .
ut
So, under the assumption that ' is valid in M , an interesting witness proves the non-
vacuity of one sub-formula. Now we would like to have such proofs of more general
non-vacuity, for sets of sub-formulas (and in particular , for the set of all sub-formulas).
However, a single interesting witness will not always suffice.
Consider the formula in a model M such that M
non-vacuous there is no single example which can show non-vacuity. In order to show
p-non-vacuity, q must be set to 0, and in order to show q-non-vacuity, p must be set
to 0. But since M j= ', we cannot show an example in which both p and q are 0
simultaneously.
The naive solution would be to generate one interesting witness for every sub-
formula. However, an interesting witness to one sub-formula, may also be an interesting
witness to a different sub-formula. This is shown in the following proposition.
Proposition 22 Assume M is a /-interesting witness to ' in M , and
then W is also a interesting witness to ' in M .
Proof. Since W is a /-interesting witness to ', / affects ' in W , and according to
affects ' in W , meaning that W is a interesting witness to ' in M .
We shall now use Proposition 22 to get a more general result:
Corollary 23. If M j= ', and ' is not S-vacuous in M , then a set that has a /-
interesting witness for every / in min(S) also has a /-interesting witness for every /
in S.
4.4 Interesting Witness Generation in Logics with Polarity
In Section 3.3 we have shown that if our logic is a logic with polarity, then checking
vacuity is much easier than the general case. The same result holds for interesting
witness generation.
Lemma 24. In a logic with polarity L, if M is of
positive(negative) polarity, then the following are equivalent:
- ' is not -vacuous in C .
Proof.
we get that C since C 6j= '[ X],
' is not -vacuous in C.
we get that C ' is not -vacuous
in C, then using Theorem 13, we get that C 6j= '[ X].
ut
Theorem 25. In a logic with polarity L, if M j= ', and - ' is of positive(negative)
polarity in ', the following are equivalent:
- C is a counter-example to '[ X], Where
- C is a -interesting witness to ' in M .
The proof follows directly from Lemma 24, which proves that the two are equivalent,
but omits the requirement of minimality. Adding this requirement to both of them
obviously leaves them equivalent.
This theorem gives us the ability to easily generate interesting witnesses if we can
generate counter-examples to the formulas in the logic: a -interesting witness to ' in
M is really a counter-example to one specific formula obtained by replacing by true
or false, depending on the polarity of in '. Note that if this formula is valid in M ,
then ' is -vacuous in M .
If we now assume that we have a logic with polarity, and that we have a model
checker for this logic that generates counter-examples to non-valid formulas, then we
can enhance our model checker to have the following properties:
Enhanced Model-Checker
Given a formula φ, a model M, and a set S of sub-formulas of φ:
1. If M ⊭ φ, generate a counter-example.
2. If M |= φ, and φ is S-vacuous in M, then output all sub-formulas in min(S) that
do not affect φ in M.
3. If M |= φ, and φ is not S-vacuous in M, then generate |min(S)| interesting
witnesses of M, such that for each ψ ∈ S, at least one of them is a ψ-interesting
witness for φ in M.
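One way to organize the three cases above is sketched below; model_check is assumed to return a verdict together with a counter-example, and substitute and polarity are the helpers sketched in the previous sections (all assumed interfaces, not a particular tool's API).

def enhanced_model_check(model, phi, min_S, model_check, substitute, polarity):
    holds, cex = model_check(model, phi)
    if not holds:
        return ('fail', cex)                      # case 1: counter-example
    not_affecting, witnesses = [], []
    for psi in min_S:
        # phi holds here, so X is false for positive polarity, true otherwise.
        X = False if polarity(phi, psi) == '+' else True
        sub_holds, sub_cex = model_check(model, substitute(phi, psi, X))
        if sub_holds:
            not_affecting.append(psi)             # psi does not affect phi
        else:
            witnesses.append((psi, sub_cex))      # a psi-interesting witness
    if not_affecting:
        return ('vacuous', not_affecting)         # case 2
    return ('pass', witnesses)                    # case 3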
The number of formulas checked if the formula is valid is |min(S)| + 1, since for
each ψ in min(S) we generate a formula to be model checked. If it is valid, then φ is
S-vacuously valid. Otherwise, the model checker returns a counter-example, which is
a ψ-interesting witness for φ. Since all formulas we generate are smaller in size than φ,
we get that the complexity of the enhanced model checker is O(|min(S)| · C_M(|φ|)),
where C_M(n) is the complexity of model checking a formula of size n.
In the case where φ is S-vacuous, the enhanced model checker only outputs all
the minimal sub-formulas that do not affect φ. However, the user may be interested in
knowing exactly which of the sub-formulas in S are vacuous. To achieve this goal, we
may need to check as many as |S| formulas.
5 Practical Vacuity Detection and Interesting Witness Generation
The motivation of this work was to provide an indication of vacuity and interesting
witnesses to users of model checking. However, the complexity results of Sections 3.3
and 4.4 do not allow this in reasonable time. While the complexity of determining
vacuity and generating interesting witnesses is only j'j times the complexity of model
checking a formula of size j'j, in practical terms this is too high, because a typical
may take hours of CPU time to verify. We would like a method of determining
vacuity and generating an interesting witness for a formula ' that is no more expensive
than model checking '.
In order to give an efficient solution, we will limit ourselves to a subset of ACTL,
called w-ACTL, and to a subset of the sub-formulas, called important sub-formulas
with respect to which we will check vacuity. We will then show that the complexity of
checking vacuity of important sub-formulas in w-ACTL is exactly the complexity of the
model checking ' in M . Finally, we show some examples.
5.1 w-ACTL
In this section we define witness-ACTL (w-ACTL), a subset of ACTL, which is in turn a
subset of CTL. Informally, w-ACTL formulas are ACTL formulas in which for all binary
operators (-, AU, AV), at least one of the operands is a propositional formula. We
divide the ACTL operators into propositionaloperators (:, -) and temporal operators
(AX, AG, AF, AU, AV), and call a formula which has only propositional operators, a
simple formula. w-ACTL is the set of state formulas described by the following:
Definition 26 (w-ACTL).
1. Every simple formula is a state formula.
2. If f is a simple formula, - is a state formula, and ffi is some binary operator of
are state formulas.
3. If - is a state formula, and ffi is some unary temporal operator of ACTL (AG, AF,
AX), then ffi(-) is a state formula.
The definition of w-ACTL may seem artificial at first glance. However, in our
experience this is not the case. Most of the formulas written by users are w-ACTL
formulas, which capture nicely the linear nature of most specifications.
5.2 Important sub-formulas
In order to be able to efficiently check vacuity and generate interesting witnesses for w-
we have to restrict ourselves to a subset of the sub-formulas for which
vacuity will be detected. Rather than being a drawback, we show that distinguishing
between important and non-important sub-formulas is an advantage, as it is a reflection
of how engineers use CTL to specify their designs.
We first define the set of important sub-formulas of a formula, with respect to which
vacuity will be checked. Basically, the important sub-formulas are all the temporal
(non-simple) ones.
Definition 27 (Important sub-formulas). Let ' be a w-ACTL formula, we define
Imp(') recursively:
1. If ' is simple, then f'g.
2. If is non-simple, and f is simple, then
3. If are simple, then
4. If
The choice made in item 3 above may seem arbitrary. The reason that only f 1 is
important is that f 2 is the only operand that can cause vacuity. For A[f 1 U f 2
cause vacuity of f 1 if it is always true immediately. However, f 1 cannot cause vacuity
of f 2 because even if f 1 is always true forever, the AU operator still requires something
of eventually it occurs. For the AV operator, f 2 can cause vacuity of f 1 if it is
always true forever, because then nothing is required of f 1 . However, f 1 cannot cause
vacuity of f 2 if it is always true immediately, because in that case, the AV operator still
requires something of f 2 : that it occurs at the same time.
We justify our choice of the temporal sub-formula of a binary operator as an important
sub-formula as follows. The choice is simply a reflection of how engineers tend to
use CTL to code a specification, as well as how they tend to design their hardware. For
instance, consider the following specification:
AG(request → AX(req accepted → AXAX(read busy ∨ write busy))) (7)
which expresses the requirement that if a request is accepted (which happens or not
one cycle after it appears), then two cycles later either the read busy signal is asserted,
or the write busy signal is asserted. Logically, this is equivalent to the formula:
AG(¬request ∨ AX(¬req accepted ∨ AXAX(read busy ∨ write busy))) (8)
Vacuity of Formula 7, which detects that M |= AG(¬request), would probably
detect a problem in the model, because otherwise the signal called request is mean-
ingless. However, a vacuity which detects that M |= AG(AX(¬req accepted ∨
AXAX(read busy ∨ write busy))) is quite often useless to the engineer, as it is
highly likely that she has designed her logic intentionally for this to be so, and prevents
read busy or write busy from being asserted spuriously by not asserting req accepted
if there was not a request the previous cycle. Thus, for the binary operators, we have
chosen the non-simple operand to be the important sub-formula.
5.3 Vacuity and Interesting Witnesses for w-ACTL formulas
Recall that ' is Imp(')-vacuous, if it is vacuous with respect to a sub-formula
(Theorem 7). We now show that min(Imp(')) has only one
sub-formula in it, meaning that Imp(')-vacuity checking will be easy.
28 For every ' in w-ACTL the size of min(Imp(')) is one.
Proof. The proof proceeds by induction:
1. If ' is simple, then and we are done.
2. If Every sub-formula in Imp( )
is a sub-formula of and therefore of '. This means that ' is not minimal
in Imp('), so using the induction hypothesis
3. If
Using the same argument as above, ' is not in min(Imp(')), meaning that
or in the second case
Again, using the induction hypothesis, we conclude that
ut
Since we are dealing with ACTL formulas (negation can be applied to atomic
propositions only), and because of the way we choose the important sub-formula (an
important sub-formula is never an operand of ¬), we get that min(Imp(φ)) always
has a positive polarity in φ. We now define the formula witness(φ) as follows:
witness(φ) = φ[min(Imp(φ)) ← false]
According to Corollary 14 and Theorem 25, it is enough to check witness(φ) in order
to detect Imp(')-vacuity and generate an Imp(')-interesting witness. Given a model
checker that can generate counter-examples for ACTL formulas, we can design an
enhanced model checker for w-ACTL (see Section 4.4) with the following properties:
Given a w-ACTL formula φ and a model M:
1. If M ⊭ φ, generate a counter-example.
2. If M |= φ and M |= witness(φ), output that the formula passed vacuously.
3. If M |= φ and M ⊭ witness(φ), output one interesting witness W, such that
for every important sub-formula ψ ∈ Imp(φ), W is a ψ-
interesting witness to φ in M.
5.4 Detailed Vacuity
If Imp(')-vacuity is detected by our enhanced model checker, there is no indication of
which of the pre-conditions caused the vacuity. As we said before, we can solve this by
checking jImp(')j+1 formulas instead of just 2. However, in our specific case, we can
actually check only log2(|Imp(φ)|) formulas. One can easily prove (using the
same proof as in Claim 28) that the sub-formulas in Imp(φ) are linearly ordered. Also,
it follows directly from Lemma 3 that if ξ ≤ ψ, then if φ is ψ-vacuous, then φ is also
ξ-vacuous. Combining these observations, we get that there is one minimal sub-formula
ψ ∈ Imp(φ), such that for all ξ ∈ Imp(φ), φ is ξ-vacuous iff ξ ≤ ψ. This means that
we can use binary search on Imp(φ) to find this ψ. To implement this, we need only
check log2(|Imp(φ)|) formulas.
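Since the important sub-formulas form a chain and vacuity propagates from larger to smaller elements of the chain, the boundary can be located by binary search, as in the sketch below; imp_chain is assumed to list Imp(phi) ordered from the smallest sub-formula upward, and is_vacuous_wrt is the single-sub-formula check from Section 3.3 (both are assumed interfaces).

def deepest_vacuous(imp_chain, is_vacuous_wrt):
    # The chain splits into a vacuous prefix and a non-vacuous suffix;
    # return the largest sub-formula in the vacuous prefix (None if the
    # formula is not vacuous with respect to any of them).
    lo, hi, boundary = 0, len(imp_chain) - 1, None
    while lo <= hi:
        mid = (lo + hi) // 2
        if is_vacuous_wrt(imp_chain[mid]):
            boundary = imp_chain[mid]
            lo = mid + 1          # a larger sub-formula may also be vacuous
        else:
            hi = mid - 1
    return boundary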
5.5 Semantic refinements
The careful reader will have noted that our definition of important sub-formulas will not
detect vacuity in some basic cases, among them propositional antecedent failure. For
instance, consider the following formula:
AG(read request → read enable) (9)
The vacuity detection (and witness generation) formula we generate for Formula 9
as defined above is:
AG(false) (10)
Formula 10 is valid only in a model with no fair paths, and thus detects vacuity
only in that case. Intuitively, this is not satisfying. We would like to be able to detect
propositional antecedent failure.
Another problem is shown by the following Sugar 1 formula:
AG(request ! next event(grant)(acknowledge)) (11)
Formula 11 expresses the requirement that the first grant after a request must be
accompanied by an acknowledge. The ACTL normal form of Formula 11 is:
AG(¬request ∨ A[grant V (¬grant ∨ acknowledge)]) (12)
Thus, the vacuity detection formula for Formula 11 as defined above is:
AG(¬request ∨ A[false V (¬grant ∨ acknowledge)]) (13)
Simplification of the above formula gives:
AG(¬request ∨ AG(¬grant ∨ acknowledge)) (14)
1 Sugar is a syntactic sugaring of CTL [CE81] formulas, and is the specification language used
by the RuleBase formal verification tool. In [BB+96] we outlined its basic features.
Formula 14 will not detect vacuity in the case that a request is never followed by a
grant. Once again, this is not intuitively satisfying. The next event operator expresses
a kind of temporal implication, thus the failure of a grant to occur is a kind of temporal
antecedent failure, and we would like to detect it.
We therefore expand our definition of important sub-formulas as follows:
1. If are simple, and the - operator
is derived from the use of the ! operator by the user
2. If are simple, then
5.6 Implementation details
In theory, a computation path is infinite and therefore, every example is infinite. In prac-
tice, however, the algorithm of [CG+95] will sometimes give finite counter-examples,
when a finite counter-example is enough to show that the formula is false. In every case
but one, the finite counter-example given by [CG+95] is "interesting enough" for our
purposes. The exception is the AU operator. As a positive example to A[φ U ψ], we would
like to see a trace on which ψ occurs, but [CG+95] may give us a counter-example to
A[false U ψ] which ends before ψ has occurred. Therefore, we use A[(AF false) U ψ] to
get an infinite counter-example, just as [CG+95] uses EG true to get an infinite example.
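The AU adjustment can be applied as a small rewriting pass over the witness formula; the nested-tuple representation below is an assumption used only for illustration.

def force_infinite_au(phi):
    # Rewrite A[false U psi] into A[(AF false) U psi] so that the
    # counter-example produced for the witness formula is an infinite
    # path on which psi eventually occurs.
    if not isinstance(phi, tuple):
        return phi
    op, *args = phi
    args = [force_infinite_au(a) for a in args]
    if op == 'AU' and args[0] is False:
        args[0] = ('AF', False)
    return (op, *args)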
5.7 Examples
We now show the generation of an interesting witness formula. We use a typical Sugar
formula as an example:
AG(request → next event(data)[4](last data)) (15)
Formula 15 states that last data should be asserted with the fourth data after a
request. Since last data is considered to be non-simple (because it is the second operand
of a next event operator) the interesting witness formula is:
AG(request → next event(data)[4](false)) (16)
We convert Formula 16 into ACTL normal form:
AG(¬request ∨ A[data V (¬data ∨ AXA[data V (¬data ∨ AXA[data V
(¬data ∨ AXA[data V (¬data ∨ false)])])])]) (17)
It is easy to see that Formula 17 is valid iff either a request never occurs, or no
request is ever followed by four datas. Also, it is clear that if Formula 17 is found to be
non-valid, the counter-example will be an interesting witness of Formula 15, on which
a request followed by four datas will occur.
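The translation used for Formulas 16 and 17 can be reproduced mechanically; the sketch below builds the ACTL normal form of next event(data)[n](body) as nested AV operators over a tuple representation (the representation itself is an assumption).

def next_event_n(data, body, n):
    # next_event(data)[n](body): `body` must hold together with the
    # n-th occurrence of `data`.  The innermost level carries `body`
    # directly; every outer level wraps the remainder in AX.
    f = body
    for level in range(n):
        inner = f if level == 0 else ('AX', f)
        f = ('AV', data, ('OR', ('NOT', data), inner))
    return f

# next_event_n('data', False, 4) yields the structure of Formula 17,
# up to the surrounding AG(!request | ...) context.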
Now examine the following formula, which expresses the fact that we require q to
occur an infinite number of times:
AG AF q (18)
The interesting witness formula for Formula 18 is:
AG AF false (19)
If Formula 18 is valid, it cannot be vacuously valid unless there are no fair paths,
and indeed Formula 19 is non-valid in all non-empty models. The counter-example to
Formula 19 will be a computation path, on which q will appear infinitely many times
(because Formula 18 is valid in M).
6 Comparison with Previous and Related Work
In this section, we compare our work with a previous version of our theory, and with
related work.
6.1 Comparison with Previous Work
In a previous version of this paper [BB+97], we required that an interesting witness to
formula ' to be a single path, on which all important sub-formulas affect the validity
of '. This requirement was a result of the practical motivation of our original work. In
this paper, an interesting witness is defined per sub-formula, so that interesting validity
is demonstrated by multiple paths. The new definition is more natural, because, as we
showed in Section 4.4, it allows us to guarantee interesting witnesses whenever we
can guarantee counter-examples. It thus solves the problem raised by [KV99] of the
following
G(request → F grant) (20)
Consider a model M with two paths, one path that never satisfies request and
a second path that always satisfies grant. If we require that an interesting witness
be a single path, then there is no interesting witness to Formula 20 in such a model,
despite the fact that there exists a counter-example to Formula 20 in any model in which
Formula 20 is not valid.
6.2 Comparison with Related Work
Other works, including [BB94] and [PP95], have noted the problem of trivial validity,
and shown how to avoid them using hand-written checks. Our original paper [BB+97]
was, we believe, the first attempt to formalize the notion of trivial validity, as well as
the first attempt to detect it automatically under symbolic model checking.
Philosophers have also dealt with the problem of trivial validity. Relevance logic
(also known as relevant logic) is a non-standard logic designed to prevent the paradoxes
of material and strict implication. These occur when an antecedent is irrelevant to the
consequent, as in the formula While relevance logic
deals with the problem by defining a new logic, our approach is different. We formalize
the notion of vacuity and provide a method to detect it while leaving the logic itself
unchanged.
In this paper, we use the term interesting witness to mean a computation path
showing one non-trivial example of the validity of a valid formula. We are the first to
use the term interesting witness, and the first to generate positive examples for valid
non-existential formulas. In [HBK93], Hojati, Brayton and Kurshan describe counter-example
generation for model checking using CTL and language containment using
L-automata [Kur90]. In [CG+95], Clarke, Grumberg, McMillan and Zhao describe the
counter-example and witness generation algorithmof SMV [McM93]. Neither [HBK93]
nor [CG+95] produce interesting witnesses for valid non-existential formulas.
In [KV99], Kupferman and Vardi presented an extension of [BB+97] from w-ACTL
to CTL . Their results for vacuity are the same as those presented here, but they require
that an interesting witness to a CTL formula be a single path.
7 Conclusions and future work
We have formalized the notion of vacuity and interesting witness for logics with polarity.
We have shown a practical method for detecting vacuity and generating interesting
witnesses for w-ACTL formulas. As discussed above, the ability to detect vacuity and
provide an interesting witness are extremely important in the practical application of
model checking to industrial hardware designs.
Although the definition of vacuity we have presented is simple and elegant, there
is still territory left uncovered. Pnueli [Pnu97] has suggested the following example:
in a model M such that M j= AGp, the formula AGAFp is valid, but our intuition
tells us that the user is somehow "missing the point". A possible approach for solving
this problem, is to refine our definition of vacuity. Instead of checking whether a sub-formula
can be replaced by any other sub-formula, we will check whether it can be
replaced by some "simpler" formula. The term "simpler" is a vague notion, but there
are some immediate examples: p is simpler than AF (p), AG(p) is simpler than AF (p),
and perhaps even AG(p) - AF (q) is simpler than A[pUq].
A possible improvement could be done to the efficiency of vacuity checking. Instead
of using the model checker as a black box, devise efficient model checking algorithms
specifically for vacuity checking. A trivial enhancement would be to cache intermediate
results in the model checker, since all the vacuity checking formulas are very similar.
--R
The Logic of Relevanceand Necessity.
"Formally verifying a microprocessor using a simulation methodology"
"RuleBase: an Industry-Oriented Formal Verification Tool"
"Efficient Detection of Vacuity in ACTL Formulas"
"Characterizing finite Kripke structures in propositional temporal logic"
"On ACTL Formulas Having Deterministic Counterexamples"
"Design and synthesis of synchronization skeletons using Branching Time Temporal Logic"
"Characterizing Properties of Parallel Programs as Fixed-point"
"Efficient Generation of Counterexamples and Witnesses in Symbolic
MIT Press
"'Sometimes' and 'Not Never' Revisited: On Branching versus Linear Time Temporal Logic"
"Model checkingandmodular verification."
"BDD-based debugging of designs using language containment and fair CTL."
"Vacuity Detection in Temporal
"Analysis of Discrete Event Coordination,"
"Relevance Logic"
"Symbolic
"An Algebraic Definition of Simulation between Programs"
"Formal Verification of a Commercial Serial Bus Interface"
"A Temporal Logic of Concurrent Programs"
"Linear and Branching Structures in the semantics and logics of reactive systems"
"Fair Synchronous Transition Systems and Their Liveness Proofs"
question from the audience
"The Computer-Aided Modular Framework - Motivation, Solutions and Evaluation Criteria"
--TR
"Sometimes" and "not never" revisited
Characterizing finite Kripke structures in propositional temporal logic
Model checking, abstraction, and compositional verification
Formally verifying a microprocessor using a simulation methodology
Efficient generation of counterexamples and witnesses in symbolic model checking
RuleBase
Model checking
Symbolic Model Checking
Characterizing Correctness Properties of Parallel Programs Using Fixpoints
Linear and Branching Structures in the Semantics and Logics of Reactive Systems
Vacuity Detection in Temporal Model Checking
Model Checking and Modular Verification
Fair Synchronous Transition Systems and Their Liveness Proofs
Efficient Detection of Vacuity in ACTL Formulaas
Design and Synthesis of Synchronization Skeletons Using Branching-Time Temporal Logic
Analysis of Discrete Event Coordination
BDD-Based Debugging Of Design Using Language Containment and Fair CTL
--CTR
Mats P. E. Heimdahl, Safety and Software Intensive Systems: Challenges Old and New, 2007 Future of Software Engineering, p.137-152, May 23-25, 2007
Hana Chockler , Orna Kupferman , Moshe Y. Vardi, Coverage metrics for temporal logic model checking, Formal Methods in System Design, v.28 n.3, p.189-212, May 2006
Michael W. Whalen , Ajitha Rajan , Mats P.E. Heimdahl , Steven P. Miller, Coverage metrics for requirements-based testing, Proceedings of the 2006 international symposium on Software testing and analysis, July 17-20, 2006, Portland, Maine, USA
I. Pill , S. Semprini , R. Cavada , M. Roveri , R. Bloem , A. Cimatti, Formal analysis of hardware requirements, Proceedings of the 43rd annual conference on Design automation, July 24-28, 2006, San Francisco, CA, USA
Marsha Chechik , Arie Gurfinkel , Benet Devereux , Albert Lai , Steve Easterbrook, Data structures for symbolic multi-valued model-checking, Formal Methods in System Design, v.29 n.3, p.295-344, November 2006
Shoham Ben-David , Cindy Eisner , Daniel Geist , Yaron Wolfsthal, Model Checking at IBM, Formal Methods in System Design, v.22 n.2, p.101-108, March | model checking;interesting witness;formal verification;vacuity;temporal logic |
375681 | Main-memory index structures with fixed-size partial keys. | The performance of main-memory index structures is increasingly determined by the number of CPU cache misses incurred when traversing the index. When keys are stored indirectly, as is standard in main-memory databases, the cost of key retrieval in terms of cache misses can dominate the cost of an index traversal. Yet it is inefficient in both time and space to store even moderate sized keys directly in index nodes. In this paper, we investigate the performance of tree structures suitable for OLTP workloads in the face of expensive cache misses and non-trivial key sizes. We propose two index structures, pkT-trees and pkB-trees, which significantly reduce cache misses by storing partial-key information in the index. We show that a small, fixed amount of key information allows most cache misses to be avoided, allowing for a simple node structure and efficient implementation. Finally, we study the performance and cache behavior of partial-key trees by comparing them with other main-memory tree structures for a wide variety of key sizes and key value distributions. | INTRODUCTION
Following recent dramatic reductions, random access memory
(RAM) is competitive in price with the disk storage of a few years
ago. With multi-gigabyte main memories easily affordable and expandable
(on 64-bit architectures), applications with as much as 1
or 2 GB of data in main memory can be built with relatively inexpensive
systems, and moderate growth in space requirements need
not be a concern. For these reasons, and spurred by the stringent
performance demands of advanced business, networking and internet
applications, a number of main-memory database and main-memory
database cache products have appeared in the market [2,
22, 29]. These products essentially fulfill the expectations of re-search
on main-memory databases of the last fifteen years (see, for
example, [9, 12, 15, 16]), by providing an approximate order-of-
magnitude performance improvement for simple database applica-
tions, when compared to disk databases with data fully resident in
main memory [2, 29].
Adapting main-memory database algorithms to become "cache-
conscious," that is, to perform well on multi-level main-memory
storage hierarchies, has recently received attention in the database
literature [5, 24, 25]. As pointed out in these papers and in
related work (see, for example, [6]), commonly used processors
can now execute dozens of instructions in the time taken for a read
from main memory (a "cache miss"). For instance, memory access
time on a 450 MHz Sun ULTRA 60 is more than 50 times slower
than the time to access data resident in the on-chip cache. Fur-
ther, the disparity between processor speed and memory latency
is only expected to grow since CPU speeds have been increasing
at a much faster rate (60% per year) than memory speeds (10% per
year). Consequently, main-memory index structures should
be designed to minimize cache misses during index traversal, while
keeping CPU costs and space overhead low. Intuitively, cache-miss
costs are minimized with small node sizes and high branching fac-
tors. For example, [6] found that the optimal node size for their B-tree
implementation was slightly larger than 1 cache block (so that
the average number of keys present in a node would fill a cache
block). Low CPU costs for index traversal are important since
cache misses cost no more than a few-dozen instructions. In this
setting, key comparison costs are an important component of CPU
cost, especially for multi-part or variable-length keys. Addition-
ally, space overhead is important since the cost of RAM is approximately
$1/MB, or about 50 times as expensive as disk storage. As
a result, the amount of main memory available to the index may be
limited by cost factors, leading to constraints on index size. The
space usage depends on the space used to represent keys in index
nodes, the space used for pointers, and the average occupancy of
nodes in the tree.
For main-memory OLTP environments which include a mix of
read and update operations, the T-tree 1 and the B+-tree are two
index structures which have been studied previously in the literature
[17, 24]. All main-memory database products of which we are
aware [2, 29], implement the T-tree index structure proposed by
Lehman and Carey [17]. However, in [24], the authors found that
due to the higher cost of cache misses on modern hardware,
B+-trees performed better in experiments conducted with integer keys.
While the assumption of integer keys may be valid in an OLAP
environment assuming suitable pre-processing, a general purpose
database must handle complex keys - multiple parts, null values, variable-length fields, country-specific sort values, etc. Further, key size and key storage strategy directly affect the branching factor for B+-trees. Since branching factors are small already for node sizes based on cache blocks, the height of the tree can vary substantially as key size changes. Thus an initial motivation for our research was the further examination of T-tree and B-tree performance in a main-memory OLTP environment, in order to consider a variety of key storage schemes and key sizes.
1 The T-tree is similar to a binary tree with multiple keys (instead of one) stored in each node.
In [9, 17], the authors suggest avoiding the key size problem by
replacing the key value in the index with a pointer to the data and
reconstructing the key as needed during index traversal. This indirect
key-storage approach has the advantage of optimizing storage
by eliminating duplication of key values in the index, improving
the branching factor of nodes and simplifying search by avoiding
the complexity of storing long or variable-length keys in index
nodes. However, this approach must be re-examined due to
the additional cache misses caused by retrieval of indirect keys.
A second approach to dealing with large, complex keys is to use
key compression to allow more keys to fit in cache blocks. The
key-compression approach has the benefit that the entire key value
can be constructed without accessing data records or dereferencing
pointers. However, typical compression schemes such as employed
in prefix B-trees [4] have the disadvantage that the compressed keys
are variable-sized, leading to undesirable space management overheads
in a small, main-memory index node. Further, depending on
the distribution of key values, prefix-compressed keys may still be
fairly long resulting in low branching factors and deeper trees.
In this paper, we propose the partial-key approach, which uses
fixed-size parts of keys and information about key differences to
minimize the number of cache misses and the cost of performing
compares during a tree traversal, while keeping a simple node structure
and incurring minimal space overhead. A key is represented in
a partial-key tree by a pointer to the data record containing the key
value for the key, and a partial key. For a given key in the index,
which we refer to as the index key for the purposes of discussion, the
partial key consists of (1) the offset of the first bit at which the index
key differs from its base key, and (2) l bits of the index key value
following that offset (l is an input parameter). Intuitively, the base
key for a given index key is the most recent key encountered during
the search prior to comparing with the index key. The partial-key
approach relies on being able to resolve most comparisons between
the search key and an index key using the partial-key information
for the index key. If the comparison cannot be resolved, the pointer
to the data record is dereferenced to obtain the full index key value.
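For concreteness, the information kept per key can be pictured as a small fixed-size record; the sketch below is ours and purely illustrative (the field names are hypothetical, not the paper's):

from dataclasses import dataclass

@dataclass
class PartialKeyEntry:
    # Reference to the data record that holds the full key value; it is
    # dereferenced only when a comparison cannot be resolved from the
    # partial-key information.
    rec: object
    # Offset of the first bit at which this index key differs from its base key.
    pk_offset: int
    # Up to l bits of the index key value following that offset.
    pk_bits: str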
Using the idea of partial keys, we develop the pkT-tree and the
pkB-tree, variants of the T-tree and B-tree, respectively. We describe
search algorithms for these partial-key trees as well as strategies
for maintaining the partial-key information in the presence of
updates. Finally, we conduct an extensive performance study of the
pkT-tree and pkB-tree structures, comparing them to standard
T-trees and B-trees with both direct and indirect key storage schemes.
In our experiments, we consider a wide range of parameter settings
for key size and key value distribution (entropy). We also study
the sensitivity of our partial-key algorithms to l, the number of key
value bits stored in the partial key. Our performance results, given
in detail in Section 5.3, indicate that:
- Of the indexing schemes studied, partial-key trees minimize cache misses for all key sizes.
- Due to lower CPU costs, B-trees with direct key storage are faster than partial-key trees for small key sizes, but slower for larger key sizes.
- Partial-key schemes have good space utilization, only slightly worse than T-trees with indirect key storage, and much better than direct key storage schemes.
- A small, fixed value for l (the amount of partial-key information stored) avoids most indirect key accesses for a wide variety of key lengths and entropies.
In summary, partial-key trees incur few cache misses, impose
minimal space overheads and reduce the cost of key comparisons
without introducing variable-length structures into the node, thus
enabling larger keys to be handled with much of the efficiency
of smaller keys. Further, we expect the relative performance of
partial-key trees to improve over time with the increasing cost of
cache misses.
The remainder of the paper is organized as follows. In Section 2,
we discuss related work. In Sections 3 and 4, we introduce partial-key
comparisons and apply them to search in pkT- and pkB-trees.
In Section 5, we present the results of our performance study. Fi-
nally, in Section 6, we present our conclusions and issues to be
addressed in future work.
2. RELATED WORK
An early study of index structures for main-memory databases
was undertaken in [17]. The authors proposed the T-tree index and,
in order to optimize storage space, advocated storing pointers to
data records instead of key values in the index. However, this design
choice can result in a large number of cache misses since each
pointer dereference to access the key value during a key comparison
could potentially lead to a cache miss. Since at the time of this
early work on main-memory databases, there was little difference
between the cost of a cache-hit and that of a cache-miss, not much
attention was paid to minimizing cache block misses. While most
work on cache-conscious data structures outside of the database
community has focused on optimizing scientific workloads, cache-conscious
behavior was studied in [6] for "pointer-based" structures
including search trees. However, this work focused on actions
which can be taken without programmer cooperation, rather
than explicitly designed data structures.
More recently, Rao and Ross propose two new main-memory
indexing techniques, Cache-Sensitive Search Trees (CSS-tree) [24]
and Cache-Sensitive B+-Trees (CSB+-trees) [25]. Designed for a read-intensive
OLAP environment, the CSS-tree is essentially a very compact and
space-efficient B+-tree. CSS-tree nodes are fully packed with keys
and laid out contiguously, level by level, in main memory. Thus,
children of a node can be easily located by performing simple arith-
metic, and explicit pointers to child nodes are no longer needed.
Further, in the absence of updates, key values can be mapped to integers
such that the mapping preserves the ordering between key
values. Thus, each key value in a CSS-tree is a compact inte-
ger, which is stored in the node itself, eliminating pointer deref-
erences. In summary, the CSS-tree incurs very little storage space
overhead and exhibits extremely good cache behavior. The CSB+-tree
adapts these ideas to an index structure which supports efficient
update (for the CSS-tree, the authors recommend rebuilding from
scratch after a batch of updates). This structure stores groups of
sibling nodes adjacent in memory, reducing the number of pointers
stored in the parent node without incurring additional cache misses.
However, this work continues to assume integer keys. To this ex-
tent, the performance improvements of CSB+-trees or CSS-trees
and partial-key trees are likely to be orthogonal, since the former
focuses on reducing pointer overhead and improving space utilization
while the latter focuses on reducing key-size and comparison
cost.
Our partial-key techniques borrow from earlier work on key compression
[4, 11]. However, there are differences, which we discuss
below. Partial-key trees are most similar to Bit Trees that were introduced
in [11]. Bit Trees extend B+-trees by storing partial keys
instead of full key values for (only) those keys contained in leaf
nodes. The partial key in a Bit Tree consists of only the offset of
the difference bit relative to the previous key in the node. The authors
describe several properties of searches using only the offset
of difference bits, and in particular show the somewhat surprising
result that the precise position of a search key in a leaf node can
be determined by performing exactly one pointer dereference to retrieve
an indirect key. Other than a focus on main-memory rather
than disk, our partial-key trees differ from Bit Trees in the following
respects: (1) partial keys are stored in both internal nodes and
leaf nodes, (2) partial keys contain l bits of the key value following
the difference bit in addition to the difference bit offset, and
(3) searching for a key in a node in a partial-key tree requires at
most one pointer to be dereferenced, and frequently requires no
pointer dereferences, due to the l bits of additional information and
our novel search algorithms.
Prefix B+-trees, proposed in [4], employ key compression to improve the storage space characteristics and the branching factor of B+-trees. Suppose p is the common prefix for keys in the subtree rooted at node N. For a key in node N, the common prefix p can be computed during tree traversal and only the suffix s of the key value is stored in N. Further, when keys move
out of a leaf node due to a split, only the separator, or the shortest
portion of the key needed to distinguish values in the splitting
nodes, is moved. Partial-key trees differ from prefix B+-trees in
the following respects: (1) while prefix trees factor out the portion
common to all keys in a node, partial-key trees factor out information
in common between pairs of adjacent keys within the node,
typically a longer prefix than is common to the whole node, (2)
while in prefix B+-trees, the entire suffix of the separator is stored, in partial-key trees, only the first l bits of the suffix are stored - thus, partial-key trees may lose key value information while the prefix B+-tree does not, (3) in the prefix B+-tree, no pointer dereferences to data records are needed to resolve comparisons. In contrast, in a partial-key tree, pointer dereferences must be performed when the comparison cannot be resolved using the partial-key information, and (4) the keys stored in a prefix B+-tree are variable sized and this complicates implementation. Further, in some cases the separator may not even fit in a 64-byte cache line, causing index nodes to span multiple cache blocks and reducing the branching factor. Thus partial-key trees trade the prefix B+-tree's guarantee of no indirect key references for a low probability of indirect key dereferences, in exchange for simple node structures and more tightly bounded tree heights. This
is reasonable since the cost of a cache-miss is orders of magnitude
lower than the cost of a random disk access.
Ronstrom in his thesis [28] describes the HTTP-tree, a variation
of prefix B+-trees in which further compression is performed
within a node by storing keys relative to the previous key, factoring
out common suffixes, etc. Nodes are also clustered on pages to
facilitate distribution. However, during searches the full key is re-constructed
in order to perform comparisons, and compressed key
sizes are variable. Other lossless compression schemes primarily
for numeric attributes have recently been proposed in the database
literature [13, 23]. Goldstein, Ramakrishnan and Shaft [13] propose
a page level algorithm for compressing tables. For each numeric
attribute, its minimum value occurring in tuples in the page
is stored separately once for the entire page. Further, instead of
storing the original value for the attribute in a tuple, the difference
between the original value and the minimum is stored in the tuple.

Figure 1: A Partial Key (an index key is represented by (1) a pointer to the data record and (2) a partial key: the offset of the difference bit and the l bits of the key value that follow it)
Thus, since storing the difference consumes fewer bits, the storage
space overhead of the table is reduced. Tuple Differential Coding
(TDC) [23] is a compression method that also achieves space savings
by storing differences instead of actual values for attributes.
However, for each attribute value in a tuple, the stored difference is
relative to the attribute value in the preceding tuple.
3. PARTIAL KEY SEARCH
In this section, we describe the partial key approach and algorithms
for performing compares and searches in the presence of
partial keys. We assume that keys are represented as fixed-length
bit strings (though this is not required in general). Further, the bits
are numbered in order of decreasing significance, beginning with
bit 0 (the most significant bit).
3.1 Partial Keys - An Overview
Consider the order in which index keys are visited and compared
with the search key during a traversal of a T-tree or B-tree index.
We observe that for both structures, the key visited so far which is closest in value to the search key is either the most recent key to compare less than the search key or the most recent key to compare greater. It is easy to see that the most recent key to compare less than the search key shares more initial bits with the search key than any other key which compared less than the search key during the search. Similarly for the most recent key which compared greater, thus the observation follows. In fact, very few initial bits may be shared between the most recent key and the search key when, for example, the keys are on either side of a large power of two, but this event is rare.
In the following discussion, we refer to the current key being visited as the index key (as opposed to the search key), and the previously visited key as the base key. In our partial-key schemes, each key is represented in the index by three items: (1) a pointer to the record containing the key, (2) the offset of the first bit at which the index key differs from the base key, and (3) the first l bits of the index key following this offset. We illustrate the construction of a partial key in Figure 1. In this figure, and in others below, the index key or search key is generally referred to as k_i or k_j, and the base key as k_b.
One approach to using this partial key information would mirror
the use of prefixes in a prefix B-tree. In this approach, the search
code would maintain the known prefix of the index key as it traversed
the tree, concatenating appropriate portions of partial keys
as they are encountered. If the known portion is sufficient to resolve
a comparison with the search key, then a cache miss is avoided.
However, it turns out that constructing this prefix is not necessary,
and in fact comparisons can often be resolved by noting the offset
at which the search key differed from the base key, and comparing
that to the offset stored in the partial key for the index key. This observation
ensures that most comparisons are performed with small,
fixed-length portions of the key. Precisely how these comparisons
are performed is the topic of the next section.
3.2 Partial-Key Comparisons
In this section we discuss the properties of difference bits more
formally, and present a theorem that bears directly on comparisons
in partial-key trees.
Let d(k_i, k_j) be the offset of the most significant (thus lowest) bit of difference between keys k_i and k_j. Also, let c(k_i, k_j) be the result, LT, GT or EQ, of the comparison between keys k_i and k_j, depending on whether k_i is <, >, or = k_j. The partial-key approach is based on the observation that for an index key k_j and its base key k_b, noting d(k_j, k_b) in the partial key for key k_j will frequently allow full comparisons of k_j with a search key k_i to be avoided during index retrieval. In particular, if the two keys k_i and k_j compare in the same way (LT, for example) to the base key, and d(k_i, k_b) and d(k_j, k_b) are known, then it is possible to determine how k_i compares to k_j, as well as d(k_i, k_j), without additional reference to the keys unless d(k_i, k_b) = d(k_j, k_b).
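For concreteness, when keys are available as equal-length bit strings, d(k_i, k_j) and c(k_i, k_j) can be computed directly; the helper below is an illustrative sketch written for this text, not the paper's code:

def full_compare(k1: str, k2: str):
    """Return (c, d) for equal-length bit strings: c is 'LT', 'GT' or 'EQ',
    and d is the offset of the most significant differing bit (or the key
    length when the keys are identical)."""
    for i, (a, b) in enumerate(zip(k1, k2)):
        if a != b:
            return ('LT', i) if a < b else ('GT', i)
    return ('EQ', len(k1))

# Hypothetical example: '10111' differs from '00101' first at bit 0 and is larger.
assert full_compare('10111', '00101') == ('GT', 0)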
THEOREM 3.1. Given keys k_i, k_j and k_b such that c(k_i, k_b) = c(k_j, k_b) and d(k_i, k_b) is not equal to d(k_j, k_b), then d(k_i, k_j) = min(d(k_i, k_b), d(k_j, k_b)) and
c(k_i, k_j) = c(k_i, k_b) if d(k_i, k_b) < d(k_j, k_b), and
c(k_i, k_j) = c(k_b, k_i) if d(k_i, k_b) > d(k_j, k_b).
PROOF. Assume without loss of generality that k_i < k_b and k_j < k_b. Suppose that d(k_i, k_b) < d(k_j, k_b); then at the first bit of difference between k_i and k_b, k_i's bit must be 0 and k_b's bit must be 1. It follows from d(k_i, k_b) < d(k_j, k_b) that the bits of k_j agree with the bits of k_b for all bits up to and including d(k_i, k_b), so the corresponding bit of k_j is also 1. Thus, k_i < k_j and d(k_i, k_j) = d(k_i, k_b).
On the other hand, suppose that d(k_i, k_b) > d(k_j, k_b). Then, since k_j < k_b, the bit at position d(k_j, k_b) must be 0 in k_j and 1 in k_b. Further, since d(k_i, k_b) > d(k_j, k_b), the bit at position d(k_j, k_b) must be the same in both k_i and k_b, that is, 1. Further, the bit sequences preceding this bit in both k_i and k_j must be identical. Thus, it follows that k_i > k_j and d(k_i, k_j) = d(k_j, k_b). The other cases, for c(k_i, k_b) = c(k_j, k_b) = GT, follow from symmetry.
The key ideas of Theorem 3.1 are illustrated in Figure 2(a). Here, k_i and k_j are both less than the base key k_b, and d(k_i, k_b) = 9 is greater than d(k_j, k_b) = 5. Thus, as shown in the figure, k_i > k_j, because the first 5 bits of k_i, k_j and k_b match, but on the 6th bit, k_i and k_b are both 1 while k_j is 0.
Theorem 3.1 can be used to compute, for most cases, the result of the comparison between a search key k_i and the index key k_j. This is because the partial key for an index key k_j stores the difference bit offset d(k_j, k_b) with respect to its base key k_b that was encountered previously in the search. Further, since an attempt is made by the search algorithm to compare k_i with k_j, it must be the case that c(k_i, k_b) = c(k_j, k_b), and c(k_i, k_b) and d(k_i, k_b) are available due to the previous comparison between the search key and the base key. Thus, for the case when d(k_i, k_b) is not equal to d(k_j, k_b), Theorem 3.1 can be used to infer c(k_i, k_j) and d(k_i, k_j), which in turn can be propagated to the next index key comparison (for which k_j is the base key).
The only case not handled by Theorem 3.1 occurs when d(k_i, k_b) = d(k_j, k_b). In this case, the only inference one can make is that k_i and k_j are identical up to and including bit d(k_i, k_b); one cannot determine how keys k_i and k_j compare. An example of this is illustrated in Figure 2(b). In both of the cases shown in the figure, d(k_i, k_b) = d(k_j, k_b), but in one, k_i < k_j, and in the other, k_i > k_j. When the difference bits are equal, the l bits of the key value stored for the index key are compared with the corresponding bits in the search key. If these bits are equal, retrieval of the indirectly stored key is required. Note that, as shown in Figure 1, the difference bit itself is not included in the l bits stored with a partial key. Since both k_i and k_j differ from k_b in the value of this bit, the corresponding bits in k_i and k_j must be identical.
procedure COMPAREPARTKEY(searchKey, indKey, comp, offset)
begin
1. if (indKey.pkOffset < offset)
2.   if (comp = LT)
3.     comp := GT
4.   else
5.     comp := LT
6.   offset := indKey.pkOffset
7. else if (indKey.pkOffset = offset)
8.   if (comp = LT)
9.     partKey := searchKey[0:indKey.pkOffset-1] · 0 · indKey.partKey
10.  else
11.    partKey := searchKey[0:indKey.pkOffset-1] · 1 · indKey.partKey
12.  comp, offset := compare(searchKey,
13.                          partKey)
14.  if (offset > indKey.pkOffset + indKey.pkLength)
15.    comp := EQ;
16. return [comp, offset];

Figure 3: COMPAREPARTKEY: Comparison using Partial Keys.
Procedure COMPAREPARTKEY in Figure 3 utilizes Theorem 3.1
to compute the result of a comparison between a search key and an
index key containing partial-key information. The partial-key information
consists of three fields pkOffset, pkLength and partKey
which are described in Table 1. The input parameters comp and
offset to COMPAREPARTKEY are the result of the comparison and
the difference bit location of the search key with respect to the base
key. Steps 1-6 of the procedure are a straightforward application
of Theorem 3.1. In case the difference bit locations for the search
key and index key are equal, the keys must be identical until the
difference bit. Further, the difference bit itself in both keys must be
either 0 or 1 depending on whether the keys are less than or greater
than the base key. In Step 12, function COMPARE is invoked to
compute the comparison of the l bits following the difference bit
in both keys. Function COMPARE(k1, k2) returns a pair of values
comp, offset with the following semantics. The value comp is
one of EQ, LT or GT depending on whether the bit sequence k1
is equal to, less than or greater than the bit sequence k2 when the
two sequences are compared bit by bit. The return value offset is
the location of the most significant bit in which the two keys differ.
Thus, in steps 13-14, since partKey may not represent the entire
index key, if all bits in partKey agree with the corresponding bits in
searchKey, the comparison between the search key and index key
cannot be resolved and the procedure returns EQ. In this case, the
semantics of the returned offset is simply that the two keys agree
on the first offset-1 bits.
Note that, in Step 12, function COMPARE only needs to consider
bits starting from bit offset indKey.pkOffset in the two keys (since
the corresponding bits preceding this point are identical for both
Figure 2: Examples of Comparisons between Keys k_i and k_j that Can and Cannot be Resolved
Table 1: Partial-Key Notation

Symbol          Description
l               Maximum number of bits from key value stored in partial key
k_i.pkOffset    Offset of difference bit of key k_i and its base key
k_i.partKey     Bits following location k_i.pkOffset in key k_i
k_i.pkLength    Number of partial-key bits stored with key k_i
k_i[l:m]        Sequence consisting of bits between offsets l and m in key k_i
k1 · k2         Concatenation of bit sequences k1 and k2
N.numKeys       Number of keys in node N
N.key[i]        i-th key in node N
N.ptr[i]        i-th pointer in node N
keys). Further, in most cases, procedure COMPAREPARTKEY performs
only one integer comparison involving difference bit offsets;
however, additional expense may be incurred since the bit of offset
must be computed in anticipation of the next comparison. While
a greater cost than simple integer compares, it compares well to
comparisons of larger keys, as shown in Section 5.
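The Python sketch below mirrors the logic of COMPAREPARTKEY for keys represented as equal-length bit strings. It is our illustration rather than the authors' implementation: Entry stands for the stored partial-key fields, full_compare repeats the bit-string comparison sketched earlier so that the fragment is self-contained, and a returned 'EQ' means the comparison is unresolved and the full key must be fetched.

from collections import namedtuple

# Illustrative sketch only; not the authors' code. Keys are equal-length bit
# strings. An Entry holds the stored partial key for one index key.
Entry = namedtuple('Entry', ['pk_offset', 'pk_bits'])

def full_compare(k1, k2):
    # Three-way compare of bit strings plus the first differing bit offset.
    for i, (a, b) in enumerate(zip(k1, k2)):
        if a != b:
            return ('LT', i) if a < b else ('GT', i)
    return ('EQ', len(k1))

def compare_partial_key(search_key, entry, comp, offset):
    """comp/offset: how the search key compares to the entry's base key and
    where they first differ. Returns (comp', offset') against this index key;
    'EQ' here means 'unresolved: dereference the full key'."""
    if entry.pk_offset < offset:
        # Theorem 3.1: the index key differs from the base key earlier than
        # the search key does, so the result flips relative to the base key.
        comp = 'GT' if comp == 'LT' else 'LT'
        offset = entry.pk_offset
    elif entry.pk_offset == offset:
        # Rebuild a prefix of the index key: its first pk_offset bits equal the
        # search key's; its difference bit is 0 only if it lies below its base
        # key (comp = 'LT'), and for an unresolved ('EQ') input, which arises
        # only within a node where keys ascend, the bit is likewise 1.
        diff_bit = '0' if comp == 'LT' else '1'
        prefix = search_key[:entry.pk_offset] + diff_bit + entry.pk_bits
        comp, offset = full_compare(search_key[:len(prefix)], prefix)
        # comp == 'EQ' now means the stored bits were exhausted without
        # resolving the comparison.
    # else: entry.pk_offset > offset, and comp/offset carry over unchanged.
    return comp, offset

# Hypothetical keys: base '0010', index key '0110' (pk_offset 1, l = 2 bits '10'),
# search key '0111', which compares GT to the base with difference bit 1.
assert compare_partial_key('0111', Entry(1, '10'), 'GT', 1) == ('GT', 3)

Within a node, where keys ascend, the same rule also handles an unresolved ('EQ') input, since the index key then lies above its base key.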
The partial-key scheme can be adapted to multi-segment keys,
even if some segments are of arbitrary length. The idea is to treat
the pkOffset as a two digit number, where the first digit indicates
the key segment and the second indicates an offset within that seg-
ment. The partKey field may be limited to bits from a single seg-
ment, or at the cost of more complexity, span segments.
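A minimal illustration of this idea (ours, not the paper's): if the offset is kept as a (segment, bit) pair, the "earlier difference dominates" ordering that the comparisons above rely on is preserved by ordinary lexicographic comparison.

# Offsets for multi-segment keys as (segment index, bit offset within segment).
# Tuples compare lexicographically, so an earlier segment always dominates.
offset_in_segment_0 = (0, 17)
offset_in_segment_2 = (2, 3)
assert offset_in_segment_0 < offset_in_segment_2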
3.3 Partial-Key Nodes
The COMPAREPARTKEY procedure described in the previous
section is the basic building block for performing retrievals in pkT-
trees and pkB-trees. Before we present the complete algorithm for
searching in a tree, we present below a linear encoding scheme for
computing the partial keys for an array of keys in an index node N .
We also present a linear search algorithm for finding a search key
in the node if it is present, and if it is not, the pair of adjacent keys
in the node between which the search key lies.
Linear Encoding of Partial Keys. The base key for each key in
N is simply the key immediately preceding it in N . For the first
key N:key[0], the base key is a key in an ancestor of N in the tree
that is compared with the search key during the tree traversal before
node N is visited. Thus, the base key for the first key depends on
the tree structure and is different for the pkT-tree and pkB-tree. (We
discuss this further in the following section.)
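A sketch of this linear encoding for a single node, assuming keys are equal-length bit strings and the base key of the first entry is supplied by the caller (illustrative code, not the paper's):

def encode_node(keys, base_key, l):
    """Compute (pk_offset, pk_bits) for each key in a node: each key is encoded
    against the key preceding it, and the first key against base_key."""
    entries = []
    prev = base_key
    for k in keys:
        # Offset of the first bit at which k differs from its base key.
        off = next(i for i, (a, b) in enumerate(zip(k, prev)) if a != b)
        entries.append((off, k[off + 1: off + 1 + l]))
        prev = k
    return entries

# Hypothetical node with ascending 4-bit keys and l = 2:
print(encode_node(['0101', '0110', '1001'], base_key='0100', l=2))
# -> [(3, ''), (2, '0'), (0, '00')]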
Simple Linear Search Algorithm. When searching for a key in
the index, the node N is visited after the search key is compared
with the base key for N:key[0]. Let comp and offset denote the
result of the comparison and the offset of the difference bit between
the search key and the base key. Then, in order to locate the
position of the search key in N , procedure COMPAREPARTKEY
(see
Figure
can be used to compute the comparison and the difference
bit offset between the search key and N:key[0]. In case
the comparison cannot be resolved using the partial-key information
for N:key[0] (that is, COMPAREPARTKEY returns EQ), then
N:key[0] is dereferenced and the search key is compared with the
full key corresponding to N:key[0]. The result of the comparison
and difference bit offset is then used to compare the search key with
N:key[1], and so on. The above steps of comparing with the search
are repeated for successive keys in N with the comparison and
difference bit offset for the previous key - until a key that is greater
than or equal to the search key is found.
However, this naive linear search strategy may perform unnecessary
dereferences. The following example illustrates this.
EXAMPLE 3.2. Consider the node N in Figure 4. The difference bit for each key (with respect to the previous key) is marked by an arrow, and the bits following the difference bit are stored in the partial key for each key. Let the base key for the first key in N be 00101 and let the search key be 10111.

Figure 4: Linear Searching for Key in Node
After the search key is compared with the base key, the comparison
and difference bit offset are GT and 0, respectively. Invoking
COMPAREPARTKEY with the search key, N:key[0], GT and 0 returns [EQ, 2] since N:key[0]:pkOffset = 0 and the search key matches N:key[0] on the first two bits. Thus, N:key[0] would be dereferenced, and comp and offset after the comparison with N:key[0] are GT and 2, respectively. Next the search key is compared with N:key[1] by invoking COMPAREPARTKEY. Since N:key[1]:pkOffset = 3, which is greater than 2, the difference bit offset for the search key and N:key[0], COMPAREPARTKEY returns [GT, 2], and N:key[1] is not dereferenced. The next invocation of COMPAREPARTKEY with N:key[2] returns [GT, 3] since N:key[2]:pkOffset equals 2 and the bit sequence 1010 constructed for N:key[2] is smaller than 1011, the corresponding bits of the search key. After returning [GT, 3] for N:key[3], COMPAREPARTKEY moves to key 4 and returns [LT, 1] for N:key[4] since it finds that N:key[4]:pkOffset = 1 is less than 3, the offset
returned for N:key[3]. Thus, the simple linear search algorithm
stops at N:key[4] and the position of the search key in N is determined
using only one key dereference, that of N:key[0].
However, the position of the search key can also be determined
without dereferencing any keys, including N:key[0]. The reason
for this is that COMPAREPARTKEY returns [EQ, 2] when it is invoked with N:key[0]. Thus, at this point, we know that the first two bits of N:key[0] are 10, since the first two bits of N:key[0] agree with those of the search key. Since N:key[1]:pkOffset = 3, we also can conclude that the first two bits of N:key[1] agree with those of N:key[0] and are thus 10. Since N:key[2]:pkOffset = 2 and N:key[2] is greater than N:key[1], it must be the case that the third bit of N:key[2] is 1 (and the third bit of N:key[1] must have been 0).
Further, the fourth bit of N:key[2] can be obtained from its partial
key and thus we can conclude that the first four bits of N:key[2]
are 1010. Since the search key is 10111, the comparison between
the search key and N:key[2] can be resolved and is [GT, 3]. Subsequent
comparisons can then be carried out as described earlier
to conclude that the search key lies in between N:key[3] and
N:key[4]. Thus, the position of the search key can be determined
without dereferencing a single key.
Linear Search Algorithm Requiring at most One Key Deref-
erence. Procedure FINDNODE, shown in Figure 5, avoids the un-necessary
dereferences made by the simple linear search algorithm.
When comparing the search key with an index key in node N , in
case procedure COMPAREPARTKEY returns EQ, that is, the comparison
between a search key and index key cannot be resolved,
FINDNODE does not immediately dereference the index key. In-
stead, it exploits the semantics of [EQ, offset] returned by COMPAREPARTKEY
procedure FINDNODE(N , searchKey, offset)
begin
1. high := N .numKeys;
2. low := -1;
3. cur off := offset;
4. cur cmp := GT;
5. cur := low +1;
6. while (cur < high) {
7.   cur cmp, cur off :=
8.     COMPAREPARTKEY(searchKey, N.key[cur], cur cmp, cur off)
9.   if (cur cmp = LT)
10.    high := cur;
11.    break;
12.  else if (cur cmp = GT)
13.    low := cur;
14.    offset := cur off;
15.  cur++;
16. }
17. if (high - low > 1)
18.   low, high, offset := FINDBITTREE(N, searchKey, low, high)
19.   /* cache miss */
20. return [low, high, offset];

Figure 5: FINDNODE: Linear Searching for a Key in a Node Using Partial Keys.
(which is that the search key and the index key
agree on the first offset-1 bits) to try and resolve comparisons with
subsequent keys. This was illustrated earlier in Example 3.2, where
[EQ, 2], the result of the comparison with N:key[0], was useful in
resolving the comparison operation with key N:key[2]. In fact, procedure
COMPAREPARTKEY as already stated correctly handles the
value of EQ as an input parameter when called from FINDNODE,
and an informal proof of this fact can be found in Appendix A.
Procedure FINDNODE accepts as input parameters node N in
which to perform the linear search and the difference bit offset between
the search key and the base key for N:key[0]. It assumes that
both the search key and N:key[0] are greater than the base key with
respect to which N:key[0]'s partial key is computed. Similar to the
simple linear search algorithm described earlier, it compares the
search key with successive index keys in the node until an index key
larger than the search key is found. Procedure COMPAREPARTKEY
is used to perform every comparison and the results of the previous
comparison (stored in variables cur cmp and cur off) are passed
as input parameters to it. Unlike the simple linear search scheme,
an index key is not immediately dereferenced if the precise result
of the comparison between the key and the search key cannot be
computed. Instead, variables low and high are used to keep track
of the positions of index keys in N that the search key is definitely
known to be greater than and less than, respectively.
At the end of a sweep of keys in node N , if high - low is greater
than 1, then it implies that the precise position of the search key in
N is ambiguous. In this case, procedure FINDBITTREE is used to
locate the exact position of the search key between low and high
it returns the consecutive keys between which the search key lies
and the difference bit offset of the search key with respect to the
lower key. Procedure FINDBITTREE employs the search algorithm
for Bit Trees described in [11] and requires exactly one key to be
dereferenced. In a nutshell, the algorithm performs a sequential
scan of keys in N between offsets low and high. It maintains a
variable pos, which is initially set to low. For each key examined,
if for the difference bit offset in its partial key, the bit value in the
search key is 1, then variable pos is set to the position of the key in
N . On the other hand, if the bit value in the search key is 0, then
keys for which the difference bit offset is greater than that of the current key are skipped and the next key examined is the key whose difference bit offset is less than that of the current key. Once all the keys have been examined, N:key[pos] is dereferenced and is compared
with the search key. If comp, offset is the pair returned by
compare(searchKey, N:key[pos]), then FINDBITTREE takes one
of the following actions:
1. If comp = EQ, return [pos, pos, offset].
2. If comp = GT (LT), suppose that high is the position of the first key to the right (left) of pos for which the difference bit offset is less than that of N:key[pos]. Return [high-1, high, offset].
The correctness of FINDBITTREE for Case 2 above is due to the
following property of pos: Between pos and a key whose difference
bit offset is equal to that of pos, there is a key whose difference
bit offset is less than pos's. We refer the reader to [11] for
details.
The variable offset returned by FINDNODE is the difference bit
offset between the search key and N .key[low - 1]. In case low =
-1, that is, the search key is less than N:key[0], then offset is the
difference bit offset between the search key and the base key for
N:key[0]. Finally, if the search key is contained in N , then the procedure
returns the position of the index key that equals the search
key. Revisiting Example 3.2, FINDNODE determines the position
of the search key in node N without requiring any keys to be deref-
erenced. Successive invocations of procedure COMPAREPARTKEY
for the sequence of keys in N return [EQ, 2], [EQ, 2], [GT, 3], [GT,
3] and [LT, 1].
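The sketch below captures the spirit of FINDNODE for bit-string keys. It reuses the compare_partial_key and full_compare sketches given earlier and, for simplicity, resolves an ambiguous position by dereferencing the ambiguous keys directly rather than by the Bit-Tree scan of [11] (which needs exactly one dereference); the offset bookkeeping used to seed the next level of the search is also omitted. It is an illustration, not the paper's algorithm.

def find_in_node(search_key, entries, full_keys, offset):
    """entries[i]: Entry(pk_offset, pk_bits) as in the earlier sketch;
    full_keys[i] stands in for dereferencing the i-th key's data record.
    Assumes (as FINDNODE does) that the search key and entries[0]'s key are
    both greater than entries[0]'s base key and that offset is the search
    key's difference-bit offset with that base key. Returns (low, high):
    the search key lies between keys low and high; low = high is a match."""
    low, high = -1, len(entries)
    cmp_, off = 'GT', offset
    cur = 0
    while cur < high:
        cmp_, off = compare_partial_key(search_key, entries[cur], cmp_, off)
        if cmp_ == 'LT':
            high = cur
            break
        if cmp_ == 'GT':
            low = cur
        cur += 1
    if high - low > 1:
        # Ambiguous: fall back to full-key comparisons (a simplified stand-in
        # for FINDBITTREE, which needs only one dereference).
        for i in range(low + 1, high):
            c, _ = full_compare(search_key, full_keys[i])
            if c == 'EQ':
                return i, i
            if c == 'LT':
                return low, i
            low = i
    return low, high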
Maintaining Partial-Key Information in the Presence of Up-
dates. With the linear encoding strategy, maintaining the partial-
key information is quite straightforward. Insertion of a new key in
the node requires the partial keys of the inserted key and the key
following it to be recomputed, while deletion requires only the partial
key for the key following the deleted key to be recomputed.
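At the node level this rule amounts to recomputing at most two entries per insert and one per delete; a small illustrative sketch (bit-string keys, with the base key of position 0 supplied by the caller):

def recompute(key, base, l):
    # Per-key encoding step: difference-bit offset plus the next l bits.
    off = next(i for i, (a, b) in enumerate(zip(key, base)) if a != b)
    return (off, key[off + 1: off + 1 + l])

def insert_key(keys, entries, i, new_key, base_of_first, l):
    """Insert new_key at position i and repair the partial keys of the
    inserted key and of its successor (the only entries whose base changes)."""
    keys.insert(i, new_key)
    base = base_of_first if i == 0 else keys[i - 1]
    entries.insert(i, recompute(new_key, base, l))
    if i + 1 < len(keys):
        entries[i + 1] = recompute(keys[i + 1], new_key, l)

def delete_key(keys, entries, i, base_of_first, l):
    """Delete the key at position i; only its successor's partial key changes."""
    del keys[i], entries[i]
    if i < len(keys):
        base = base_of_first if i == 0 else keys[i - 1]
        entries[i] = recompute(keys[i], base, l)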
4. PARTIAL-KEY TREES
Building on the partial-key comparison and single-node partial-
search algorithms presented in the previous section, we now
discuss how partial keys can improve performance in main-memory
index structures, by reducing the L2 cache miss rate. In particular,
we present partial-key variants of the T-tree [17] and B-tree [3] index
structures suitable for use in main-memory. The pkT-ttree and
pkB-tree, as we refer to them, extend their main-memory counter-parts
by representing keys by their partial-key information as described
in Section 3. The linear encoding scheme described in the
previous section is used to compute partial keys for the index keys
in a node. With linear encoding at the node level, we thus only need
to specify the base key with respect to which the first key in each
node is encoded since the base key for every other key in the node
is simply the key preceding it.
4.1 pkT-tree
The T-tree is a balanced binary tree with multiple keys stored
in each node. The leftmost and the rightmost key value in a node
define the range of key values contained in the node. Balancing is
handled as for AVL trees [1]. We refer the reader to [17] and [26]
for additional information about T-trees, including details of update
strategies and concurrency control. The pkT-tree is similar to the T-tree
except that in addition to a pointer to the data record containing
the full key value, each index key entry also contains partial-key in-
formation. In the following, N:ptr[0] and N:ptr[1] denote pointers
to the left and right children of node N .
Storing Partial-Key Information. For the first key in each node
N , the base key with respect to which the partial key is computed is
the first key in the parent node. This is because, as described below,
the leftmost key in the parent node is the key with which the search
key is compared before node N is visited. Figure 6(a) depicts an
example pkT-tree - in the figure, the solid arrows denote the base
for index keys while the dashed arrows represent pointers to
child nodes.
procedure FINDTTREE(searchKey, T)
begin
1. N := root of tree T;
2. laN := nil;
3. offset := 0;
4. comp := GT;
5. while (N != nil) f
6. comp, offset := COMPAREPARTKEY(searchKey, N.key[0], comp, offset);
7. if (comp = EQ)
8.   dereference N.key[0] /* cache miss */
9.   comp, offset := compare(searchKey, N.key[0]);
10. if (comp = EQ)
11.   return [N, 0, 0, offset]
12. else if (comp = LT)
13.   N := N.ptr[0];
14. else
15.   laN := N;
16.   laNOffset := offset;
17.   N := N.ptr[1];
18. }
19. return [laN, FINDNODE(laN - laN[0], searchKey, laNOffset)];

Figure 7: FINDTTREE: Searching for a Key in a T-tree Using Partial Keys.
Searching for Key. Searching for a key value in the pkT-tree is
relatively straightforward and is performed as described in procedure
FINDTTREE (see Figure 7). Procedure FINDTTREE includes
an optimization from [17] which requires that at each node, the
search key be compared to only the leftmost key value in the node
(Step 6). The variables comp and offset keep track of the results
of the most recent comparison of the search key with the leftmost
key in the parent node, and are passed as parameters to COM-
PAREPARTKEY. In case COMPAREPARTKEY cannot resolve the
comparison, the leftmost key in the current node is dereferenced
(Step 8). If the search key is found to be less than this key, the
search proceeds with the left subtree. If it is found to be greater,
the search proceeds with the right subtree, with the current node
noted in variable laN (Step 15). The significance of laN is that
when the search reaches the bottom of the tree, the search key, if
present in the tree, is in the node stored in laN and is greater than
laN[0]. Procedure FINDNODE can thus be employed in order to
determine the position of the search key in node laN. Since FIND-
NODE requires the leftmost key to be greater than the base key for
it, laN[0] is deleted from laN before passing it as an input parameter
to FINDNODE.
Figure 6: pkT-trees and pkB-trees ((a) pkT-tree, (b) pkB-tree)

Maintaining Partial-Key Information in the Presence of Updates. Inserts and deletes of keys into the pkT-tree can cause rotations, movement of keys between nodes and insertions/deletions of keys from nodes. The partial-key information for these cases is
updated as follows.
- In the case of a rotation, the parent of a node involved in the
rotation may change. Thus, the partial-key information for
the leftmost key in the node is recomputed with respect to
the new parent.
- If the leftmost key in a node changes, the partial-key information
is recomputed for the leftmost keys in the node and
its two children.
- In case the key to the left of a key in a node changes (due to
a key being inserted or deleted), the partial-key information
for the key is recomputed relative to the new preceding key.
4.2 pkB-Tree
The pkB-tree is identical to a B-tree except for the structure of
index keys. Each index key consists of a pointer to the data record
for the key and partial-key information. Leaf nodes contain only index keys, while internal nodes also contain N:numKeys+1 pointers to index nodes - the subtree pointed to by N:ptr[i] contains keys between N:key[i-1] and N:key[i].
Storing Partial-Key Information. The base key for the leftmost
key in N is the largest key contained in an ancestor of N that is less
than the leftmost key. Thus, if N' is the node such that N':ptr[i], i >= 1, points to N or one of its parents, and for all N'' in the path from N' to N, N'':ptr[0] points to N or one of its parents, then the base key relative to which N:key[0] is encoded is N':key[i-1].
This is illustrated in Figure 6(b), where the solid arrows denote the
base keys for index keys of the pkB-tree and the dashed arrows represent
pointers to child nodes.
Searching for Key. Procedure FINDBTREE, in Figure 8, contains
the code for searching for a key in a pkB-tree. Beginning with the
root node, for each node, procedure FINDNODE is invoked to determine
the child node to be visited next during the search. The
variable offset stores the offset of the difference bit between the
search key and the base key for N:key[0] (that is, the largest key in
an ancestor of N that is also less than N:key[0]). This is because
FINDNODE simply returns the offset input to it if the search key is
less than N:key[0].
Maintaining Partial-Key Information in the Presence of Up-
dates. An insert operation causes a key to be inserted in a leaf
node of the pkB-tree. If the key is inserted at the leftmost position
in the leaf, then its partial key needs to be computed relative to the
largest key that is less than it in its ancestor. On the other hand,
if there are keys to the left of it in the node, then its partial key is
procedure FINDBTREE(searchKey, T)
begin
1. N := root of tree T;
2. pN := nil;
3. offset := 0;
4. while (N != nil) {
5.   pN := N;
6.   low, high, offset := FINDNODE(N, searchKey, offset);
7.   if (low = high)
8.     return [N, low, high]
9.   N := N.ptr[high]
10. }
11. return [pN, low, high];

Figure 8: FINDBTREE: Searching for a Key in a B-tree Using Partial Keys.
computed easily relative to its preceding key and the partial key of
the next key is computed with respect to it. In case node N is split,
the splitting key from N is inserted into its parent. Splits can thus
be handled by simply updating the partial-key information in the
parent similar to the key insertion case.
Key deletion from a pkB-tree is somewhat more complicated. In
case the leftmost key in a leaf is deleted, the partial-key information
for it needs to be recomputed from its base key in its ancestor.
Deletion of a non-leftmost key in the leaf simply requires the partial
key for the key following it to be recomputed. Finally, deletion
of a key N:key[i] from an internal node N of the pkB-tree causes
it to be replaced with the smallest key in the subtree pointed to by N:ptr[i+1]. Suppose N' is the node containing this key. Then for every node N'' between N and N', N'':key[0]'s partial key is recomputed with this smallest key (that replaces the deleted key N:key[i])
as the base key.
5. PERFORMANCE
The goal of our performance study was to compare the lookup
performance of T-trees, B-trees, pkT-trees and pkB-trees in a main-memory
setting. Our particular goals were as follows: (1) Study
performance over a wide range of key sizes and key value distri-
butions. (2) Evaluate the impact of changing the amount of partial
information used for pkT- and pkB-trees. (3) Evaluate space
usage and the space-time tradeoff. In subsequent sections we describe
our hardware platforms, the design of our experiments and
present selected results.
5.1 The Memory Hierarchy
The latencies observed during memory references depend primarily
on whether the data is present in cache and whether the vir-
System         CPU Cycle Time   L1 (data): Size / Block / Latency   L2 (data): Size / Block / Latency   DRAM: L2 Miss Latency
Sun ULTRA      3.7ns            16K / 64 / 6ns                      2M / 64 / 33ns                      266ns
Sun ULTRA      2.2ns            16K / 64 / 4ns                      4M / 64 / 22ns                      208ns
Pentium III    1.7ns            16K / - / 5ns                       512K / - / 40ns                     142ns
Pentium IIIE   1.4ns            16K / - / 4ns                       256K / - / 10ns                     113ns

Table 2: Latency of Cache vs. Memory.
tual address is in the Translation Lookaside Buffer (TLB). 2
A modern main-memory architecture typically includes two levels
of cache, a small, fast, on-CPU L1 cache and a larger, off-CPU, 3
and therefore slower L2 cache. Typical parameters for cache and
memory speed are shown in Table 2 (see [19, 20, 7]). The latency
information is generated with version 1.9 of lmbench [18] on locally
available processors, and is intended to give the reader a feel
for current cache parameters, not as a comparison of these systems.
Another component of the memory hierarchy is the TLB, which
caches translations between virtual and physical addresses. While
TLB misses are shown in [5] to have a significant effect on performance, we do not focus on TLB issues in this paper. One justification
for this approach is the fact that almost all modern TLBs
are capable of using "superpages" [14], essentially allowing single
TLB entries to point to much larger regions. While posing difficulties
for operating system implementors [27], this facility may
effectively remove the TLB miss issue for main-memory databases
by allowing the entire database to effectively share one or two TLB
entries. While we do not focus on TLB effects, they were apparent
during our experimentation in the form of better performance when
index nodes span multiple cache lines (these results are not shown
due to limited space). Determining the effect of superpages on the
TLB costs of main-memory data structures remains future work.
5.2 Experimental Design
We implemented T-trees and B-trees for direct and indirect storage
of keys. We also implemented pkT-trees and pkB-trees, and
varied the size of the partial keys stored in the node. Our T-tree
algorithms are essentially those of Lehman and Carey, with the
optimization of performing only a single key-comparison at any
given level. In the direct key and partial-key variants, we stored en-
tire/partial key values only for the leftmost key in each node, used
during the initial traversal. While the T-tree code was adapted from
a system with additional support for concurrency control [26], next-key locking [21] and iterator-based scans, these features were not
exercised in our tests.
For the partial-key trees, we implemented two schemes for storing
offsets, bit-wise and byte-wise. The bitwise scheme was used
for the description in Section 3, since in this scheme the concepts
can be more clearly articulated. However, it may be more convenient
in an implementation to store difference information at a
larger granularity. In particular, we consider the byte granularity.
Clearly, all the results of Sections 3 and 4 hold when byte offsets
differ, since the bit offset will also differ in the same manner. How-
ever, when byte offsets are equal, it may still be the case that the
bit offsets would differ. In this case, one simply stores all the bits
at which the difference could occur, in other words, the entire byte.
Thus, if the offsets compare equal, the keys will be disambiguated
2 The physical characteristics of the memory module, especially repeated
access to the same page, may also be a factor, but are not
considered in this paper.
3 In the Pentium IIIE, the L2 cache, though relatively small, is on-chip
by the first byte. Storing offsets at a larger granularity trades off
distinguishing power for coding simplicity. With bit offsets one
always stores the precise l bits most capable of distinguishing the keys; otherwise, only some of those bits are stored.
Keys. We model keys as unique, fixed-length sequences of unsigned
bytes. Key comparisons are performed byte-wise in the context
of a separate function call. Indirect keys are stored in separate
L2 cache lines since they are typically retrieved from data records.
It is intuitive that partial keys would be sensitive to the distribution
of keys, and in particular to the entropy of the keys. Since
in our tests we generate bytes of the key independently, the entropy
for each byte depends only on the number of symbols from
which each byte is selected. Specifically, when each byte is selected
uniformly from an alphabet of n symbols, each byte contains lg n
bits of Shannon entropy [8]. Intuitively, keys with higher entropies
will be distinguished earlier during the compare, leading to lower
comparison costs. In terms of partial keys, lower entropy leads to
larger common prefixes, and a lower chance that two keys will differ within the l bytes stored after the common prefix.
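A sketch of this key-generation setup as we read it (illustrative only): drawing each byte uniformly from an alphabet of n symbols gives lg n bits of entropy per byte.

import math
import random

def gen_keys(count, key_len, alphabet_size, seed=0):
    """Unique fixed-length keys whose bytes are drawn uniformly from an
    alphabet of alphabet_size symbols."""
    rng = random.Random(seed)
    keys = set()
    while len(keys) < count:
        keys.add(bytes(rng.randrange(alphabet_size) for _ in range(key_len)))
    return sorted(keys)

# Per-byte entropy for the two alphabet sizes used in the experiments:
print(round(math.log2(12), 1), round(math.log2(220), 1))   # 3.6 7.8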
While, as mentioned in Section 3, partial-key trees may be used
with multi-part and variable-length keys, we did not implement
these options in our tests. We note that to some degree, not implementing
a wider variety of keys (and thus more expensive key com-
parisons) works against partial-key schemes, since these schemes
reduce the impact of key comparison costs in cases when the partial
key is sufficient. However, byte-wise comparisons may be
somewhat less efficient than, for example, single-instruction integer
comparisons. We selected byte-wise comparison as a reason-able
model of key comparison expense, and did not attempt to vary
this cost as an additional parameter in the current study.
Performance Metrics. We evaluated the various indices based on
the following three performance metrics: wall-time, number of L2
cache misses and storage space requirements. The number of L2
cache misses was measured using special registers available on the UltraSPARC
via the PerfMon software [10].
Parameter Settings. Unless otherwise stated, each index node
spanned three L2-cache blocks for a total of 192 bytes, each index
stored 1M keys, and keys were chosen uniformly and at random,
but rejected if they were not unique. Three cache blocks were chosen
because that size could handle larger in-node key sizes and, in
our experiments (not shown) performed comparably or better than
smaller or larger node sizes for all of the studied algorithms. 1M
records represented the largest size our machine could easily hold
in memory - a relatively large number of records is required to see
the effect of L2 cache misses on timing numbers. For most tests,
we present results for two choices for the byte entropy for the generated
keys: 3.6 bits and 7.8 bits, corresponding to alphabet sizes
of 12 and 220, respectively, though in the actual experiments we
considered a wide variety of entropies in-between. In most of the
experiments, key size was the independent parameter. We fixed the
Figure 9: Time and L2 Cache Performance of Various Key Strategies, High and Low Entropy (both panels plot time per lookup in usec against L2 misses per lookup)
size l of partial keys stored at 2 bytes and stored offsets at byte granularity, since we found partial-key trees to perform optimally
or near-optimally for these choices.
Hardware Environment. Our experiments were conducted on a
Sun Ultra workstation with a 296 MHz UltraSPARC II processor and 256 Megabytes of RAM. As shown in Table 2, this machine has a 16K L1 data cache and a 2M direct-mapped L2 cache, both with a 64 byte block size. The latencies shown by
lmbench [18] are 6ns or 2 cycles for the L1 cache, 33ns or 11 cycles
for L2, and 266 ns or 88 cycles for main memory. We implemented
the index structures using Sun's C++ compiler version 4.2 and optimization
level -O3. In our experiments, we ensured that all virtual
memory accessed during the runs was resident in RAM.
Experimental Runs. In most cases, a run consisted of 100,000
lookups from a (pregenerated) list of randomly selected keys from
the tree. All searches were successful. Each run was repeated 10
times and averaged. We ensured that the overall standard deviation
on time was very low (less than 1%). All figures shown in this
document are for a tree with 1.5 million elements, the maximum
that fit into main memory on the platform.
5.3 Selected Results
Index performance, in a main-memory environment, is dominated
by CPU costs of performing key comparisons and cache miss
costs. Thus, it is reasonable that B-trees with direct key storage will
perform better than partial-key trees for small keys, since space usage
is comparable to the space required for the partial key, and
our partial-key comparison code is somewhat more expensive than
simple byte-wise key compares. However, as keys become longer,
B-tree performance can be expected to become worse than partial-
trees due to a lower branching factor and higher key comparison
costs at low byte-entropies. In all cases, we expect indirect
indexes to perform poorly in comparison with direct indexes of the
same data structure, because indirect indexes will require an extra cache miss to perform each comparison. These expectations are
confirmed by our experiments.
Figure
9 summarizes the experimental results for all indexing
schemes, on data sets with 1.5 million elements. The Y-axis shows
the number of L2 cache misses; the X-axis shows the average time
of a lookup in microseconds. Plots are parametric in key size, with
key sizes 8, 12, 20, 28, and 36 bytes; the high entropy case has an
additional point at size 4. Figure 9(a) shows behavior for low
entropy, with entropy per byte of 3.6. Figure 9(b) shows the same
experiment, run with entropy 7.8. For a given key size and entropy,
down and left defines improved performance. Performance is thus
a partial ordering, where one algorithm outperforms another if for
all values of key size and entropy value, the algorithm is faster and
has fewer cache misses. Using the metrics of cache misses and
lookup time, we make the following observations.
- pkB-trees consistently outperform the other algorithms in L2 cache misses.
- Direct B-trees outperform the other algorithms in time for small key sizes, as would hold for integer keys.
- Direct T-trees outperform the other algorithms in time for large key sizes, very slightly outperforming pkB-trees.
- Direct T-trees and indirect B-trees have essentially the same cache performance. This occurs because direct T-trees suffer additional misses due to their greater number of tree levels, while indirect B-trees suffer about the same number due to key dereferencing.
- Indirect T-trees perform poorly compared to all other strategies, primarily due to cache misses from both tree levels and dereferencing.
- For all key sizes, the cache-miss behavior of partial-key trees is as good as that of the corresponding tree structure with direct storage of 4 byte keys.
One of the reasons that the superior cache-miss characteristics of
partial-key trees do not always translate into better timing numbers
(especially for smaller key sizes) is that other factors like CPU
costs for performing key comparisons, etc. are a significant component
of the overall performance. However, based on the cache-miss
statistics, we expect that the performance of partial-key trees will
improve relative to trees with direct key storage as long as processor
speeds improve more quickly than main-memory latency.
Choice of l. Larger values of l are necessary when entropy is low,
because sufficient entropy must be present in the partial key to have
a high probability that it will differ from the corresponding bytes
of the search key. (In general, random keys should have length
l >= 2 lg2(N)/H, where N is the number of keys and H the entropy per byte, to ensure that no two keys collide; most keys will be disambiguated at length lg2(N)/H [8].) One can see that key-
wise difference information adapts to low entropy keys: when keys
Figure 10: Varying Partial-Key Size and Time-Space Tradeoffs ((a) time per lookup in usec vs. partial key length in bytes, for bit- and byte-granularity offsets in pkB- and pkT-trees; (b) time per lookup in usec vs. overhead per key in bytes)
have low entropy, adjacent keys are likely to have larger common
prefixes. Further, increasing l adversely affects the branching factor
in nodes, thus there is a tradeoff between reducing cache misses
by avoiding references to indirect keys and reducing cache misses
with bushier, and thus shallower, trees. We investigated these issues
by running experiments with a wide variety of key entropies
and values for l. In this experiment, the keys have relatively low
entropy (3.6 bits per byte), but the results are similar over a wide
range of entropy values, and from this we expect partial keys to
perform well over a wide variety of key distributions. In fact, performance
is almost always optimal with small values of l (2 or 4 bytes), due to the efficacy of storing difference offsets.
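Plugging the experimental parameters into the bound quoted above gives a feel for these numbers; the back-of-the-envelope check below uses our reading of the bound, with N = 2^20 keys and the two per-byte entropies used in the experiments.

import math

N = 2 ** 20                      # roughly the 1M keys used in the experiments
for H in (3.6, 7.8):             # bits of entropy per byte
    no_collisions = 2 * math.log2(N) / H
    most_distinct = math.log2(N) / H
    print(H, round(no_collisions, 1), round(most_distinct, 1))
# H = 3.6 -> about 11.1 and 5.6 bytes; H = 7.8 -> about 5.1 and 2.6 bytes,
# so a 2-4 byte partial key already separates most adjacent high-entropy keys,
# while for low-entropy keys the stored difference offset does most of the work.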
Storing zero bytes of key information is a special case which reduces
to an algorithm similar to the Bit Tree [11], but generalized
to handle internal levels of the tree and incur fewer cache misses.
While this option did not perform as well as l > 0, our experiments confirm the following intuition: storing differences at the bit level is important in order to increase distinguishing power.
Space Usage. Space overhead is a critical attribute of a main-memory
index. In Figure 10(b), we show the space-time trade-off
of different algorithms for a variety of key sizes. In this graph, the
X-axis is space and the Y -axis is lookup time (the lower left-hand
corner is optimal). The key size parameter varies from 0 to 8
bytes. The space numbers are obtained from the tree built by random
insertions of 1M keys. We see from this graph that indirect
storage, while a poor time performer, excels in space. How-
ever, schemes with direct key storage trade space for time, with
storage overheads that increase significantly with key size. Again,
pkT- and pkB-trees provide a nice tradeoff, taking approximately
twice the space of indirect storage for all key sizes, but less space
than direct-storage B-trees for all key sizes greater than 4.
6. CONCLUSIONS AND FUTURE WORK
In this paper, we have introduced two new index structures, pkT-
and pkB-trees, designed to optimize the space, time and cache-miss
performance of indices in main-memory, OLTP databases. These
index structures are based on partial keys, small fixed-size representations
of keys which allow index nodes to retain a simple struc-
ture, improve their branching factor and speed up key comparisons,
yet resolve most key comparisons without reference to indirectly
stored keys. In our performance study, we found that partial-key
trees perform better than B-Trees (in which keys are stored directly
in the node) for keys larger than 12-20 bytes, depending on key
distribution. Further, the partial-key trees incur fewer cache misses
than B-Trees with all but the smallest key sizes, leading to an expectation
that the performance of pkB-Trees relative to B-trees will
improve over time as the gap between processor and main memory
speeds widens, causing the penalty for a cache miss to become more severe. Fi-
nally, pkB-Trees take up much less space than standard B-trees for
all but the smallest trees.
While pkT-trees and direct T-trees perform well, pkB-trees perform better and are only slightly larger. However, we expect that over time T-trees will be replaced with variations of the B-tree in main-memory databases, because of the B-tree's dramatically better L2 cache behavior. For optimal performance on all key sizes, our performance
results lead one to consider a hybrid approach in which
direct storage is used for small, fixed-length keys and partial-key
representations are used for larger and variable-length keys.
In future work, we intend to explore other ways in which architectural
trends affect performance-critical main-memory DBMS
code. One such trend is the increasing availability of instruction-level
parallelism, and a related trend is the increasing cost of branch
misprediction and other "pipeline bubbles". A second trend is the
availability of "superpages" for TLBs which may significantly reduce
the TLB cost of in-memory algorithms.
7. REFERENCES
--R
The Design and Analysis of Computer Algorithms.
storage manager: Main memory database performance for critical applications.
Organization and maintenance of large ordered indexes.
Prefix B-trees
Database architecture optimized for the new bottleneck: Memory access.
Improving pointer-based codes through cache-conscious data placement
Pentium III processor for the SC242 at 450 MHz to 800 MHz datasheet.
Elements of Information Theory.
Implementation techniques for main memory database systems.
Perfmon user's guide.
Main memory database systems: An overview.
"Compressing Relations and Indexes"
Virtual memory in contemporary microprocessors.
Dali: A high performance main-memory storage manager
An evaluation of Starburst's memory resident storage component.
A study of index structures for main memory database management systems.
Portable tools for performance analysis.
The Ultra
Ultra workstation datasheet.
ARIES/KVL: A key-value locking method for concurrency control of multiaction transactions operating on Btree indexes
Microsoft COM
Cache conscious indexing for decision-support in main memory
Making B+- trees cache conscious in main memory
Logical and physical versioning in main-memory databases
Reducing TLB and memory overhead using online superpage promotion.
Design and Modelling of a Parallel Data Server for Telecom Applications.
The TimesTen Team.
--TR
Elements of information theory
Bit-Tree: a data structure for fast file processing
Reducing TLB and memory overhead using online superpage promotion
storage manager
In-memory data management for consumer transactions the timesten approach
Prefix <italic>B</italic>-trees
Making B+- trees cache conscious in main memory
The Design and Analysis of Computer Algorithms
Implementation techniques for main memory database systems
Virtual Memory in Contemporary Microprocessors
Main Memory Database Systems
An Evaluation of Starburst''s Memory Resident Storage Component
Block-Oriented Compression Techniques for Large Statistical Databases
Compressing Relations and Indexes
Logical and Physical Versioning in Main Memory Databases
A Study of Index Structures for Main Memory Database Management Systems
Cache Conscious Indexing for Decision-Support in Main Memory
Database Architecture Optimized for the New Bottleneck
--CTR
Peter Bumbulis , Ivan T. Bowman, A compact B-tree, Proceedings of the 2002 ACM SIGMOD international conference on Management of data, June 03-06, 2002, Madison, Wisconsin
Bin Cui , Beng Chin Ooi , Jianwen Su , Kian-Lee Tan, Contorting high dimensional data for efficient main memory KNN processing, Proceedings of the ACM SIGMOD international conference on Management of data, June 09-12, 2003, San Diego, California
Phil Garcia, Multithreaded architectures and the sort benchmark, Proceedings of the 1st international workshop on Data management on new hardware, June 12-12, 2005, Baltimore, Maryland
B. Barla Cambazoglu , Cevdet Aykanat, Performance of query processing implementations in ranking-based text retrieval systems using inverted indices, Information Processing and Management: an International Journal, v.42 n.4, p.875-898, July 2006
Inga Sitzmann , Peter J. Stuckey, Compacting discriminator information for spatial trees, Australian Computer Science Communications, v.24 n.2, p.167-176, January-February 2002
Main Memory Indexing: The Case for BD-Tree, IEEE Transactions on Knowledge and Data Engineering, v.16 n.7, p.870-874, July 2004
Jingren Zhou , John Cieslewicz , Kenneth A. Ross , Mihir Shah, Improving database performance on simultaneous multithreading processors, Proceedings of the 31st international conference on Very large data bases, August 30-September 02, 2005, Trondheim, Norway
Ke Wang , Yabo Xu , Jeffrey Xu Yu, Scalable sequential pattern mining for biological sequences, Proceedings of the thirteenth ACM international conference on Information and knowledge management, November 08-13, 2004, Washington, D.C., USA
Bin Cui , Beng Chin Ooi , Jianwen Su , Kian-Lee Tan, Indexing High-Dimensional Data for Efficient In-Memory Similarity Search, IEEE Transactions on Knowledge and Data Engineering, v.17 n.3, p.339-353, March 2005
Shimin Chen , Phillip B. Gibbons , Todd C. Mowry , Gary Valentin, Fractal prefetching B+-Trees: optimizing both cache and disk performance, Proceedings of the 2002 ACM SIGMOD international conference on Management of data, June 03-06, 2002, Madison, Wisconsin
Richard A. Hankins , Jignesh M. Patel, Effect of node size on the performance of cache-conscious B+-trees, ACM SIGMETRICS Performance Evaluation Review, v.31 n.1, June
Shimin Chen , Phillip B. Gibbons , Todd C. Mowry, Improving index performance through prefetching, ACM SIGMOD Record, v.30 n.2, p.235-246, June 2001
Bingsheng He , Qiong Luo, Cache-oblivious nested-loop joins, Proceedings of the 15th ACM international conference on Information and knowledge management, November 06-11, 2006, Arlington, Virginia, USA
Jeong Min Shim , Seok Il Song , Jae Soo Yoo , Young Soo Min, An efficient cache conscious multi-dimensional index structure, Information Processing Letters, v.92 n.3, p.133-142, 15 November 2004 | key compression;b-trees;cache coherence;main-memory indices;t-tree |
375684 | Efficient and effective metasearch for text databases incorporating linkages among documents. | Linkages among documents have a significant impact on the importance of documents, as it can be argued that important documents are pointed to by many documents or by other important documents. Metasearch engines can be used to help ordinary users retrieve information from multiple local sources (text databases). There is a search engine associated with each database. In a large-scale metasearch engine, the contents of each local database are represented by a representative. Each user query is evaluated against the set of representatives of all databases in order to determine the appropriate databases (search engines) to search (invoke). In previous work, the linkage information between documents has not been utilized in determining the appropriate databases to search. In this paper, such information is employed to determine the degree of relevance of a document with respect to a given query. Specifically, the importance (rank) of each document as determined by the linkages is integrated in each database representative to facilitate the selection of databases for each given query. We establish a necessary and sufficient condition to rank databases optimally, while incorporating the linkage information. A method is provided to estimate the desired quantities stated in the necessary and sufficient condition. The estimation method runs in time linearly proportional to the number of query terms. Experimental results are provided to demonstrate the high retrieval effectiveness of the method. | The Internet has become a vast information resource in recent years. To help ordinary users find
desired data in this environment, many search engines have been created. Each search engine has
a text database that is defined by the set of documents that can be searched by the search engine.
Usually, an inverted file index for all documents in the database is created and stored in the search
engine. For each term which can represent a significant word or a combination of several (usually
adjacent) significant words, this index can identify quickly the documents that contain the term.
Frequently, the information needed by a user is stored in the databases of multiple search
engines. As an example, consider the case when a user wants to find research papers in some
subject area. It is likely that the desired papers are scattered in a number of publishers' and/or
universities' databases. Substantial effort would be needed for the user to search each database
and identify useful papers from the retrieved papers. From a different perspective, as more and
more data are put on the Web at faster paces, the coverage of the Web by any single search
engine has been steadily decreasing. A solution to these problems is to implement a metasearch
engine on top of many local search engines. A metasearch engine is just an interface. It does not
maintain its own index on documents. However, a sophisticated metasearch engine may maintain
representatives which provide approximate contents of its underlying search engines in order to
provide better service. When a metasearch engine receives a user query, it first passes the query
to the appropriate local search engines, and then collects (sometimes, reorganizes) the results from
its local search engines. With such a metasearch engine, only one query is needed from the above
user to invoke multiple search engines.
A closer examination of the metasearch approach reveals the following problems. If the number
of local search engines in a metasearch engine is large, then it is likely that for a given query, only
a small percentage of all search engines may contain sufficiently useful documents to the query. In
order to avoid or reduce the possibility of invoking useless search engines for a query, we should
first identify those search engines that are most likely to provide useful results to the query and
then pass the query to only the identified search engines. Examples of systems that employ this
approach include WAIS [12], ALIWEB [15], gGlOSS [8], SavvySearch [10], ProFusion [7, 6] and
D-WISE [36]. The problem of identifying potentially useful databases to search is known as the
database selection problem. Database selection is done by comparing each query with all database
representatives. If a user only wants the m most desired documents across all local databases, for
some positive integer m, then the m documents to be retrieved from the identified databases need
to be specified and retrieved. This is the collection fusion problem.
In this paper, we present an integrated solution to the database selection problem and the
collection fusion problem, while taking into consideration the linkages between documents. In the
Web environment, documents (web pages) are linked by pointers. These linkages can indicate the
degrees of importance of documents. It may be argued that an important document is pointed to by
many documents or by important documents. For example, the home page of IBM is an important
document, as there are numerous documents on the Web pointing to it. Consider a query which
consists of a single term "IBM". There could be thousands of documents in the Internet having
such a term. However, it is likely that the user is interested in the home page of IBM. One way to
retrieve that home page is to recognize that among all the documents containing the term "IBM",
the home page of IBM is the most important one due to the numerous links pointing to it. This
phenomenon has been utilized in [22, 14] for retrieving documents in a centralized environment. In
this paper, we generalize the use of linkages in distributed text database environments to tackle the
database selection problem and the collection fusion problem. We believe that this is the first time
that the linkage information is utilized in distributed database environments involving a metasearch
engine. Our techniques for retrieval in a distributed environment yield retrieval performance that
is close to that as if all documents were stored in one database.
The rest of the paper is organized as follows. In Section 2, we incorporate the linkage information
into a similarity function so that the degree of relevance of a document to a query is determined
by both its similarity to the query as well as its degree of importance. In Section 3, we sketch our
solutions to the database selection problem and the collection fusion problem. In Section 4, we
describe the construction of the representative of a database, which indicates approximately the
contents of the database. More importantly, the representative permits the degree of relevance of
the most relevant document in the database to be estimated. This estimated quantity is central
to the database selection problem and the collection fusion problem. An estimation process is
presented. In Section 5, experimental results are provided. Very good performance is obtained for
the integrated solution of the database selection and the collection fusion problems. The conclusion
is given in Section 6.
1.1 Related Works
In the last several years, a large number of research papers on issues related to metasearch engines
or distributed collections have been published. In this subsection, we first identify the critical
difference between our work here and existing works. It is followed by other differences which are
individually identified. Only the most closely related differences are presented. Please see [21] for
a more comprehensive review of other work in this area.
1. We are not aware of any solution utilizing the linkage information among documents in
solving the database selection problem and the collection fusion problem in a metasearch
engine environment, although such information has been utilized [22, 14] in determining
the importance of documents and in their retrieval in a single search engine environment.
Thus, this is the first time that such information is utilized in a distributed text database
environment.
2. Our earlier work [32] utilizes the similarity of the most similar document in each database to
rank databases. However, in [32], we stated the condition as sufficient for ranking databases
optimally. It turns out the condition is both necessary and sufficient and is generalized in
this paper to incorporate the linkage information among documents.
3. The gGlOSS project [8] is similar in the sense that it ranks databases according to some mea-
sure. However, there is no necessary and sufficient condition for optimal ranking of databases;
there is no precise algorithm for determining which documents from which databases are to
be returned.
4. A necessary and sufficient condition for ranking databases optimally was given in [13]. How-
ever, [13] considered only the databases and queries that are for structured data. In contrast,
unstructured text data is considered in this paper. In [2], a theoretical framework was provided
for achieving optimal results in a distributed environment. Recent experimental results
reported in [3] show that if the number of documents retrieved is comparable to the number
of databases, then good retrieval effectiveness can be achieved; otherwise, there is substantial
deterioration in performance. We show good experimental results in both situations [33]. Our
theory differs from that given in [2] substantially.
5. In [29], experimental results were given to demonstrate that it was possible to retrieve documents
in distributed environments with essentially the same effectiveness as if all data were
in one site. However, the results depended on the existence of a training collection which has
similar coverage of subject matters and terms as the collection of databases to be searched.
Upon receiving a query, the training collection is searched, terms are extracted and then
added to the query before retrieval of documents from the actual collection takes place. In
the Internet environment where data are highly heterogeneous, it is unclear whether such a
training collection can in fact be constructed. Even if such a collection can be constructed,
the storage penalty could be very high in order to accommodate the heterogeneity. In [30],
it was shown that by properly clustering documents, it was possible to retrieve documents in
distributed environments with essentially the same effectiveness as in a centralized environ-
ment. However, in the Internet environment, it is not clear whether it is feasible to cluster
large collections and to perform re-clustering for dynamic changes. Our technique does not
require any clustering of documents. In addition, linkage information was not utilized in
[30, 29].
2 Incorporating Linkage Information into a Similarity Function
A query in this paper is simply a set of words submitted by a user. It is transformed into a vector
of terms with weights [23], where a term is essentially a content word and the dimension of the
vector is the number of all distinct terms. When a term appears in a query, the component of
the query vector corresponding to the term, which is the term weight, is positive; if it is absent,
the corresponding term weight is zero. The weight of a term usually depends on the number of
occurrences of the term in the query (relative to the total number of occurrences of all terms in
the query) [23, 34]. This is the term frequency weight. The weight of a term may also depend
on the number of documents having the term relative to the total number of documents in the
database. The weight of a term based on such information is called the inverse document frequency
weight [23, 34]. A document is similarly transformed into a vector with weights. The similarity
between a query and a document can be measured by the dot product of their respective vectors.
Often, the dot product is divided by the product of the norms of the two vectors, where the norm
of a vector v = (v_1, v_2, ..., v_n) is sqrt(v_1^2 + v_2^2 + ... + v_n^2). This is to normalize the similarity between 0 and 1. The
similarity function with such a normalization is known as the Cosine function [23, 34]. For the
sake of concreteness, we shall use in this paper the version of the Cosine function [28] where the
query uses the product of the inverse document frequency weight and the term frequency weight
and the document uses the term frequency weight only. Other similarity functions, see for example
[26], are also possible.
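To make the weighting concrete, the following small sketch (ours; the exact tf and idf formulas are assumptions, since several variants exist) computes the Cosine similarity with tf*idf weights on the query side and tf weights on the document side:

import math

def cosine(query_tf, doc_tf, df, n_docs):
    # query_tf, doc_tf: term -> term frequency; df: term -> document frequency
    q = {t: f * math.log(n_docs / df[t]) for t, f in query_tf.items() if df.get(t)}
    dot = sum(w * doc_tf.get(t, 0.0) for t, w in q.items())
    qn = math.sqrt(sum(w * w for w in q.values()))
    dn = math.sqrt(sum(v * v for v in doc_tf.values()))
    return dot / (qn * dn) if qn and dn else 0.0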
We first define a function R which assigns the degree of relevance of a document d with respect
to a query q based on two factors: one based on the similarity between the document d and the
query q, and the other based on the rank (degree of importance) of the document. The rank of a
document in a Web environment usually depends on the linkages between the document and other
documents. For example, if a document d is linked to by many documents and/or by important
documents, then document d is an important document. Therefore, d will have a high rank. This
definition is recursive and an algorithm is given in [22] to compute the ranks of documents. The
following function incorporates both similarity and rank.
R(d, q) = w * sim(q, d) + (1 - w) * NRank(d) if sim(q, d) > 0, and R(d, q) = 0 otherwise.    (1)
where sim() is a similarity function such as the Cosine function, NRank(d) is the normalized
rank of document d and w is a parameter between 0 and 1. In order to avoid the retrieval of
very important documents (i.e., documents with high normalized ranks) which are unrelated to
the query, the degree of relevance of a document is greater than zero only if its similarity with the
query is greater than zero. The normalized rank of a document can be obtained from the rank
computed in [22] by dividing it by the maximum rank of all documents in all databases, yielding a
value between 0 and 1. The higher the normalized rank a document has, the more important it is.
When it takes on the value 1, it is the most important document. The intuition for incorporating
rank is that if two documents have the same or about the same similarity with a query, then a user
is likely to prefer the more important document, i.e., the document with a higher rank. We assume
that the normalized ranks have been pre-computed and we are interested in finding the m most
relevant documents, i.e., the m documents with the highest degrees of relevance with respect to a
given query as defined by formula (1) in a distributed text database environment. This involves
determining the databases containing these m most relevant documents and determining which
documents from these databases need to be retrieved and transmitted to the user. In the next
section, we shall present our solution to the database selection problem and the collection fusion
problem.
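A minimal sketch of the combination in formula (1) (our own code; w = 0.8 is the value used later in the experiments):

def degree_of_relevance(sim, nrank, w=0.8):
    # Formula (1): a document with zero similarity gets degree of relevance 0,
    # no matter how important (highly ranked) it is.
    return w * sim + (1 - w) * nrank if sim > 0 else 0.0

# e.g. degree_of_relevance(0.25, 0.9) = 0.8*0.25 + 0.2*0.9 = 0.38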
3 Two-Level Architecture of Metasearch
In this architecture, the highest level (the root node) contains the representative for the "global
database". The global database which logically contains all documents from all local databases does
not exist physically. (Recall that all documents searchable by a search engine form a local database.
Although we call it a "local" database, it may contain documents from multiple locations.) The
representative for the global database contains all terms which appear in any of the databases and
for each such term, it stores the number of documents containing the term (i.e., the global document
frequency). This permits the global inverse document frequency weight of each query term to be
computed. There is only one additional level in the hierarchy. This level contains a representative
for each local database. The representative of each local database will be defined precisely in the
next section. When a user query is submitted, it is processed first by the metasearch engine against
all these database representatives. The databases are then ranked. Finally, the metasearch engine
invokes a subset of the search engines and co-ordinates the retrieval and merging of documents
from individual selected search engines.
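For illustration only (our sketch; the paper does not specify the exact idf formula at this point, so the logarithmic form below is an assumption), the global document frequencies kept at the root can be converted into global inverse document frequency weights as follows:

import math

def global_idf_weights(global_df, total_docs):
    # global_df: term -> number of documents containing the term across all
    # local databases, as stored in the root representative
    return {t: math.log(total_docs / df) for t, df in global_df.items() if df > 0}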
3.1 Optimal Ranking of Databases
We assume that a user is interested in retrieving the m most relevant documents to his/her query
for a given m.
Definition 1 A set of databases is said to be optimally ranked in the order [D_1, D_2, ..., D_n] with
respect to a given query q if for every positive integer m, there exists a k such that D_1, D_2, ..., D_k
contain the m most relevant documents and each D_i, 1 <= i <= k, contains at least one of the m
most relevant documents.
A necessary and sufficient condition to rank databases optimally is as follows. For ease of
presentation, we shall assume that all degrees of relevance of the documents with the query are
distinct so that the set of the m most relevant documents to the query is unique.
Proposition 1 Databases [D_1, D_2, ..., D_n] are ranked optimally if and only if the degree of relevance
of the most relevant document in D_i is larger than that of the most relevant document in D_{i+1},
for i = 1, 2, ..., n-1.
Proof:
Sufficiency: Let R_i be the degree of relevance of the most relevant document in database D_i.
Suppose R_1 > R_2 > ... > R_n. We need to show that [D_1, D_2, ..., D_n] is an optimal order. We
establish by induction that for any given m, there exists a k such that D_1, ..., D_k are the only
databases containing the m most relevant documents with each of them containing at least one
such document.
For m = 1, D_1 contains the most relevant document, since R_1 is the largest. Thus, k = 1.
For m = i, suppose D_1, ..., D_s contain the i most relevant documents with each of them containing
at least one of the i most relevant documents. When m = i + 1, consider the (i + 1)-th most
relevant document. It appears either in one of the databases D_1, ..., D_s or in one of the remaining
databases. In the former case, D_1, ..., D_s contain all of the i + 1 most relevant documents and
k = s. In the latter case, the (i + 1)-th most relevant document must appear in D_{s+1}, because
R_{s+1} is the largest among R_{s+1}, R_{s+2}, ..., R_n. Thus, for the latter case, k = s + 1.
Necessity: Suppose the optimal rank order of the databases is [D_1, D_2, ..., D_n]. We now show that
R_1 > R_2 > ... > R_n. For m = 1, the most relevant document is in database D_1. Thus, R_1 > R_j
for every j > 1. Let m be increased to i_1 so that the first i_1 - 1 most relevant documents appear in database
D_1 and the i_1-th most relevant document appears in another database D. This i_1-th most relevant
document must be the most relevant document in D and, because [D_1, D_2, ..., D_n] are optimally
ranked, the database D must be D_2. Thus, R_2 > R_j for every j > 2. Let m be increased from i_1 to i_2
so that the i_1-th to (i_2 - 1)-th most relevant documents appear in database D_1 or database D_2
and the i_2-th most relevant document appears in another database D'. Again, by the optimal rank
ordering of [D_1, D_2, ..., D_n], database D' must be D_3 and hence R_3 > R_j for every j > 3. By repeatedly
increasing m in the manner described above, we obtain R_4 > R_5 > ... > R_n as well. By
combining all these derived inequalities, we obtain R_1 > R_2 > ... > R_n.
Example 1 Suppose there are 4 databases. For a query q, suppose the degrees of relevance of the
most relevant documents in databases D_1, D_2, D_3 and D_4 are 0.8, 0.5, 0.7 and 0.2, respectively.
Then the databases should be ranked in the order [D_1, D_3, D_2, D_4].
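In code, the optimal ordering of Proposition 1 is simply a descending sort on the degree of relevance of each database's most relevant document (our own sketch; in practice these values are estimated as described in Section 4):

def rank_databases(best_relevance):
    # best_relevance: database -> degree of relevance of its most relevant
    # document for the query (estimated in practice)
    return sorted(best_relevance, key=best_relevance.get, reverse=True)

# Example 1: rank_databases({"D1": 0.8, "D2": 0.5, "D3": 0.7, "D4": 0.2})
# returns ["D1", "D3", "D2", "D4"]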
This result applies to any similarity function as well as any function which assigns degrees of
importance to documents. It is also independent of data types. Thus, it can be applied to any type
of databases, including text databases, image databases, video databases and audio databases. The
necessary and sufficient condition to rank databases optimally is also independent of the number
of documents desired by the user.
In [32], we ranked databases in descending order of the similarity of the most similar document
in each database and its sufficiency of optimal ranking of databases was proved. The result turns
out to be generalizable to capture the degrees of relevance of documents and it is a necessary
condition as well, as stated above in Proposition 1.
3.2 Estimation of the Degree of Relevance of the Most Relevant Document in
Each Database
In the last subsection, we showed that the best way to rank databases is based on the degree
of relevance of the most relevant document in each database. Unfortunately, it is not practical
to retrieve the most relevant document from each database, obtain its degree of relevance and
then perform the ranking of the databases. However, it is possible to estimate the degree of
relevance of the most relevant document in each database, using an appropriate choice of a database
representative. This will be given in Section 4.
3.3 Coordinate the Retrieval of Documents from Different Search Engines
Suppose the databases have been ranked in the order [D_1, D_2, ..., D_n]. We briefly review an algorithm
which determines (1) the value of k such that the first k databases are searched, and (2)
which documents from these k databases should be used to form the list of m documents to be
returned to the user. Suppose the first s databases have been invoked from the metasearch engine.
Each of these search engines returns the degree of relevance of its most relevant document to the
metasearch engine which then computes the minimum of these s values. Each of the s search
engines returns documents to the metasearch engine whose degrees of relevance are greater than
or equal to the minimum. (If the number of documents in a single search engine whose degrees
of relevance are greater than or equal to the minimum value is greater than m, then that search
engine returns the m documents with the largest degrees of relevance. The remaining ones will
not be useful as the user wants only m documents with the largest degrees of relevance.) If an
accumulative total of m+add doc, where add doc is some constant which can be used as a tradeoff
between effectiveness and efficiency of retrieval, or more documents have been returned from multiple
search engines to the metasearch engine, then these documents are sorted in descending order
of degree of relevance and the first m documents are returned to the user. Otherwise, the next
database in the order will be invoked and the process is repeated until documents
are returned to the metasearch engine. It can be shown that if the databases have been ranked
optimally (with the databases containing the desired documents ahead of other databases) for a
given query, then this algorithm will retrieve all the m most relevant documents with respect to
the query. (The proof is essentially the same as that given in [32], except we replace the similarity
of a document by its degree of relevance.) When add doc ? 0, more documents will be received
by the metasearch engine (in comparison to the case add doc =0). As a result, it can select the m
best documents from a larger set of retrieved documents, resulting in better retrieval effectiveness.
However, efficiency will decrease.
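The following sketch (ours) captures the co-ordination loop just described; the per-database calls max_relevance and docs_at_least, and the relevance attribute on returned documents, are hypothetical interfaces, and the list of ranked databases is assumed to be non-empty:

def coordinate(query, ranked_dbs, m, add_doc=0):
    # ranked_dbs: databases in (estimated) optimal order; assumed non-empty
    invoked, per_db = [], {}
    for db in ranked_dbs:
        invoked.append(db)
        # minimum degree of relevance of the best documents of the invoked engines
        threshold = min(d.max_relevance(query) for d in invoked)
        for d in invoked:
            # each engine returns at most m documents with relevance >= threshold
            per_db[d] = d.docs_at_least(query, threshold, cap=m)
        docs = [doc for group in per_db.values() for doc in group]
        if len(docs) >= m + add_doc:
            break
    docs.sort(key=lambda doc: doc.relevance, reverse=True)
    return docs[:m]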
4 Estimate the Degree of Relevance of the Most Relevant Document
in Each Database
We first define the representative of a database so that the degree of relevance of the most relevant
document can be estimated. It consists of all terms which appear in any document in the database.
For each term, three quantities are kept. They are the maximum integrated normalized weight, the
average normalized weight and the normalized rank of the document where the maximum integrated
normalized weight is attained. They are defined as follows. The normalized weight of the ith term
of a document d = (d_1, d_2, ..., d_n) is d_i / |d|, where |d| is the length or norm of document
vector d. The average normalized weight of the i-th term, aw_i, is simply the average value of the
normalized weights over all documents in the database, including documents not containing the
term. The integrated normalized weight of the i-th term is w * (d_i / |d|) + (1 - w) * r if d_i > 0;
otherwise, the integrated normalized weight is defined to be 0. In the above expression, r is the
normalized rank of document d and w is the weight of similarity relative to normalized rank in
determining the degree of relevance of a document (see formula (1)). The maximum integrated
normalized weight of the term, miw i , is the maximum value of the integrated normalized weights
over all documents in the database. Suppose the maximum value is attained by a document with
normalized rank r_i. Then, for term t_i, the three quantities (miw_i, aw_i, r_i) are kept.
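A one-pass sketch of how the three quantities could be computed for a database (our own code, following the definitions above; the raw term weights and the normalized rank of each document are assumed to be given):

import math

def build_representative(docs, w):
    # docs: list of (term_weights, nrank) pairs; term_weights maps each term of
    # a document to its weight, nrank is the document's normalized rank.
    n_docs = len(docs)
    best = {}   # term -> (miw, rank of the document attaining it)
    sums = {}   # term -> running sum of normalized weights
    for weights, nrank in docs:
        norm = math.sqrt(sum(v * v for v in weights.values()))  # assumes non-empty documents
        for t, v in weights.items():
            nw = v / norm                    # normalized weight
            iw = w * nw + (1 - w) * nrank    # integrated normalized weight (nw > 0 here)
            if t not in best or iw > best[t][0]:
                best[t] = (iw, nrank)
            sums[t] = sums.get(t, 0.0) + nw
    # the average is taken over all documents, including those without the term
    return {t: (miw, sums[t] / n_docs, r) for t, (miw, r) in best.items()}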
Consider a query q = (q_1, q_2, ..., q_n), where term frequency information and inverse
document frequency information have been integrated into the query vector and the components
have been normalized.
Consider a document d = (u_1, u_2, ..., u_n), where u_i is the normalized weight of term t_i in d and r is
the normalized rank of d. Its degree of relevance with respect to query q is
w * sim(q, d) + (1 - w) * r = w * (q_1*u_1 + q_2*u_2 + ... + q_n*u_n) + (1 - w) * r.
This document d may have the maximum integrated normalized weight for the term t_1 and
the average normalized weights for the other terms. In that case, the above expression becomes
q_1 * miw_1 + (1 - q_1) * (1 - w) * r + w * (q_2*aw_2 + ... + q_n*aw_n).
Finally, the rank of d is the normalized
rank of the document where the maximum integrated normalized weight (miw 1 ) for the term t 1 is
attained. As described earlier, this rank is stored in the database representative. Let it be denoted
by r 1 as it is associated with term t 1 .
In general, we estimate the degree of relevance of the most relevant document in the database by
assuming that the document contains one of the query terms with the maximum integrated normalized
weight. Thus, its degree of relevance may be estimated by the following expression
max_i { q_i * miw_i + (1 - q_i) * (1 - w) * r_i + w * sum_{j != i} q_j * aw_j },    (2)
where the maximum is over all query terms. It is easy to see that the estimation takes time linearly
proportional to the number of query terms.
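A sketch of this estimator (our own code; note that expression (2) above is reconstructed from the surrounding definitions, so this is illustrative rather than the authors' implementation):

def estimate_best_relevance(query, rep, w):
    # query: term -> normalized query weight; rep: term -> (miw, aw, r)
    best = 0.0
    for i, qi in query.items():
        if qi <= 0 or i not in rep:
            continue
        miw_i, _, r_i = rep[i]
        rest = sum(qj * rep[j][1] for j, qj in query.items() if j != i and j in rep)
        best = max(best, qi * miw_i + (1 - qi) * (1 - w) * r_i + w * rest)
    return best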
One important property of this method is that it guarantees optimal retrieval for single term
queries which are submitted frequently in the Internet. The reason is that this method estimates
the degree of relevance of the most relevant document for any given single-term query exactly. As
a result, the necessary and sufficient condition for ranking the databases optimally is satisfied. For
optimally ranked databases, the algorithm to co-ordinate the retrieval of documents from multiple
databases guarantees optimal retrieval.
Lemma 1 For any single term query, the estimate given by the above estimation method for the
degree of relevance of the most relevant document in a database is exact.
Proof: For a single term query, say q = (1, 0, ..., 0) on term t_1, the estimate given by the above method =
miw_1. This can be obtained by setting q_1 = 1 and q_j = 0 for j != 1 in equation (2). A
document d having that term has degree of relevance w * u_1 + (1 - w) * r, where u_1 is the normalized weight of t_1 in d and
r is the rank of the document d. By the definition of the maximum integrated normalized weight of
t_1, this is at most miw_1. Thus, since miw_1 is actually achieved by a document in the database, it is the
degree of relevance of the most relevant document in the database.
Proposition 2 For any single term query, optimal retrieval of documents for the query using this
method and the co-ordination algorithm is guaranteed.
Proof: By Lemma 1, the degree of relevance of the most relevant document in each database
is estimated exactly. Using these estimates for each database guarantees optimal ranking of
databases, since the necessary and sufficient condition for optimal ranking is satisfied. Finally,
the co-ordination algorithm guarantees optimal retrieval of documents, if databases are optimally
ranked.
5 Experimental Results
In this section, we report some experimental results. Two sets of data and queries with different
characteristics are utilized. The first set of data consists of 15 databases. These databases are
formed from articles posted to 52 different newsgroups in the Internet. These articles were collected
at Stanford University [8]. Each newsgroup that contains more than 500 articles forms a separate
database. Smaller newsgroups are merged to produce larger databases. Table 1 shows the number
of documents in each database. There are altogether 6,597 queries submitted by real users. Both
database    1    2    3    4    5    6    7    8    9   10   11   12   13   14   15
#docs     761 1014  705  682  661  622  526  555  629  588  558  526  607  648  564
Table 1: Databases Used in Experiments
the data and the queries were used in the gGlOSS project [8]. From these 6,597 queries, we obtain
two subsets of queries. The first subset consists of the first 1,000 queries, each having no more than
6 words. They will be referred to later as short queries. The second subset consists of all queries having
7 or more words. These are referred to as long queries.
The second set of data consists of a subset of TREC data which were collected by NIST. We
use the first 30 databases (20 from FBIS - Foreign Broadcast Information Service and 10 from
Congressional Record) and all of the 400 queries used in the first 8 TREC conferences.
There are approximately 57,000 documents in these databases. A typical TREC query consists
of three parts (i.e., topic, description and narrative). Since real user queries are typically short,
we only used the topic part of each query in our experiments. They are on the average longer
than the Stanford queries. Specifically, the average numbers of words in a short query (no more
than 6 words each) and in a long query (more than 6 words each) in the TREC query set are 3.29
and 12.62, respectively; the corresponding numbers for the Stanford query set are 2.3 and 10.3,
respectively. Studies show that a typical Internet query has 2.2 terms. Thus, the Stanford short
queries resemble typical Internet queries better.
One problem we have is that the documents in these two sets do not have linkages among them-
selves. In order to simulate the effect of linkages among documents on their degrees of importance,
we assign normalized ranks to documents based on some distribution. We note that if normalized
ranks were randomly assigned to documents, then since each database has quite a few documents,
then the maximum normalized rank and the average normalized rank of each database would be
close to 1 and 0.5, respectively. This would not reflect reality. Instead, for each database, we
randomly generate a number, say x. Then the normalized ranks of the documents in that database
will be randomly assigned in the range between 0 and x. In this way, some databases have much
higher maximum normalized rank than others.
The performance measures of an algorithm to search for the m most relevant documents in a set
of databases are given as follows. The first two measures provide effectiveness (quality) of retrieval
while the last two measures provide efficiency of retrieval.
1. The percentage of correctly identified documents, that is, the ratio of the number of documents
retrieved among the m most relevant documents over m. This percentage is denoted by
cor iden doc.
2. The percentage of the sum of the degrees of relevance of the m most relevant documents
retrieved, that is, the ratio of the sum of the degrees of relevance of the m retrieved documents
over the sum of the degrees of relevance of the m most relevant documents. This percentage is
denoted by per rel doc. This represents the percentage of the expected number of relevant
documents retrieved.
Suppose a user is interested in retrieving the most relevant document which has degree of
relevance 0.95. If the retrieval system retrieves the second most relevant document with degree
of relevance 0.9 but not the most relevant document, then according to the measure cor iden doc,
the percentage is 0, as none of the m (= 1 in this example) most relevant documents is
retrieved. Using the measure per rel doc, the percentage is 0.9/0.95. Thus, this measure is
more lenient than the measure cor iden doc. In information retrieval, the standard recall
and precision measures are related with the number of retrieved relevant documents. They
are not concerned with specific documents. In other words, if k relevant documents out of m
retrieved documents are replaced by k other relevant documents, then both recall and precision
remain unchanged. Thus, the measure per rel doc is more indicative of the standard recall
and precision measure than the measure cor iden doc.
3. The database search effort is the ratio of the number of databases searched by the algorithm
over the number of databases which contain one or more of the m most relevant documents.
This ratio is denoted by db effort. The ratio is usually more than 1.
4. The document search effort is the ratio of the number of documents received by the metasearch
engine (see Section 3.3) over m. This is a measure of the transmission cost. This ratio is
denoted by doc effort.
The experimental results for short and long queries for the Stanford data when the parameter
w in formula (1) is 0.8 are presented in Tables 2 and 3, respectively. Those for the TREC data
are given in Tables 4 and 5, respectively. The reasons for choosing w > 0.5 are as follows: (i)
the similarity between a document and a query should play a more important role in determining
its degree of relevance between the document and the query; (ii) the way the normalized ranks of
documents are assigned makes it possible for the normalized rank of a document to be close to 1,
while in most cases, the similarity between a similar document and a query is usually no higher
than 0.3 (since a document has many more terms than the number of terms in common between
the document and a query).
m    cor iden doc    db effort    doc effort    per rel doc
5    96.1%           122.0%       135.7%        99.7%
Table 2: Short queries with Stanford data
m    cor iden doc    db effort    doc effort    per rel doc
5    94.7%           132.5%       150.8%        99.6%
Table 3: Long queries with Stanford data
m    cor iden doc    db effort    doc effort    per rel doc
Table 4: Short queries with TREC data
A summary of the results from Tables 2 to 5 is given as follows.
1. The method gives very good retrieval effectiveness for short queries. For the Stanford data,
the percentages of the m most relevant documents retrieved range from 96% to 98%; the
corresponding figures for the TREC data are from 88% to 96%. Recall that the short queries
m    cor iden doc    db effort    doc effort    per rel doc
Table 5: Long queries with TREC data
and in particular the Stanford queries resemble the Internet queries. The percentage of the
number of relevant documents retrieved is more impressive; it ranges from 99.7% to 99.9%
for the Stanford data; the range is from 98.8% to 99.7% for the TREC data. Thus, there is
essentially little or no loss of the number of useful documents using the retrieval algorithm in
the distributed environment versus the environment in which all documents are placed in one
site. As the number of documents to be retrieved increases, it is usually the case that both
the percentage of the most relevant documents retrieved and the percentage of the number
of relevant documents retrieved increase. The reason is that as the number of databases
accessed increases, the chance of missing desired databases will be reduced.
2. For long queries, the retrieval performance of the method is still very good, although there
is a degradation in performance. The degradation in the percentage of the most relevant
documents varies from less than 1% to 2.2% for the Stanford data; it is less than 5% for the
TREC data. As the number of terms in a query increases, the estimation accuracy decreases,
causing the degradation. The degradation is higher for the TREC data, because TREC
queries are longer. When the percentage of the number of relevant documents retrieved is
considered, there is essentially no change for the Stanford data; for the TREC data, the
percentage varies from 98.4% to 99.5%, essentially giving the same performance as if all data
were in one location.
3. The number of databases accessed by the methods is on the average at most 32.5% more than
the ideal situation in which only databases containing the desired documents are accessed. In
the tables, there are a few cases where the number of databases accessed is less than the ideal
situation. The reason is that when an undesired database is accessed, the degree of relevance,
say d, of the most relevant document in that database is obtained. If d is significantly less than
the degree of relevance of the most relevant document in a desired database D, it will cause
substantial number of documents, including some undesired documents, to be retrieved from
D. When the total number of retrieved documents is m or higher, the algorithm terminates
without accessing some desired databases.
4. The number of documents transmitted from the databases to the metasearch engine by the
method can on the average be up to 156.8% of the number of documents desired by
the user. The reason for this behavior is, based on thoroughly analyzing the results for
several queries, as follows. Suppose for a query q, the databases containing the desired
documents are {D_1, D_2, D_3, D_4} and m documents are to be retrieved. Suppose the ranking of
the databases is [D_1, D_2, D_3, D_4, ...]. This gives optimal retrieval, guaranteeing that all the m
most relevant documents will be retrieved. However, the number of documents retrieved can
be very substantial, because after accessing the first 3 databases, finding the degrees of the
most relevant documents in these databases, taking the minimum of these degrees and then
retrieving all documents from these databases with degrees of relevance larger than or equal
to the minimum, it is possible that fewer than m documents are retrieved. As a consequence,
after accessing database D 4 , obtaining the degree of relevance of the most similar document
in D 4 and using it to retrieve more documents from the first three databases, then a lot
more documents are retrieved from these databases. However, it should be noted that in
practice, the actual documents are not transmitted from the databases to the metasearch
engine. Instead, only the titles of the documents and their URLs are transmitted. Thus, the
transmission costs would not be high.
There is a tradeoff between efficiency and effectiveness of retrieval which can be achieved using
the co-ordination algorithm given in Section 3.3. The results in Tables 2 to 5 were produced when
the metasearch engine received at least m documents. If the number of documents received by the
metasearch engine is at least m + add doc with the m most relevant documents being returned to
the user, then higher effectiveness will be achieved at the expense of more retrieved documents and
more accessed databases. Tables 6 to 9 show the results when add doc is 5, i.e., 5 more documents
are to be received at the metasearch engine. It is observed from these tables that as far as measure
per rel doc is concerned, close to optimal retrieval (at least 99.1% of the number of relevant
documents) is achieved. If the measure to retrieve the m most relevant documents is used, close to
optimal retrieval results are obtained for both the short and the long queries for the Stanford data,
but there is some room for improvement for the long queries for the TREC data. The number of
databases accessed is on the average at most 93.4% beyond what is required and the number of
documents transmitted is on the average at most 157.2% beyond what is required. These excess
high values are obtained when the method is used on long queries and m = 5. With m = 5 and add doc =
5, at least 10 documents are to be retrieved by the metasearch engine and thus doc effort is at
least 200%. As m increases and add doc stays constant, the percentage of additional documents
received at the metasearch engine decreases and thus doc effort decreases. Although doc effort
is still reasonably high, the transmission cost should not be excessive, as usually only
document titles and URLs are transmitted. The number of databases accessed is on the average
only 19.5% higher than the ideal situation.
m    cor iden doc    db effort    doc effort    per rel doc
5    99.0%           171.2%       220.6%        99.9%
Table 6: Short queries with additional documents retrieved
m    cor iden doc    db effort    doc effort    per rel doc
5    98.4%           193.4%       257.2%        99.9%
Table 7: Long queries with additional documents retrieved
m    cor iden doc    db effort    doc effort    per rel doc
Table 8: Method ITR for short queries with additional documents retrieved
6 Conclusion
We have shown that linkage information between documents can be utilized in their retrieval from
distributed databases in a metasearch engine. This is the first time the information is employed in a
metasearch engine. Our experimental results show that the techniques we provide can yield retrieval
effectiveness close to the situation as if all documents were located in one database. The strengths
of our techniques are their simplicity (the necessary and sufficient conditions for optimal ranking
of databases, the co-ordination algorithm which guarantees optimal retrieval if the databases are
optimally ranked) and flexibility (the estimation algorithms to rank databases, while taking into
m    cor iden doc    db effort    doc effort    per rel doc
5    90.6%           157.0%       264.3%        99.1%
Table 9: Long queries with additional documents retrieved
consideration the linkage information between documents). The techniques given here are readily
generalizable to the situation where there are numerous databases. In that case, it may be desirable
to place database representatives in a hierarchy and search the hierarchy so that most database
representatives need not be searched and yet the same retrieval effectiveness is achieved as if all
database representatives were searched [35].
It should be noted that the linkage information in the two collections is simulated. On the
other hand, we are not aware of an existing data collection which has linkage information, has
information about which documents are relevant to which queries, and whose queries resemble Internet
queries. When such a collection is made available to us, we will perform experiments on it.
Acknowledgement: We are grateful to L. Gravano and H. Garcia-Molina for providing us with
one of the two collections of documents and queries used in our experiments. We would also like to thank W.
Wu for writing some programs used in the experiments.
--R
Characterizing World Wide Web Queries.
A Probabilistic Model for Distributed Information Retrieval.
A Probabilistic solution to the selection and fusion problem in distributed Information Retrieval
Combining the Evidence of Multiple Query Representations for Information Retrieval.
Searching Distributed Collections with Inference Networks.
Adaptive Agents for Information Gathering from Multiple
Intelligent Fusion from Multiple
Generalizing GlOSS to Vector-Space databases and Broker Hierar- chies
Merging Ranks from Heterogeneous Internet sources.
Real Life Information Retrieval: A Study of User Queries on the Web.
An Information System for Corporate Users: Wide Area information Servers.
The Information Manifold.
Authoritative sources in Hyperlinked Environment.
A Statistical Method for Estimating the Usefulness of Text Databases.
The Search Broker.
Determining Text Databases to Search in the Internet.
Estimating the Usefulness of Search Engines.
Detection of Heterogeneities in a Multiple Text Database Environ- ment
Challenges and Solutions for Building an Efficient and Effective Metasearch Engine.
The PageRank Citation Ranking: Bringing Order to the Web.
Introduction to Modern Information Retrieval.
Automatic Text Processing: The Transformation
The MetaCrawler Architecture for Resource Aggregation on the Web.
Learning Collection Fusion Strategies for Information Retrieval.
Learning Collection Fusion Strategies.
Effective Retrieval with Distributed Collections.
Finding the Most Similar Documents across Multiple Text Databases.
A Methodology to Retrieve Text Documents from Multiple Databases.
Principles of Database Query Processing for Advanced Applications.
Efficient and Effective Metasearch for a Large Number of Text Databases.
Server Ranking for Distributed Text Resource Systems on the Internet.
--TR
ALIWEB - Archie-like indexing in the WEB
Combining the evidence of multiple query representations for information retrieval
Searching distributed collections with inference networks
Learning collection fusion strategies
Pivoted document length normalization
A probabilistic model for distributed information retrieval
Principles of database query processing for advanced applications
Real life information retrieval: a study of user queries on the Web
Effective retrieval with distributed collections
Infoseek''s experiences searching the internet
Phrase recognition and expansion for short, precision-biased queries based on a query log
A probabilistic solution to the selection and fusion problem in distributed information retrieval
Cluster-based language models for distributed retrieval
Authoritative sources in a hyperlinked environment
Efficient and effective metasearch for a large number of text databases
The impact of database selection on distributed searching
Towards a highly-scalable and effective metasearch engine
Introduction to Modern Information Retrieval
A Methodology to Retrieve Text Documents from Multiple Databases
A Statistical Method for Estimating the Usefulness of Text Databases
Merging Ranks from Heterogeneous Internet Sources
Determining Text Databases to Search in the Internet
Generalizing GlOSS to Vector-Space Databases and Broker Hierarchies
Server Ranking for Distributed Text Retrieval Systems on the Internet
Finding the Most Similar Documents across Multiple Text Databases
--CTR
King-Lup Liu , Adrain Santoso , Clement Yu , Weiyi Meng, Discovering the representative of a search engine, Proceedings of the tenth international conference on Information and knowledge management, October 05-10, 2001, Atlanta, Georgia, USA
King-Lup Liu , Clement Yu , Weiyi Meng, Discovering the representative of a search engine, Proceedings of the eleventh international conference on Information and knowledge management, November 04-09, 2002, McLean, Virginia, USA
Clement Yu , George Philip , Weiyi Meng, Distributed top-N query processing with possibly uncooperative local systems, Proceedings of the 29th international conference on Very large data bases, p.117-128, September 09-12, 2003, Berlin, Germany
Fang Liu , Clement Yu , Weiyi Meng, Personalized web search by mapping user queries to categories, Proceedings of the eleventh international conference on Information and knowledge management, November 04-09, 2002, McLean, Virginia, USA
Zonghuan Wu , Weiyi Meng , Clement Yu , Zhuogang Li, Towards a highly-scalable and effective metasearch engine, Proceedings of the 10th international conference on World Wide Web, p.386-395, May 01-05, 2001, Hong Kong, Hong Kong
Fang Liu , Clement Yu , Weiyi Meng, Personalized Web Search For Improving Retrieval Effectiveness, IEEE Transactions on Knowledge and Data Engineering, v.16 n.1, p.28-40, January 2004
Weiyi Meng , Zonghuan Wu , Clement Yu , Zhuogang Li, A highly scalable and effective method for metasearch, ACM Transactions on Information Systems (TOIS), v.19 n.3, p.310-335, July 2001
Weiyi Meng , Clement Yu , King-Lup Liu, Building efficient and effective metasearch engines, ACM Computing Surveys (CSUR), v.34 n.1, p.48-89, March 2002
J. Bhogal , A. Macfarlane , P. Smith, A review of ontology based query expansion, Information Processing and Management: an International Journal, v.43 n.4, p.866-886, July, 2007 | metasearch;linkages among documents;information retrieval;distributed collection |
375760 | Causality representation and cancellation mechanism in time warp simulations. | The Time Warp synchronization protocol allows causality errors and then recovers from them with the assistance of a cancellation mechanism. Cancellation can cause the rollback of several other simulation objects that may trigger a cascading rollback situation where the rollback cycles back to the original simulation object. These cycles of rollback can cause the simulation to enter an unstable (or thrashing) state where little real forward simulation progress is achieved. To address this problem, knowledge of causal relations between events can be used during cancellation to avoid cascading rollbacks and to initiate early recovery operations from causality errors. In this paper, we describe a logical time representation for Time Warp simulations that is used to disseminate causality information. The new timestamp representation, called Total Clocks, has two components: (i) a virtual time component, and (ii) a vector of event counters similar to Vector clocks. The virtual time component provides a one-dimensional global simulation time, and the vector of event counters records event processing rates by the simulation objects. This time representation allows us to disseminate causality information during event execution that can be used to allow early recovery during cancellation. We propose a cancellation mechanism using Total Clocks that avoids cascading rollbacks in Time Warp simulations that have FIFO communication channels. | Introduction
Rollback is an inherent operation in the Time Warp
mechanism. Rollback restores the state of an LP to a
causally consistent state from which normal event processing
can continue. During cancellation, rollbacks occurring
in one LP can propagate to other LPs to cancel out causally
incorrect event computations. Conventionally, rollbacks are
informed through anti-messages with the timestamps specifying
the rollback time of the LPs. Rollbacks can occur
frequently and may be cascaded and inter-related. In contemporary
Time Warp simulators, time representations generally
maintain only the local simulation time and do not
usually carry information about causal relations between
rollbacks and the associated events. However, logical time
representations can be designed to carry causal information
that can be exploited during rollback to accelerate the cancellation
process.
Logical time can be used to order events in distributed
processes [11]. Ordering events among arbitrary processes
is dependent on the size (number of bits) of the logical
time representation. Several representations such as scalar
clocks, vector clocks, and matrix clocks have been used to
represent logical time in distributed systems [16].
In this paper we present Total Clocks for the maintenance
of time in Time Warp synchronized parallel simulations.
Total Clocks can be used to determine causal relationships
between events among any arbitrary processes. Two events
can be concurrent or causally dependent and precise knowledge
of their relation enables optimizations in various Time
Warp algorithms. In particular, causal information can be
useful while canceling events during rollback. In this paper,
we present and prove properties of Total Clocks. We present
cancellation mechanisms to avoid cascading rollbacks using
the Total Clocks representation.
The remainder of this paper is organized as follows. Section
2 presents the background work in logical time representations
and cancellation mechanisms in Time Warp. Section
3 presents scenarios in Time Warp simulations where
causal information is useful for making intelligent decisions.
Section 4 presents the Total Clocks representation for Time
Warp simulations that captures causality information between
events. Section 5 presents a new cancellation mechanism
exploiting the causal information disseminated by Total
Clocks. Section 6 discusses implementation considerations
and concluding remarks.
Figure
1. Cascading Event Dependency
Background and Related Work
Logical Time representation is of critical importance in
distributed systems. Several logical time representations
such as Scalar Clocks [11], Vector Clocks [5, 13], and Matrix
Clocks [16] have been proposed to model time in distributed
systems. In the scalar clock representation, a scalar
quantity (integer) is used to represent the logical time. Each
process increments the value of its local clock before executing
an event and then piggybacks this value at the time
of sending a message. The vector clock representation has
an n-element array of non-negative integers. Each process
maintains this n-element array and an element at index
i represents the logical time progress at P i and other
indices specify the latest known time value of other processes
[16,5,1,19,13]. While vector clocks provide a mechanism
to represent causality information, the vector clock
representation is not readily usable in optimistic protocols
due to the forward and backward motion of time. In matrix
clock representation, a process maintains an n × n matrix of
non-negative integers [18,21]. A process maintains this matrix
as its clock value. This representation has all the properties
of vector clocks. In addition, a process P i knows the
time value of process P k that is known to every
other process P j . This allows processes to discard obsolete
information received from processes [16]. Matrix clocks
can also be used to assist in the calculation of GVT [4].
In a Time Warp synchronized discrete event simulation,
Virtual Time is used to model the passage of the time in
the simulation [7]. The simulation is executed via several
processes called Logical Processes (LPs). Each LP has an
associated event queue and maintains a Local Virtual Time
(LVT) clock. A causality error arises if an LP receives a
message with a time-stamp earlier than its LVT value (a
straggler message). Canceling events in the input queue
of other LPs is performed by cancellation strategies such
as Aggressive, Lazy, or Dynamic Cancellation [8, 15, 17].
However, these cancellation strategies do nothing to prevent
cascading rollbacks. Several strategies have been suggested
to stop the incorrect computations [3, 12, 20]. Deelman
et al. propose a Breadth-First rollback mechanism to
stop the propagation of erroneous computations in adjacent
simulation objects on the same processor [3]. However, this
does not handle the propagation of erroneous computation
across processors where conventional cancellation mechanisms
are employed. Madisetti [12] proposes the use of
"Wolf calls", where cancellation information is sent as a
high-priority broadcast or multicast to halt the erroneous
computations. However, these are reactive strategies and
do not avoid cascading or inter-related rollbacks.
Figure 2. Inter-related Event Dependencies
3 Motivation
In Time Warp simulations, rollbacks recover LPs from
causality errors. Rollbacks can be cascaded and inter-
related. In a simulation, the amount of useful computation
decreases as the number of rolled-back events increases,
and hence frequent rollbacks reduce efficiency and degrade
performance. Optimizations in Time Warp concentrate on
reducing the number of rollbacks or reducing the effect of
rollbacks to improve performance [8, 17, 15].
In Time Warp, the anti-messages sent to other LPs could
result in new anti-messages to the LP that sent the initial
anti-messages. Such cascaded rollbacks degrade efficiency
and run-time performance of the simulation. In addition to
cascading rollbacks, a positive straggler event in a causally
preceding LP can rollback another LP several times. These
rollbacks are due to an LP sending events through several
parallel paths of computation to another LP. Such cascading
and inter-related rollbacks occur because the events that
are causally dependent on the rolled-back events are not
identified at the time of a causality error. Causal information
that identifies these events can save a large amount of
computation and communication time, since events that will
eventually be rolled back can simply be ignored.
Figure 1 shows a typical scenario for the occurrence
of cascading rollbacks. Event e1 causes the generation of
event eN . Rolling back event e1 could trigger rolling back
events e2, e3, and so on, and this will result in rolling back
event eN . Figure 2 shows event e1 causing several events
in parallel paths to LP Pn. Rolling back event e1 could
result in several rollbacks at LP Pn. If the LPs
know the causal dependency between events, such cascading
and inter-related rollbacks can be avoided. The scenarios
explained above are frequent and motivate a logical time
representation that captures causality information and
exploits this information during rollbacks.
Figure 3. Total Clock Representation
4 Total Clocks
We have seen in the previous sections that the Time Warp
mechanism, which implements the Virtual Time paradigm, allows the
forward and backward motion of simulation time. In con-
trast, a vector clock representation allows tracking causal
relationship between events in a distributed system. In Time
Warp simulations, a logical time representation that implements
virtual time paradigm and tracks the causal relation
between events will be useful for the LPs to make intelligent
decisions while detecting causality errors. In this paper we
present Total Clocks, a logical time representation, for Time
Warp simulations which is an attempt towards this goal. Total
Clocks have two components namely, virtual time component
and a vector counter component. Virtual time component
is the global one dimensional temporal coordinate
system that ticks virtual time [10]. The virtual time component
is a scalar value that denotes the progress in simulation
time of the LP. The second component of Total Clocks is
a vector of event counters. The number of elements in the
vector is equal to the total number of LPs in the simula-
tion. Each LP maintains a counter called event counter that
is incremented based on specific update rules during sim-
ulation. In addition, each LP maintains a local copy of the
event counter values of other LPs. This set of event counters
is called the Vector Counter(VC). While sending an event,
an LP sends the virtual time and the vector counter as the
timestamp. This two component representation of timestamp
consisting of virtual time and vector counter is called
Total Clocks due to the fact that this representation can provide
a total ordering of events with ordering rules imposed
on the virtual time component and the vector counter
component. Figure 3 shows the Total Clock and its components.
Figure 4. Total Clock value update while processing an event e in a simulation object P i
The virtual time component of the Total Clock is referred to as VT
and the vector counter as VC in the logical time representa-
tion. V C[i] refers to the ith element in the vector counter
containing the event counter value of LP i. TC(P i ) refers
to the Total Clock value of LP P i and TC(e) refers to the
Total Clock value of an event e. TC(P i ):V T refers to the
virtual time component and TC(P i ):V C refers to the vector
counter component of Total Clock.
Each LP maintains a Total Clock. An LP processes
events in the order of the virtual time component of the
event's Total Clock. While sending an event, the LP assigns
the time at which the event has to be processed as
the VT component of the timestamp. The vector counter
of the LP is assigned as the VC component of the times-
tamp. Thus the elements of the vector in the timestamp denote
the event counter values last known to the sending LP.
An LP i learns about the latest value of the event counter
of another LP j through an event from j or from an object
that has learnt about LP j. Therefore, the vector counter
of an LP specifies the latest event counter values of other
LPs. However, there could exist LPs and events that are
causally independent of other LPs and events respectively.
The causal relation and causal independence of the states
of LPs and its events can be determined easily using Total
Clocks. The operators (<, ≤, >, ≥, ||, sup, →, succ) on
the vector counter component of Total Clocks are the same as the
definitions for vector clocks [13, 5].
The LPs executing an event must follow specific rules to
update their virtual time component and the event counter
values. While processing an event, virtual time component
is updated to the virtual time component specified by the
event e. The vector counter of clock value is updated using
the sup operator that performs element wise maximum
operation [5, 13]. Figure 4 shows the steps updating the
total clock value maintained by an LP during event process-
ing. The primitive execute event consists of updating the
state, sending events to other LPs and saving the state. The
execute event process may perform different functions depending
on the optimizations and algorithms enabled in the
kernel [14].
Figure 5. Total Clock value propagation while sending an event e
Figure 6. LP receiving an event e from the communication layer
The value of the clock after the sup operation
and event counter increment is denoted by TC((P i ) after e ).
The value of LP's clock value immediately before processing
event e is denoted by TC((P i ) before e ). Figure 5 shows
the operations performed while sending an event. The virtual
time component of the event is set to the simulation
time at which the event has to be processed and the vector
counter value is set to that of the LP. The primitive
send message sends the event through the physical communication
layer.
Figure 6 shows the steps performed while receiving an
event from the communication layer. The LP checks for
rollback and rolls-back to the time before the simulation
time of the event. In addition, the LP cancels the events that
are to be undone due to rollback. The cancellation mechanisms
such as aggressive cancellation and lazy cancellation
are usually performed to cancel out the messages [17]. In
the following section, we will present a new cancellation
mechanism that takes advantage of the information present
in the Total Clocks to cancel out events upon receiving a
straggler event.
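To make the update and propagation rules concrete, the sketch below implements a minimal Total Clock in Python. The class and method names (TotalClock, process_event, make_timestamp) are illustrative choices rather than names from the paper, and the event timestamp is simplified to a (vt, vc) pair; cancellation details are assumed to be handled elsewhere.

```python
class TotalClock:
    """Total Clock of LP i: a virtual time plus a vector of event counters."""

    def __init__(self, lp_id, num_lps):
        self.lp_id = lp_id
        self.vt = 0.0                      # virtual time component (VT)
        self.vc = [0] * num_lps            # vector counter component (VC)

    def process_event(self, event_vt, event_vc):
        # Adopt the simulation time at which the event has to be processed.
        self.vt = event_vt
        # sup operator: element-wise maximum with the event's vector counter.
        self.vc = [max(a, b) for a, b in zip(self.vc, event_vc)]
        # Increment this LP's own event counter; the resulting clock value is
        # TC((P_i) after e) in the paper's notation.
        self.vc[self.lp_id] += 1

    def make_timestamp(self, receive_vt):
        # While sending, the VT component is the time at which the event has
        # to be processed and the VC component is a copy of the LP's VC.
        return (receive_vt, list(self.vc))
```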
Having seen the clock update and propagation rules, certain
properties could be inferred from Total Clocks. The
value of the event counter of an LP increases monotonically
irrespective of the progress of the simulation time. This
property disambiguates between two simulation times (VT
component) of same value before and after a rollback. This
feature of Total Clocks can be used in optimizations and algorithms
that clearly needs to disambiguate such scenarios.
The events that are considered in the following theorems
are the events that have been processed by an LP and not
rolled-back by any cancellation message. The relations and
theorems may not hold good between events that consists of
one being processed and the other being rolled-back. In ad-
dition, the following theorems may not be applicable when
an LP is canceling the events that it has sent before or at the
time of state restoration due to a straggler event.
The following theorems are stated without proof and can
be easily verified from procedure PROCESS (Figure 4).
Theorem 4.1 Vector Counter component of Total Clock of
an LP increases monotonically with event processing.
Theorem 4.2 If → denotes the causally-precedes relation,
then e 1 → e 2 implies TC(e 1 ).VC < TC(e 2 ).VC.
Theorem 4.3 If e i1 and e i2 are events scheduled at LP P i
and e i1 → e i2 , then TC(e i1 ).VC < TC(e i2 ).VC.
Theorem 4.4 If e i1 is an event scheduled at LP P i , then an
arbitrary LP j can determine the states and set of events in
its input queue that are causally related to event e i1 .
Corollary 4.1 If e i1 and e i2 are two events scheduled in
LP P i with e i1 → e i2 , then an arbitrary LP P j can
determine the states and set of events in its input queue
that are causally related to the events that are in the set
{e | e i1 → e → e i2 }.
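A small sketch of the vector counter comparisons that these theorems rely on is given below, following the TotalClock representation sketched earlier; the function names are illustrative, not taken from the paper.

```python
def vc_leq(vc_a, vc_b):
    """vc_a <= vc_b: every counter in vc_a is no larger than in vc_b."""
    return all(a <= b for a, b in zip(vc_a, vc_b))

def causally_precedes(vc_a, vc_b):
    """vc_a < vc_b: dominated element-wise and different somewhere."""
    return vc_leq(vc_a, vc_b) and vc_a != vc_b

def concurrent(vc_a, vc_b):
    """Neither timestamp causally precedes the other (the || relation)."""
    return not causally_precedes(vc_a, vc_b) and not causally_precedes(vc_b, vc_a)
```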
5 Total Clocks and Event Cancellation
In the previous section we saw that the Total Clocks representation
can capture causal dependencies between events
and states. This property can be exploited in cancellation
strategies. Total Clocks disseminate causality information
and new cancellation mechanism can be designed to exploit
this information. Cancellation messages can be designed,
instead of anti-messages, to specify the events to be
rolled back in an LP. In addition, if the events that are
causally related to the events to be rolled-back are identi-
fied, then those events can be rolled-back along with the
events specified by the cancellation message. This is due
to the fact that, assuming aggressive cancellation strategy,
the events causally dependent on the rolled-back event will
eventually be rolled-back. Doing pro-active cancellation
can completely prevent cascading rollbacks. In addition,
early recovery operations such as restoring state and ignoring
events that will be rolled-back can be performed for
rollbacks that are inter-related. Thus, knowing the causal
relation between rolled-back events and other events and
performing cancellation can save a huge amount of computational
and communication time by not spending resources
on events that will be eventually undone.
In conventional Time Warp, anti-messages are used to
initiate singleton cancel information. Anti-messages are
similar to positive events, and the distributed control is
through messages and anti-messages.
ROLLBACK(e) BEGIN
restore the state of the LP to the state before e.TC.VT;
restore the input queue by moving events whose timestamps
are greater than or equal to e.TC.VT from the processed
queue back to the unprocessed queue;
END
Figure 7. LP rolling back due to an event e from the communication layer
mechanism presented here deviates from this paradigm. In
particular, the cancellation mechanism introduces a new set
of messages in the simulation called CANCEL MESSAGE
to inform LPs of causality errors in the preceding LPs. The
virtual time update rules at the time of processing events
are similar to that of conventional Time Warp. However,
the virtual time update rules with cancellation messages are
different and are performed only when the LP has not performed
the recovery operations due to the causality error informed
by the cancellation message. CANCEL MESSAGE
consists of a VT component and a VC component similar
to that of an event. In addition, CANCEL MESSAGE
has a field called SIGNATURE. At the time of creating a
new cancellation message, the VC component of the LP
along with its id is used as the signature. In addition,
CANCEL MESSAGE contains the minimum and maximum
event counter values called the event counter range.
This range along with the LP id in the signature specifies
the events to be removed due to rollbacks.
The new cancellation mechanism uses the event counter
ranges to keep track of the events to be rolled-back. Events
causally dependent on rolled-back events generated from an
LP lie within the same event counter range (Corollary 4.1).
The cancellation mechanism propagates the event counter
range in the cancellation messages. A virtual time component
is specified along with the event counter range in the
cancellation message to specify the rollback time of the LP.
A cancellation message could roll back an LP and hence can
generate new cancellation messages; the same signature
field is used for this set of cancellation messages. This
helps to identify related cancellation messages and hence
inter-related rollbacks and cascading rollbacks.
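The sketch below gives one possible Python encoding of the message format and range list just described; the field and class names (CancelMessage, CancelRangeList) are illustrative assumptions, and the add rule here simply appends, leaving the signature and concurrency checks described next to the full mechanism.

```python
from dataclasses import dataclass
from typing import Dict, List, Tuple

@dataclass
class CancelMessage:
    vt: float                                # rollback time for the receiving LP
    vc: List[int]                            # VC component, as in an ordinary event
    signature: Tuple[int, Tuple[int, ...]]   # (originating LP id, its VC when created)
    counter_min: int                         # event counter range: smallest value ...
    counter_max: int                         # ... and largest value to be removed

class CancelRangeList:
    """Event counter ranges to ignore, indexed by the originating LP id."""

    def __init__(self):
        self.ranges: Dict[int, List[Tuple[int, int]]] = {}

    def add(self, cm: CancelMessage) -> None:
        lp_id = cm.signature[0]
        self.ranges.setdefault(lp_id, []).append((cm.counter_min, cm.counter_max))

    def covers(self, lp_id: int, counter_value: int) -> bool:
        # An event whose VC entry for lp_id lies in a stored range depends,
        # directly or transitively, on a rolled-back event of that LP.
        return any(lo <= counter_value <= hi
                   for lo, hi in self.ranges.get(lp_id, []))
```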
Figure 7 shows the steps performed during rollback due
to event e at LP P i . Upon rollback, an LP must send cancellation
messages to undo the events that have been sent to other
LPs. This is performed in the procedure CANCEL(e) (Fig-
ure 8). Each LP maintains a data structure called CANCEL
RANGE LIST. This data structure contains the list of
ranges with an LP id as the index. This list is used to
ignore messages whose VC component value at index i
of the timestamp lies within this specified range of event
counter values. The CANCEL RANGE LIST is built and
maintained by the LP based on specific rules. Procedure
ADD TO CANCEL RANGE LIST (Figure 9) performs the
necessary rules to add the range information from a CANCEL
MESSAGE (cancellation message). If a cancellation
message has a signature different from other cancellation
messages, then this cancellation message is added to the
CANCEL RANGE LIST. If a new cancellation message
has the same signature as that of any CANCEL MESSAGE
received previously and its VC component is concurrent
to the VC component of the new cancellation message,
then the new cancellation message range is added to CANCEL
RANGE LIST. This implies that this cancellation
message is due to anti-message generation in a different
path of computation when compared to the cancellation
messages received with the same signature. When a cancellation
message is received with the same signature as that of
any previously received cancellation messages and its VC
component is less than the cancellation messages with the
same signature, then the new event counter range is added
to CANCEL RANGE LIST. In addition, the event counter
ranges with greater VC component (with same signature)
that were received before are removed. This rule is to avoid
rollbacks in an LP due to cascading rollbacks (since the recovery
operations for the initial rollback can remove all the
causally dependent events).
Procedure CHECK FOR ROLLBACK (Figure 10) performs
the check to see if the LP must rollback for the cancellation
message. An LP removes all the causally dependent
events upon receiving a positive straggler or cancellation
message. Hence an LP checks to see if the cancellation
message informs a different causality error from the
one learnt by the LP. The rules to check for rollback are
similar to the rules in ADD TO CANCEL RANGE LIST and in
addition checks if the VT component is less than the current
simulation time.
Procedure CANCEL LOCAL EVENT (Figure 11) cancels
the events causally dependent on the rolled-back
events. This is performed by checking if the events
in the input queue lie within the event counter range
specified by the cancellation information in the CANCEL
RANGE LIST. Events within the rollback range
specified by CANCEL MESSAGE at the index i of the
VC component of timestamp are either to be rolled-back or
causally dependent on the events to be rolled-back. CANCEL
LOCAL EVENT checks all the events in the input
queue for this condition and removes them. This procedure
performs recovery operation for the current cancellation
message and pro-active recovery operations for the cancellation
messages that will be received due to this cancellation
message and therefore avoids cascading rollbacks.
CANCEL(e) BEGIN
IF e is a CANCEL MESSAGE
THEN ADD TO CANCEL RANGE LIST(e);
Cancel(e) = {er | er has to be rolled back};
S cancel = {LPs that were sent events to be rolled back by e};
Find Rollback time RollbackTime k for
objects in S cancel ;
determine emin and emax from Cancel(e);
IF e is a CANCEL MESSAGE
THEN reuse e.SIGNATURE, ELSE create a new SIGNATURE;
send cancelmessage = (SIGNATURE, emin, emax, TC(P i ))
to each LP in S cancel ;
CANCEL LOCAL EVENTS;
ADD TO CANCEL RANGE LIST(cancelmessage);
END
Figure 8. Canceling events due to event e
ADD TO CANCEL RANGE LIST(e) BEGIN
IF e.SIGNATURE differs from the signature of every entry in CANCEL RANGE LIST
THEN add e to CANCEL RANGE LIST;
IF ∃ cm, cm ∈ CANCEL RANGE LIST, with cm.SIGNATURE = e.SIGNATURE
and e.VC < cm.VC
THEN remove all such cm and
add e to CANCEL RANGE LIST;
IF ∃ cm, cm ∈ CANCEL RANGE LIST, with cm.SIGNATURE = e.SIGNATURE
and e.VC concurrent with cm.VC
THEN add e to CANCEL RANGE LIST;
END
Figure 9. LP P i checking a Cancel Message to add to the Cancel Range List
CHECK FOR ROLLBACK(e) BEGIN
IF e is a CANCEL MESSAGE THEN
IF e informs a causality error not yet recovered from
and e.TC.VT < current simulation time
THEN Rollback is true;
ELSE IF e.TC.VT < current simulation time
THEN Rollback is true;
END
Figure 10. LP P i checking if it has to rollback due to event e
CANCEL LOCAL EVENTS(e) BEGIN
event countermin = e.event countermin;
event countermax = e.event countermax;
Remove all events in the input queue
whose timestamp is within the range
(event countermin, event countermax)
at index j of the vector counter;
END
Figure 11. LP P i canceling events in its input queue due to CANCEL MESSAGE e
INSERT(e) BEGIN
IF e is a CANCEL MESSAGE THEN
ADD TO CANCEL RANGE LIST(e);
ELSE IF e does not lie within any range in CANCEL RANGE LIST
THEN insert e into input queue;
END
Figure 12. LP P i inserting event e
Procedure CANCEL cancels the events in its local input
queue and sends cancellation messages to other LPs to
inform them about causality errors. A cancellation message
with a new SIGNATURE is created only when an LP is
rolled-back due to a positive straggler event. On the other
hand, the signature of the received CANCEL MESSAGE is
used when this message sends new cancellation messages
to other LPs. This is to identify cancellation messages generated
due to inter-related and cascading rollbacks. Seeing
the signature, an LP can ignore a cancellation message if
recovery operations have been already performed. An LP
determines Cancel(e) that contains the set of events to be
rolled-back due to event e and the set S cancel that contains
the LP ids to send a cancellation message. The minimum
event counter value eventcountermin and maximum
event counter value eventcountermax are determined
from Cancel(e). The rollback time, RollbackTime k ,
to each LP k is determined from Cancel(e). The event
counter value (TC(P i ).VC[i]) of LP i is incremented
initially in the procedure and the VC component's
value is updated to sup(TC(P i ).VC, TC(e).VC)
before sending the CANCEL MESSAGE. Procedures
ADD TO CANCEL RANGE LIST and CANCEL
LOCAL EVENTS are called in procedure CANCEL
to update the data structure having the cancellation range
information and to cancel out the events in the input queue.
While inserting an event (Figure 12), the event
is checked against the ranges specified in the CANCEL
RANGE LIST and events are ignored if they lie
within any one of the ranges. This is to handle events that
have been in transit at the time of canceling events in the
local input queue.
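A minimal sketch of the filtering performed by CANCEL LOCAL EVENTS and INSERT is given below; it reuses the hypothetical CancelRangeList from the earlier sketch and assumes each event object carries a vector counter in a field named vc.

```python
def cancel_local_events(input_queue, cancel_ranges, sender_id):
    """Drop queued events whose VC entry for sender_id falls in a canceled range."""
    return [ev for ev in input_queue
            if not cancel_ranges.covers(sender_id, ev.vc[sender_id])]

def insert(input_queue, cancel_ranges, event):
    """Insert an arriving event unless it depends on already-canceled events."""
    if any(cancel_ranges.covers(lp, event.vc[lp])
           for lp in cancel_ranges.ranges):
        return                      # event was in transit; ignore it
    input_queue.append(event)
```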
Theorems 5.1-5.6 establish properties of the new cancellation
mechanism. Canceled(e) introduced in the following
theorems defines the set of events that are rolled-back due
to receiving event e in LP i.
Theorem 5.1 Given CANCEL MESSAGE cm 1
Theorem 5.2 Given CANCEL MESSAGE cm
Theorem 5.3 Given CANCEL MESSAGE cm
9
Theorem 5.4 Procedure CANCEL LOCAL EVENTS removes
the events from input queue of an LP for all CANCEL
MESSAGE and event stragglers that have been received
Theorem 5.5 Procedure INSERT inserts only events that
are not causally dependent on events that have been canceled
Theorem 5.6 Procedure CANCEL at LP P i generates cancellation
messages that cancel only the events that are
causally dependent on the events canceled during the call
to CANCEL(e).
We can see that the cancellation mechanism is efficient
in removing all the events in the input queue that are to
be undone either directly or indirectly due to a rollback
in causally preceding LP. Figure 13 shows the update of
the Total Clock value in the processes and the cancellation
mechanism through a space-time diagram. The timestamps
consist of a VT component followed by a 3-element vector
counter for a three-object simulation. Upon receiving a straggler
from process P2, process P0 sends a cancellation message
to P1 and discards the events from other objects that
are causally dependent on the rolled-back events. This saves
a huge amount of computational resources that would have
been spent on processing those events. In addition, the constant
check on the events that are inserted into the input queue
removes any extra overhead incurred on the messages that
will be rolled-back eventually. LPs in the critical path are
frequently rolled back and are exposed to cascading
rollbacks. The huge amount of time spent on such uncommitted
computations can be avoided by employing the cancellation
mechanism with Total Clocks.
Figure 13. Space Time diagram explaining Total Clock update and Cancellation
6 Conclusion
The Total Clocks representation is suitable for models that
spend most of their time recovering from rollbacks. The Total
Clocks representation is also useful for making decisions
among LPs in the critical path of the simulation that are
exposed to frequent rollbacks. Total Clocks inherit all the
pros and cons of vector clocks in addition to the applicability
in the field of Time Warp simulation. Several researchers
have found efficient and practical implementations of vector
clocks [19, 6, 9]. Studies performed by Chetlur et al. [2]
found that the communication overhead caused by the
size of a message is less significant than the overhead caused by the
frequency of calls to the communication subsystem. The
overhead of communication for messages within a range of
message sizes remains the same. With optimizations that
propagate only changes to values of the vector counters, the
message sizes can be kept within the same range as that of
messages without Total Clocks representation.
The Cancel Range List maintained by the LP to ig-
nore/accept incoming events may grow large if its entries
are not pruned frequently. A simulation time can be attached
to the CANCEL MESSAGE that specifies the time
up to which the event counter range check has to be per-
formed. This time can be the maximum of the VT component
of rolled-back events. Cancel range information could
be removed from the CANCEL RANGE LIST once GVT
sweeps past this VT value.
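A minimal sketch of this pruning rule, under the assumption that each stored range also records the VT bound mentioned above as a third component, could look as follows; the helper name prune_on_gvt is hypothetical.

```python
def prune_on_gvt(cancel_range_list, gvt):
    """Discard cancel ranges whose attached VT bound has been passed by GVT."""
    for lp_id, ranges in list(cancel_range_list.items()):
        # Each entry is assumed to be (counter_min, counter_max, vt_bound); the
        # range check is only needed until GVT sweeps past vt_bound.
        kept = [r for r in ranges if r[2] > gvt]
        if kept:
            cancel_range_list[lp_id] = kept
        else:
            del cancel_range_list[lp_id]
```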
In this paper, we have shown the representation of causal
information using Total Clocks and the utility of such a
representation in cancellation mechanism. We presented
a cancellation mechanism that exploits Total Clocks repre-
sentation. The cancellation mechanism described reduces
to an LP sending a message on all its output channels about the
rollback of its predecessors, and it can cancel events received
from its predecessors that could also be rolled back. This
cancellation mechanism avoids cascading and inter-related
rollbacks. Further exploration is under way to exploit the Total
Clocks representation in other Time Warp algorithms such as
state saving and other synchronization mechanisms.
--R
Concerning the size of logical clocks in distributed systems.
Optimizing communication in Time-Warp simula- tors
System knowledge acquisition in parallel discrete event simulation.
Logical time in distributed systems.
Causal distributed breakpoints.
Parallel discrete event simulation.
Rollback mechanisms for optimistic distributed simulation systems.
Dependency tracking and filtering in distributed computations.
Virtual time.
A rollback algorithm for optimistic distributed simulation sys- tems
Virtual time and global states of distributed sys- tems
A comparative analysis of various Time Warp algorithms implemented in the WARPED simulation kernel.
Dynamic cancellation: Selecting Time Warp cancellation strategies at runtime.
Logical time: Capturing causality in distributed systems.
Cancellation strategies in optimistic execution systems.
Discarding obsolete information in a replicated database system.
An efficient implementation of vector clocks.
Efficient solutions to the replicated log and dictionary problems.
--TR
Virtual time
Discarding obsolete information in a replicated database system
Parallel discrete event simulation
Logical Time in Distributed Computing Systems
Concerning the size of logical clocks in distributed systems
An efficient implementation of vector clocks
Logical Time
Breadth-first rollback in spatially explicit simulations
Optimizing communication in time-warp simulators
Wolf
Time, clocks, and the ordering of events in a distributed system
Efficient solutions to the replicated log and dictionary problems
A Comparative Analysis of Various Time Warp Algorithms Implemented in the WARPED Simulation Kernel
--CTR
Yi Zeng , Wentong Cai , Stephen J Turner, Batch based cancellation: a rollback optimal cancellation scheme in time warp simulations, Proceedings of the eighteenth workshop on Parallel and distributed simulation, May 16-19, 2004, Kufstein, Austria
Nasser Kalantery, Time warp - connection oriented, Proceedings of the eighteenth workshop on Parallel and distributed simulation, May 16-19, 2004, Kufstein, Austria
Malolan Chetlur , Philip A. Wilsey, Causality information and fossil collection in timewarp simulations, Proceedings of the 37th conference on Winter simulation, December 03-06, 2006, Monterey, California
Hussam M. Soliman Ramadan, Throttled lazy cancellation in time warp simulation, Proceedings of the 18th conference on Proceedings of the 18th IASTED International Conference: modelling and simulation, p.166-171, May 30-June 01, 2007, Montreal, Canada
Yi Zeng , Wentong Cai , Stephen J. Turner, Parallel distributed simulation and modeling methods: causal order based time warp: a tradeoff of optimism, Proceedings of the 35th conference on Winter simulation: driving innovation, December 07-10, 2003, New Orleans, Louisiana | time warp;logical time;vector clocks;virtual time |
375847 | Concurrent threads and optimal parallel minimum spanning trees algorithm. | This paper resolves a long-standing open problem on whether the concurrent write capability of parallel random access machine (PRAM) is essential for solving fundamental graph problems like connected components and minimum spanning trees in O(log n) time. Specifically, we present a new algorithm to solve these problems in O(log n) time using a linear number of processors on the exclusive-read exclusive-write PRAM. The logarithmic time bound is actually optimal since it is well known that even computing the OR of n bits requires Ω(log n) time on the exclusive-write PRAM. The efficiency achieved by the new algorithm is based on a new schedule which can exploit a high degree of parallelism. | INTRODUCTION
Given a weighted undirected graph G with n vertices and m edges, the minimum
spanning tree (MST) problem is to nd a spanning tree (or spanning forest) of G
with the smallest possible sum of edge weights. This problem has a rich history.
A preliminary version of this paper appeared in the proceedings of the Tenth Annual ACM-SIAM
Symposium on Discrete Algorithms (Baltimore, Maryland). ACM, New York, SIAM, Philadel-
phia, pp. 225-234.
This work was supported in part by Hong Kong RGC Grant HKU-289/95E.
Address: Ka Wong Chong and Tak-Wah Lam, Department of Computer Science and Information
Systems, The University of Hong Kong, Hong Kong. Email: fkwchong,twlamg@csis.hku.hk; Yijie
Han, Computer Science Telecommunications Program, University of Missouri { Kansas City, 5100
Rockhill Road, Kansas, MO 64110, USA. Email: han@cstp.umkc.edu.
Sequential MST algorithms running in O(m log n) time were known a few decades
ago (see Tarjan [1983] for a survey). Subsequently, a number of more efficient MST
algorithms have been published. In particular, Fredman and Tarjan [1987] gave an
algorithm running in O(m β(m, n)) time, where β(m, n) = min{i | log^(i) n ≤ m/n}.
This time complexity was improved to O(m log β(m, n)) by Gabow, Galil, Spencer,
and Tarjan [1986]. Chazelle [1997] presented an even faster MST algorithm with
time complexity O(m α(m, n) log α(m, n)), where α(m, n) is the inverse Ackermann
function. Recently, Chazelle [1999] improved his algorithm to run in O(m α(m, n))
time, and later Pettie [1999] independently devised a similar algorithm with the
same time complexity. More recently, Pettie and Ramachandran [2000] obtained
an algorithm running in optimal time. A simple randomized algorithm running in
linear expected time has also been found [Karger et al. 1995].
In the parallel context, the MST problem is closely related to the connected component
(CC) problem, which is to nd the connected components of an undirected
graph. The CC problem actually admits a faster algorithm in the sequential con-
text, yet the two problems can be solved by similar techniques on various models
of parallel random access machines (see the surveys by JaJa [1992] and Karp and
Ramachandran [1990]). With respect to the model with concurrent write capability
(i.e., processors can write into the same shared memory location simultaneously),
both problems can be solved in O(log n) time using n+m processors [Awerbuch and
Shiloach 1987; Cole and Vishkin 1986]. Using randomization, Gazit's algorithm
[1986] can solve the CC problem in O(log n) expected time using (n + m)/log n
processors. The work of this algorithm (defined as the time-processor product) is
O(n + m) and thus optimal. Later, Cole et al. [1996] obtained the same result for
the MST problem.
For the exclusive write models (including both concurrent-read exclusive-write
and exclusive-read exclusive-write PRAMs), O(log^2 n) time algorithms for the CC
and MST problems were developed two decades ago [Chin et al. 1982; Hirschberg
et al. 1979]. For a while, it was believed that exclusive write models could not
overcome the O(log^2 n) time bound. The first breakthrough was due to Johnson
and Metaxas [1991, 1992]; they devised O(log^1.5 n) time algorithms for the CC
problem and the MST problem. These results were improved by Chong and Lam
[1993] and Chong [1996] to O(log n log log n) time. If randomization is allowed, the
time or the work can be further improved. In particular, Karger et al. [1995] showed
that the CC problem can be solved in O(log n) expected time, and later Halperin
and Zwick [1996] improved the work to linear. For the MST problem, Karger [1995]
obtained a randomized algorithm using O(log n) expected time (and super-linear
work), and Poon and Ramachandran [1997] gave a randomized algorithm using
linear expected work and O(log n · log log n · 2^(log* n)) expected time.
Another approach stems from the fact that deterministic space bounds for the
st-connectivity problem immediately imply identical time bounds for EREW algorithms
for the CC problem. Nisan et al. [1992] have shown that the st-connectivity
problem can be solved deterministically using O(log^1.5 n) space, and Armoni et al.
[1997] further improved the bound to O(log^(4/3) n). These results imply EREW
algorithms for solving the CC problem in O(log^1.5 n) time and O(log^(4/3) n) time,
respectively.
Prior to our work, it had been open whether the CC and MST problems could be
solved deterministically in O(log n) time on the exclusive write models. Notice that
Ω(log n) is optimal since these graph problems are at least as hard as computing
the OR of n bits. Cook et al. [1986] have proven that the latter requires Ω(log n)
time on the CREW or EREW PRAM no matter how many processors are used.
Existing MST algorithms (and CC algorithms) are difficult to improve because of
the locking among the processors. As the processors work on different parts of the
graph having different densities, the progress of the processors is not uniform, yet
the processors have to coordinate closely in order to take advantage of the results
computed by each other. As a result, many processors often wait rather than doing
useful computation. This paper presents a new parallel paradigm for solving the
MST problem, which requires minimal coordination among the processors so as to
fully utilize the parallelism. Based on new insight into the structure of minimum
spanning trees, we show that this paradigm can be implemented on the EREW PRAM,
solving the MST problem in O(log n) time using n + m processors. The algorithm
is deterministic in nature and does not require special operations on edge weights
(other than comparison).
Finding connected components or minimum spanning trees is often a key step in
the parallel algorithms for other graph problems (see e.g., Miller and Ramachandran
[1986]; Maon et al. [1986]; Tarjan and Vishkin [1985]; Vishkin [1985]). With
our new MST algorithm, some of these parallel algorithms can be immediately
improved to run in optimal (i.e., O(log n)) time without using concurrent write
(e.g., biconnectivity [Tarjan and Vishkin 1985] and ear decomposition [Miller and
Ramachandran 1986]).
From a theoretical point of view, our result illustrates that the concurrent write
capability is not essential for solving a number of fundamental graph problems effi-
ciently. Notice that EREW algorithms are actually more practical in the sense
that they can be adapted to other more realistic parallel models like the Queuing
Shared Memory (QSM) [Gibbons et al. 1997] and the Bulk Synchronous Parallel
(BSP) model [Valiant 1990]. The latter is a distributed memory model of parallel
computation. Gibbon et al. [1997] showed that an EREW PRAM algorithm can be
simulated on the QSM model with a slow down by a factor of g, where g is the band-width
parameter of the QSM model. Such a simulation is, however, not known for
the CRCW PRAM. Thus, our result implies that the MST problem can be solved
efficiently on the QSM model in O(g log n) time using a linear number of processors.
Furthermore, Gibbon et al. [1997] derived a randomized work-preserving simulation
of a QSM algorithm with a logarithmic slow down on the BSP model.
The rest of the paper is organized as follows. Section 2 reviews several basic
concepts and introduces a notion called concurrent threads for nding minimum
spanning trees in parallel. Section 3 describes the schedule used by the threads,
illustrating a limited form of pipelining (which has a
flavor similar to the pipelined
merge-sort algorithm by Cole [1988]). Section 4 lays down the detailed requirement
for each thread. Section 5 shows the details of the algorithm. To simplify the
discussion, we first focus on the CREW PRAM, showing how to solve the MST
problem in O(log n) time using (n + m) log n processors. In Section 6 we adapt the
algorithm to run on the EREW PRAM and reduce the processor bound to linear.
Remark: Very recently, Pettie and Ramachandran [1999] made use of the result
in this paper to further improve existing randomized MST algorithms. Precisely,
their algorithm is the first one to run, with high probability, in O(log n) time and
linear work on the EREW PRAM.
2. BASICS OF PARALLEL MST ALGORITHMS: PAST AND PRESENT
In this section we review a classical approach to finding an MST. Based on this
approach, we can easily contrast our new MST algorithm with existing ones.
We assume that the input graph G is given in the form of adjacency lists. Consider
any edge e = (u, v) in G. Note that e appears in the adjacency lists of u and
v. We call each copy of e the mate of the other. When we need to distinguish
them, we use the notations ⟨u, v⟩ and ⟨v, u⟩ to signify that the edge originates from
u and v, respectively. The weight of e, which can be any real number, is denoted
by w(e) or w(u; v). Without loss of generality, we assume that the edge weights
are all distinct. Thus, G has a unique minimum spanning tree, which is denoted
by T_G throughout this paper. We also assume that G is connected (otherwise, our
algorithm finds the minimum spanning forest of G).
Let B be a subset of edges in G which contains no cycle. B induces a set of trees F
in a natural sense: two vertices in G are in the same tree if
they are connected by edges of B. If B contains no edge incident on a vertex v,
then v itself forms a tree.
Definition: Consider any edge e = (u, v) in G and any tree T ∈ F. If both u
and v belong to T , e is called an internal edge of T ; if only one of u and v belongs
to T , e is called an external edge. Note that an edge of T is also an internal edge
of T , but the converse may not be true.
Definition: B is said to be an α-forest if each tree T ∈ F has at least α vertices.
For example, if B is the empty set then B is a 1-forest of G; a spanning tree such
as T_G is an n-forest. Consider a set B of edges chosen from T_G. Assume that B is
an α-forest. We can augment B to give a 2α-forest using a greedy approach: Let F'
be an arbitrary subset of F such that F' includes all trees T ∈ F with fewer than
2α vertices (F − F' may contain trees with 2α or more vertices). For every tree in F',
we pick its minimum external edge. Denote B' as this set of edges.
Lemma 1. [JaJa 1992, Lemma 5.4] B' consists of edges in T_G only.
Lemma 2. B ∪ B' is a 2α-forest.
Proof. Every tree in F − F' already contains at least 2α vertices. Consider a
tree T in F'. Let ⟨u, v⟩ be the minimum external edge of T, where v belongs to
another tree T' ∈ F. With respect to B ∪ B', all the vertices in T and T' are
connected together. Among the trees induced by B ∪ B', there is one including
T and T', and it contains at least 2α vertices. Therefore, B ∪ B' is a 2α-forest of
G.
Based on Lemmas 1 and 2, we can find T_G in ⌊log n⌋ stages as follows:
Notation: Let B[p, q] denote the union B_p ∪ B_{p+1} ∪ ... ∪ B_q if p ≤ q,
and the empty set otherwise.
procedure Iterative-MST
(1) for i := 1 to ⌊log n⌋ do /* Stage i */
(a) Let F be the set of trees induced by B[1, i−1] on G. Let F' be an
arbitrary subset of F such that F' includes all trees T ∈ F with
fewer than 2^i vertices.
(b) B_i := {e | e is the minimum external edge of some tree T ∈ F'}
(2) return B[1, ⌊log n⌋].
At Stage i, different strategies for choosing the set F' in Step 1(a) may lead
to different B_i's. Nevertheless, B[1, i] is always a subset of T_G and induces a 2^i-
forest. In particular, B[1, ⌊log n⌋] induces a 2^⌊log n⌋-forest, in which each tree, by
definition, contains at least 2^⌊log n⌋ > n/2 vertices. In other words, B[1, ⌊log n⌋]
induces exactly one tree, which is equal to T_G. Using standard parallel algorithmic
techniques, each stage can be implemented in O(log n) time on the EREW PRAM
using a linear number of processors (see e.g., JaJa [1992]). Therefore, T_G can
be found in O(log^2 n) time. In fact, most parallel algorithms for finding MST
(including those CRCW PRAM algorithms) are based on a similar approach (see
e.g., Awerbuch and Shiloach [1987]; Chin et al. [1982]; Cole and Vishkin [1986]; Johnson and
Metaxas [1991, 1992]; Chong and Lam [1993]; Chong [1996]; Karger et al. [1995]).
These parallel algorithms are "sequential" in the sense that the computation of B_i
starts only after B_{i−1} is available (see Figure 1(a)).
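For concreteness, the sketch below runs the stages of Iterative-MST sequentially in Python, using a simple union-find structure to represent the trees induced by B[1, i−1]; the data layout (edge tuples, the DSU helper) is an illustrative choice rather than the paper's representation.

```python
from math import floor, log2

class DSU:
    def __init__(self, n):
        self.parent = list(range(n))
        self.size = [1] * n

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return
        if self.size[ra] < self.size[rb]:
            ra, rb = rb, ra
        self.parent[rb] = ra
        self.size[ra] += self.size[rb]

def iterative_mst(n, edges):
    """edges: list of (weight, u, v) with distinct weights; returns the MST edge set."""
    dsu, chosen = DSU(n), set()
    for i in range(1, floor(log2(n)) + 1):         # Stage i
        # For every tree with fewer than 2^i vertices, pick its minimum external edge.
        best = {}
        for w, u, v in edges:
            ru, rv = dsu.find(u), dsu.find(v)
            if ru == rv:
                continue                           # internal edge
            for r in (ru, rv):
                if dsu.size[r] < 2 ** i and (r not in best or w < best[r][0]):
                    best[r] = (w, u, v)
        for w, u, v in best.values():              # this collection is B_i
            chosen.add((w, u, v))
            dsu.union(u, v)
    return chosen
```

Each stage here is sequential; the point of the schedule introduced next is that Thread i can report B_i only O(1) time after B_{i−1}, instead of waiting for a full stage.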
An innovative idea exploited by our MST algorithm is to use concurrent threads
to compute the B_i's. Threads are groups of processors working on different tasks,
the computation of the threads being independent of each other. In our algorithm,
there are ⌊log n⌋ concurrent threads, each finding a particular B_i. These threads
are characterized by the fact that the thread for computing B_i starts long before
the thread for computing B_{i−1} is completed, and actually outputs B_i in O(1) time
after B_{i−1} is available (see Figure 1(b)). As a result, T_G can be found in O(log n)
time.
Our algorithm takes advantage of an interesting property of the sets
B_1, B_2, ..., B_⌊log n⌋. This property actually holds with respect to most of the deterministic
algorithms for finding an MST, though it has not been mentioned explicitly in the literature.
Lemma 3. Let T be one of the trees induced by B[1, k], for any 0 ≤ k ≤ ⌊log n⌋.
Let e_T be the minimum external edge of T. For any subtree (i.e., connected subgraph)
S of T, the minimum external edge of S is either e_T or an edge of T.
Proof. See Appendix.
3. OVERVIEW AND SCHEDULE
Our algorithm consists of ⌊log n⌋ threads running concurrently. For 1 ≤ i ≤ ⌊log n⌋,
Thread i aims to find a set B_i which is one of the possible sets computed at Stage i
of the procedure Iterative-MST. To be precise, let F be the set of trees induced by
B[1, i−1], and let F' be an arbitrary subset of F including all trees with fewer than
2^i vertices; B_i contains the minimum external edges of the trees in F'. Thread i
Fig. 1. (a) The iterative approach. (b) The concurrent-thread approach.
receives the output of Threads 1 to i−1 (i.e., B_1, ..., B_{i−1}) incrementally, but
never looks at their computation. After B_{i−1} becomes available, Thread i reports B_i in a
further O(1) time.
3.1 Examples
Before showing the detailed schedule of Thread i, we give two examples illustrating
how Thread i can speed up the computation of B_i. In Examples 1 and 2, Thread i
computes B_i in time ci and (1/2)ci,
respectively, after Thread (i−1) has computed B_{i−1},
where c is some fixed constant. To simplify our discussion, these examples assume
that the adjacency lists of a set of vertices can be "merged" into a single list in O(1)
time. At the end of this section, we will explain why this is infeasible in our
implementation and highlight our novel observations and techniques to evade the
problem.
Thread i starts with a set Q_0 of adjacency lists, where each list contains the
2^i − 1 smallest edges incident on a vertex in G. The edges kept in Q_0 are already
sufficient for computing B_i. The reason is as follows: Consider any tree T induced
by B[1, i−1]. Assume the minimum external edge e_T of T is incident on a vertex
v of T. If T contains fewer than 2^i vertices, at most 2^i − 2 edges incident on v are
internal edges of T. Thus, the 2^i − 1 smallest edges incident on v must include e_T.
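As a concrete illustration of how Q_0 could be built, here is a minimal Python sketch; the dictionary layout and the rule that the threshold is the weight of the smallest truncated edge follow the description in Section 4 below, and the helper name build_q0 is an assumption.

```python
def build_q0(adjacency, i):
    """For each vertex, keep the 2^i - 1 smallest incident edges, plus the
    threshold h(L): infinity if nothing was truncated, otherwise the weight
    of the smallest truncated edge."""
    limit = 2 ** i - 1
    q0 = {}
    for v, edges in adjacency.items():
        ordered = sorted(edges)                  # edges are (weight, neighbor) pairs
        kept = ordered[:limit]
        threshold = ordered[limit][0] if len(ordered) > limit else float('inf')
        q0[v] = (kept, threshold)
    return q0
```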
Example 1: This is a straightforward implementation of Lemma 2. Thread i
starts only when B_1, ..., B_{i−1} are all available. Let F be the set of trees induced
by B[1, i−1]. Suppose we can merge the adjacency lists of the vertices in each tree,
forming a single combined adjacency list. Notice that if a tree in F has fewer than
2^i vertices, its combined adjacency list will contain at most (2^i − 1)^2 edges. For
each combined list with at most (2^i − 1)^2 edges, we can determine the minimum
external edge in time ci, where c is some suitable constant. The collection of such
minimum external edges is reported as B_i. We observe that a combined adjacency
list with more than (2^i − 1)^2 edges represents a tree containing at least 2^i vertices.
By the definition of B_i, it is not necessary to report the minimum external edge of
such a tree.
Example 2: This example is slightly more complex, illustrating how Thread i
works in an "incremental" manner. Thread i starts off as soon as B_{i/2} has been
computed. At this point, only B_1, ..., B_{i/2} are available and Thread i is not ready
to compute B_i. Nevertheless, it performs some preprocessing (called Phase I below)
so that when B_{i/2+1}, ..., B_{i−1} become available, the computation of B_i can be
speeded up to run in time (1/2)ci only (Phase II).
Phase I: Let F' be the set of trees induced by B[1, i/2]. Again, suppose we can
merge the adjacency lists in Q_0 for every tree in F', forming another set Q_0 of
adjacency lists. By the definition of B[1, i/2], each tree in F' contains at least 2^{i/2}
vertices. For each tree in F' with fewer than 2^i vertices, its combined adjacency list
contains at most (2^i − 1)^2 edges. We extract from the list the 2^{i/2} − 1 smallest edges
such that each of them connects to a distinct tree in F'. These edges are sufficient
for finding B_i (the argument is an extension of the argument in Example 1). The
computation takes time ci only.
Phase II: When B_{i/2+1}, ..., B_{i−1} are available, we compute B_i based on Q_0 as
follows: Edges in B[i/2 + 1, i − 1] further connect the trees in F', forming a set F
of bigger trees. Suppose we can merge the lists in Q_0 for every tree in F. Notice
that if a tree in F contains fewer than 2^i vertices, it is composed of at most 2^{i/2} − 1
trees in F' and its combined adjacency list contains no more than (2^{i/2} − 1)^2 edges.
In this case, we can find the minimum external edge in at most a further (1/2)ci
time. B_i is the set of minimum external edges just found. In conclusion, after
B_{i−1} is computed, B_i is found in time (1/2)ci.
Remark: The set B_i found by Examples 1 and 2 may be different. Yet in either
case, B[1, i] is a subset of T_G and a 2^i-forest.
3.2 The schedule
Our MST algorithm is based on a generalization of the above ideas. The computation
of Thread i is divided into ⌊log i⌋ phases. When Thread i−1 has computed
B_{i−1}, Thread i is about to enter its last phase, which takes O(1) time to report B_i. See
Figure 1(b).
Globally speaking, our MST algorithm runs in ⌊log n⌋ supersteps, where each
superstep lasts O(1) time. In particular, Thread i delivers B_i at the end of the ith
superstep. Let us first consider i a power of two. Phase 1 of Thread i starts at the
(i/2 + 1)th superstep (when B_1, ..., B_{i/2} are available). The computation takes
no more than i/4 supersteps, ending at the (i/2 + i/4)th superstep. Phase 2 starts
at the (i/2 + i/4 + 1)th superstep (when B_{i/2+1}, ..., B_{i/2+i/4} are available)
and uses i/8 supersteps. Each subsequent phase uses half as many supersteps as
the preceding phase. The last phase (Phase log i) starts and ends within the ith
superstep. See Figure 2.
For general i, Thread i runs in blog ic phases. To mark the starting time of each
Fig. 2. The schedule of Thread i, where i is a power of 2.
phase, we define the sequence a_0 = 0 and a_j = i − ⌊i/2^j⌋ for 1 ≤ j ≤ ⌊log i⌋,
with a_{⌊log i⌋+1} = i. Phase j of Thread i, where 1 ≤ j ≤
⌊log i⌋, starts at the (a_j + 1)th superstep and uses a_{j+1} − a_j
supersteps. Phase j has to handle the edge sets B_{a_{j−1}+1}, ..., B_{a_j}, which
are made available by other threads during the execution of Phase (j−1).
3.3 Merging
In the above examples, we assume that for every tree in F , we can merge the
adjacency lists of its vertices (or subtrees in Phase II of Example 2) into a single
list efficiently and the time does not depend on the total length. This can be
done via the technique introduced by Tarjan and Vishkin [1985]. Let us look at
an example. Suppose a tree T contains an edge e between two vertices u and v.
Assume that the adjacency lists of u and v contain e and its mate respectively.
The two lists can be combined by having e and its mate exchange their successors
(see Figure 3). If every edge of T and its mate exchange their successors in their
adjacency lists, we will get a combined adjacency list for T in O(1) time. However,
the merging fails if any edge of T or its mate is not included in the corresponding
adjacency lists.
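The sketch below illustrates this O(1)-per-edge splicing on circular adjacency lists in Python; the node and field names are illustrative, and each edge record is assumed to hold a pointer to its mate in the other endpoint's list.

```python
class EdgeNode:
    """One copy of an edge <u, v> stored in u's circular adjacency list."""

    def __init__(self, u, v, weight):
        self.u, self.v, self.weight = u, v, weight
        self.succ = self      # successor in the circular list
        self.mate = None      # the copy <v, u> stored in v's list

def splice(edge):
    """Merge the two circular lists containing edge and its mate by
    exchanging their successors, as in the Tarjan-Vishkin technique."""
    mate = edge.mate
    edge.succ, mate.succ = mate.succ, edge.succ
```

If splice is applied (conceptually in parallel) to every tree edge whose two copies are both present, the lists of a tree's vertices are linked into one circular list; as noted above, the merge can fail to yield a single list when some copy has been truncated away.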
In our algorithm, we do not keep track of all the edges for each vertex (or subtree)
because of efficiency. For example, each adjacency list in Q_0 involves only the
2^i − 1 smallest edges incident on a vertex. With respect to a tree T, some of its edges and their
mates may not be present in the corresponding adjacency lists. Therefore, when
applying the O(1)-time merging technique, we may not be able to merge the adjacency
lists into one single list for representing T . Failing to form a single combined
adjacency list also complicates the extraction of essential edges (in particular, the
minimum external edges) for computing the set B i . In particular, we cannot easily
determine all the vertices belonging to T and identify the redundant edges, i.e.,
internal and extra multiple external edges, in the adjacency list of T .
Fig. 3. Merging a pair of adjacency lists Lu and Lv with respect to a common edge e.
Actually our MST algorithm does not insist on merging the adjacency lists into a
single list. A key idea here is that our algorithm can maintain all essential edges to
be included in just one particular combined adjacency list. Based on some structural
properties of minimum spanning trees, we can filter out redundant adjacency lists
to obtain a unique adjacency list for T (see Lemmas 5 and 9 in Section 5).
In the adjacency list representing T, internal edges can all be removed using a
technique based on "threshold" [Chong 1996]. The most intriguing part concerns
the extra multiple external edges. We find that it is not necessary to remove all of
them. Specifically, we show that those extra multiple external edges that cannot
be removed "easily" must have a bigger weight and their presence does not affect
the correctness of the computation.
In the next section, we will elaborate on the above ideas and formulate the
requirements for each phase so as to achieve the schedule.
4. REQUIREMENTS FOR A PHASE
In this section we specify formally what Thread i is expected to achieve in each
phase. Initially (in Phase 0), Thread i constructs a set Q 0 of adjacency lists. For
each vertex v in G, Q_0 contains a circular linked list L including the 2^i − 1 smallest
edges incident on v. In addition, L is assigned a threshold, denoted by h(L). If
L contains all the edges incident on v, h(L) = ∞; otherwise, h(L) is the weight of the
smallest edge truncated from L. In each of the ⌊log i⌋ phases, the adjacency lists
are further merged based on the newly arrived edge sets, and truncated according
to the length requirement. For each combined adjacency list, a new threshold
is computed. Intuitively, the threshold records the smallest edge that has been
truncated so far.
Consider Phase j, where 1 ≤ j ≤ ⌊log i⌋. It inherits a set Q_{j−1} of adjacency
lists from Phase j−1 and receives the edge sets B[a_{j−1}+1, a_j] (recall that a_j = i − ⌊i/2^j⌋).
Let F_j denote the set of trees induced by B[1, a_j]. Phase j aims at
producing a set Q_j of adjacency lists capturing the external edges of the trees in
F_j that are essential for the computation of B_i. Basically, we try to merge the
adjacency lists in Q_{j−1} with respect to B[a_{j−1}+1, a_j]. As mentioned before, this
merging process may produce more than one combined adjacency list for each tree
in F_j. Nevertheless, we strive to ensure that only one combined list is retained to
represent the tree; the rest are filtered out. In view of the time constraint imposed by the
schedule of Thread i, we also need a tight bound on the length of each remaining
adjacency list.
Let L be a list in Q j .
R1. L uniquely corresponds to a tree T ∈ F_j, storing only
the external edges of T. In this case, T is said to be represented by L in Q_j.
Some trees in F_j may not be represented by any lists in Q_j, but all trees
with fewer than 2^i vertices are represented.
R2. (length): L contains at most 2^{⌊i/2^j⌋} − 1 edges.
We will define what edges of a tree T ∈ F_j are essential and must be included
in L. Consider an external edge e of T that connects to another tree T' ∈ F_j. We
say that e is primary if, among all edges connecting T and T 0 , e has the smallest
weight. Otherwise, e is said to be secondary. Note that if the minimum spanning
tree of G contains an edge which is an external edge of both T and T 0 , it must be a
primary one. Ideally, only primary external edges should be retained in each list of
Q_j. Yet this is infeasible since Thread i starts with truncated adjacency lists
and we cannot identify and remove all the secondary external edges in each phase.
(Removing all internal edges, though non-trivial, is feasible.)
An important observation is that it is not necessary to remove all secondary external
edges. Based on a structural classification of light and heavy edges (defined
below), we find that all light secondary external edges can be removed easily. Af-
terwards, each list contains all the light primary external edges and possibly some
heavy secondary external edges. The set of light primary external edges may not
cover all primary external edges and its size can be much smaller than 2^{⌊i/2^j⌋} − 1.
Yet we will show that the set of light primary external edges suffices for computing
B_i, and the presence of heavy secondary external edges does not affect the
correctness.
Below we give the definition of light and heavy edges, which are based on the
notion of base.
Definition: Let T be a tree in F_j.
Let δ be any real number. A tree T' ∈ F_j is said to be δ-accessible to T if T' = T,
or there is another tree T'' ∈ F_j such that T'' is δ-accessible to T and connected
to T' by an edge with weight smaller than δ.
Let e be an external (or internal) edge of T. Define base(F_j, e) to be the set
{T' ∈ F_j | T' is w(e)-accessible to T}. The size of base(F_j, e), denoted
by ||base(F_j, e)||, is the total number of vertices in the trees involved.
Let e be an external (or internal) edge of T. We say that e is light if
||base(F_j, e)|| < 2^i; otherwise, e is heavy.
It follows from the above definition that a light edge of a tree T has a smaller
weight than a heavy edge of T. Also, a heavy edge of T will remain a heavy edge
in subsequent phases. More specifically, in any Phase k where k > j, if T is a
subtree of some tree X ∈ F_k, then for any external (or internal) edge e of T,
||base(F_k, e)|| ≥ ||base(F_j, e)||. Therefore, if e is heavy with respect to T then
it is also heavy with respect to X .
The following lemma gives an upper bound on the number of light primary external
edges of each tree in F j , which complies with the length requirement of the
lists in Q j .
Lemma 4. Any tree T ∈ F_j has at most 2^{⌊i/2^j⌋} − 1 light primary external edges.
Proof. Let x be the number of light primary external edges of T . Among the
light primary external edges of T, let e be the one with the biggest weight. The
set base(F_j, e) includes T and at least x − 1 trees adjacent to T. As B[1, a_j] is a
2^{a_j}-forest, every tree in F_j contains at least 2^{a_j} vertices. We have
||base(F_j, e)|| ≥ x · 2^{a_j}. By definition of a light edge, ||base(F_j, e)|| < 2^i.
Thus, x · 2^{a_j} < 2^i and x < 2^{i − a_j} = 2^{⌊i/2^j⌋}.
The following requirement specifies the essential edges to be kept in each list of
Q_j and characterizes those secondary external edges, if any, in each list.
R3. (base) Let T be the tree in F_j represented by a list L ∈ Q_j. All light
primary external edges of T are included in L, and secondary external edges
of T , if included in L, must be heavy.
Retaining only the light primary external edges in each list of Q_j is already sufficient
for the computation of B_i. In particular, let us consider the scenario at the
end of Phase ⌊log i⌋. For any tree T_s ∈ F_{⌊log i⌋} with fewer than 2^i vertices, the minimum
external edge e_Ts of T_s must be reported in B_i. Note that base(F_{⌊log i⌋}, e_Ts)
contains only T_s and thus has fewer than 2^i vertices; hence e_Ts is a light primary external
edge of T_s. In all previous phases k, F_k contains a subtree of T_s, denoted by
W, of which e_Ts is an external edge. Note that e_Ts is also a light primary external
edge of W (as a heavy edge remains a heavy edge subsequently).
On the other hand, at the end of Phase blog ic, if a tree T x 2 F blog ic contains 2 i or
more vertices, all its external edges are heavy and R3 cannot enforce the minimum
external edge e Tx of T x being kept in the list for T x . Fortunately, it is not necessary
for Thread i to report the minimum external edge for such a tree. The following
requirements for the threshold help us detect whether the minimum external edge
of T x has been removed. If so, we will not report anything for T x . Essentially, we
require that if e Tx or any primary external edge e of T x has been removed from
the list L x that represents T x , the threshold kept in L x is no bigger than w(e
(respectively, w(e)). Then the smallest edge in L x is e Tx if and only if its weight is
fewer than the threshold.
Let T be a tree in F j represented by a list L 2 Q j . The threshold of L
satises the following properties.
R4. (lower bound for the threshold) If h(L) 6= 1, then h(L) is equal to the
weight of a heavy internal or external edge of T .
R5. (upper bound for the threshold) Let e be an external edge of T not
included in L. If e is primary, then h(L) w(e).
(Our algorithm actually satises a stronger requirement that h(L) w(e) if
e is primary, or
e is secondary and the mate of e is still included in another list L 0 in Q j .)
In summary, R1 to R5 guarantee that at the end of Phase blog ic, for any tree
if T has fewer than 2 i vertices, its minimum external edge e T is the
only edge kept in a unique adjacency list representing T ; otherwise, T may or may
not be represented by any list. If T is represented by a list but e T has already been
removed, the threshold kept is at most w(e T ). Every external edge currently kept
in the list must have a weight greater than or equal to the threshold. Thus, we can
simply ignore the list for T .
It is easy to check that Q 0 satises the ve requirements for Phase 0. In the next
section we will give an algorithm that can satisfy these requirements after every
phase. Consequently, Thread i can report B i based on the edges in the lists in
12 K.W. Chong, Y. Han, and T.W. Lam
5. THE ALGORITHM
In this section we present the algorithmic details of Thread i, showing how to
merge and extract the adjacency lists in each phase. The discussion is inductive in
nature|for any j 1, we assume that Phase j 1 has produced a set of adjacency
lists satisfying the requirements R1-R5, and then show how Phase j computes a
new set of adjacency lists satisfying the requirements in O(i=2 j using a linear
number of processors.
Phase j inherits the set of adjacency lists Q j 1 from Phase j 1 and receives
the edges B[a To ease our discussion, we refer to B[a
INPUT. Notice that a list in Q represents one of the trees in F j 1 (recall that
denote the set of trees induced by B[1; a
Phase j merges the adjacency lists in Q according to how the trees in F j 1 are
connected by the edges in INPUT.
Consider an edge in INPUT. Denote W 1 and W 2 as the trees in F j 1
containing u and v respectively. Ideally, if e and its mate appear in the adjacency
lists of W 1 and W 2 respectively, the adjacency lists of W 1 and W 2 can be merged
easily in O(1) time. However, W 1 or W 2 might already be too large and not have
a representation in Q j 1 . Even if they are represented, the length requirement of
the adjacency lists may not allow e to be included. As a result, e may appear in
two separate lists in Q or in just one, or even in none; we call e a full, half, and
lost edge respectively. Accordingly, we partition INPUT into three sets, namely
Full-INPUT, Half-INPUT, and Lost-INPUT.
Phase j starts o by merging the lists in Q with respect to edges in Full-INPUT.
Let T be a tree in F j . Let W 1 be the trees in F j 1 that, together with
the edges in INPUT, constitute T . Note that some W i may not be represented by
a list in Q j 1 . Since the merging is done with respect to Full-INPUT, the adjacency
lists of W 1 present, may be merged into several lists instead
of a single one. Let merged lists. Each L i represents a
bigger subtree Z i of T , which is called a cluster below (see Figure 4). A cluster may
contain one or more W i . We distinguish one cluster, called the core cluster, such
that the minimum external edge e T of T is an external edge of that cluster. Note
that the minimum external edge of the core cluster may or may not be e T . For a
non-core cluster Z, the minimum external edge e Z of Z must be a tree edge of T
(by Lemma 3) and thus e Z is in INPUT. Moreover, e Z is not a full edge. Otherwise,
the merging should have operated on e Z , which then becomes an internal edge of a
bigger cluster.
The merged lists obviously need not satisfy the requirements for Q j . In the
following sections, we present the additional processing used to fulll the require-
ments. A summary of all the processing is given in Section 5.4. The discussion
of the processing of the merged lists is divided according to the sizes of the trees,
sketched as follows:
For each tree T 2 F j that contains fewer than 2 i vertices, there is a simple way
to ensure that exactly one merged list is retained in Q j . Edges in that list are
Optimal Parallel MST Algorithm 13
Fig. 4. Each Wx represents a tree in F j 1 . The dotted and solid lines represent half and
full edges in INPUT respectively. T is a tree formed by connecting the trees in F j 1 with
the edges in INPUT. Each Zy (called a cluster) is a subtree of T , formed by connecting
some Wx 's with full edges only. The adjacency lists of the Wx 's within each Zy can be
merged into a single list easily.
ltered to contain all the light primary external edges of T , and other secondary
external edges of T , if included, must be heavy.
For a tree T 0 2 F j that contains at least 2 i vertices, the above processing may
retain more than one merged lists. Here we put in an extra step to ensure that,
except possibly one, all merged lists for T 0 are removed.
The threshold of each remaining list is updated after retaining the 2 bi=2 j c 1
smallest edges. We show that the requirements for the threshold are satised no
matter whether the tree in concern contains fewer than 2 i vertices or not.
5.1 Trees in F j
with fewer than 2 i vertices
In this section we focus on each tree T 2 F j that contains fewer than 2 i vertices.
Denote by the merged lists representing the clusters of T . Observe that
each of these lists contains at most (2 bi=2 edges. Below, we derive an
e-cient way to nd a unique adjacency list for representing T , which contains all
light primary external edges of T .
First of all, we realize that every light primary external edge of T is also a light
primary external edge of a tree W in F j 1 and must be present in the adjacency
list that represents W in Q j 1 (by R3). Thus, all light primary external edges of
T (including the minimum external edge of T ) are present in some merged lists.
Unique Representation: Let L cc be the list in fL such that L cc contains
the minimum external edge e T of T . That is, L cc represents the core cluster
Z 0 of T . Our concern is how to remove all other lists in fL so that T
will be represented uniquely by L cc .
To e-ciently distinguish L cc from other lists, we make use of the properties
14 K.W. Chong, Y. Han, and T.W. Lam
stated in the following lemma. Let L nc be any list in fL g. Let
Z denote the cluster represented by L nc .
Lemma 5. (i) L cc does not contain any edge in Half-INPUT. (ii) L nc contains
at least one edge in Half-INPUT. In particular, the minimum external edge of Z is
in Half-INPUT.
Proof of Lemma 5(i). Assume to the contrary that L cc includes an edge
in Half-INPUT; more precisely, ha; bi is in L cc and hb; ai is not included in any list
in be the trees in F j 1 connected by e, where a 2 W and
. The edge e is a primary external edge of W , as well as of W 0 . Both W
and W 0 are subtrees of T , and W is also a subtree of Z 0 . Below we show that
contains at least 2 i vertices. The latter contradicts
the assumption about T . Thus, Lemma 5(i) follows.
W 0 is a subtree of T and contains less than 2 i vertices. By R1, Q contains
a list LW 0 representing W 0 . By R3, LW 0 contains all light primary external edges
of W 0 . The edge hb; ai is not included in LW 0 and must be heavy. Therefore,
Next, we want to show that all trees in base(F are subtrees of T . Dene
T a and T b to be the subtrees of T constructed by removing e from T (see Figure 5).
Assume that T a contains the vertex a and T b the vertex b. W , as well as Z 0 , is a
a
Fig. 5. T is partitioned into two subtrees Ta and T b , which are connected by e.
subtree of T a , and W 0 is a subtree of T b . By Lemma 3, the minimum external edge
of T b is either e T or e. The former case is impossible because e T is included in L cc
and must be an external edge of Z 0 . Thus, e is the minimum external edge of T b .
By denition of base, base(F cannot include any trees in F j 1 that are
outside T b . In other words, T b includes all subtrees in base(F must
have at least 2 i vertices, and so must T . A contradiction occurs.
Proof of Lemma 5(ii). Let e Z be the minimum external edge of Z. As e Z is a
tree edge of T , it is in INPUT but is not a full edge. In this case, we can further
show that e Z is actually a half edge and included in L nc , thus completing the proof.
Let W be the tree in F j 1 such that W is a component of Z and e Z is an external
edge of W . Note that e Z is primary external edge of W . Let LW denote the
adjacency list in Q representing W . Since e Z is the minimum external edge of
Optimal Parallel MST Algorithm 15
include trees in F j 1 that are outside Z, and thus it
has size less than 2 i . By R3, all light primary external edges of W including e Z are
present in LW . Therefore, e Z is in Half-INPUT, and L nc must have inherited e Z
from LW .
Using Lemma 5, we can easily retain L cc and remove all other merged lists L nc .
One might worry that some L nc might indeed contain some light primary external
edge of T and removing L nc is incorrect. This is actually impossible in view of the
following fact.
Lemma 6. For any external edge e of T that is included in L nc ,
Proof. Let e Z be the minimum external edge of Z. For any external edge e of
T that is included in L nc , e is also an external edge of Z and w(e) w(e Z ).
Let W and W 0 be the trees in F j 1 connected by e Z such that W 0 is not a
component of Z. As shown in the previous lemma, e Z is in Half-INPUT. Moreover,
e Z is a tree edge of T and is present only in the adjacency list of W . That is, W 0 is
a component of T and the adjacency list of W 0 in Q does not contain e Z . Note
that e Z is a primary external edge of W 0 . By R1 and R3, we can conclude that
e Z is a heavy external edge of W 0 and hence kbase(F Therefore,
By Lemma 6, L nc does not contain any light primary external edge of T . In
other words, all light primary external edges of T must be in L cc , which is the only
list retained.
Excluding All Light Internal and Secondary External Edges: L cc contains all the
light primary external edges of T and also some other edges. Because of the length
requirement (i.e. R2), we retain at most 2 bi=2 j c 1 edges of L cc . Note that the
light primary external edges may not be the smallest edges in L cc . Based on the
following two lemmas, we can remove all other light edges in L cc , which include the
light internal and secondary external edges of T . Then the light primary external
edges of T will be the smallest edges left in the list and retaining the 2 bi=2 j c 1
smallest edges will always include all the light primary external edges.
Lemma 7. Suppose L cc contains a light internal edge hu; vi of T . Then its mate,
hv; ui, also appears in L cc .
Proof. Recall that L cc is formed by merging the adjacency lists of some trees
in F j 1 . By R1, each of these lists does not contain any internal edge of the tree it
represents. If L cc contains a light internal edge hu; vi of T , the edge (u; v) must be
between two trees W and W 0 in F j 1 which are components of Z 0 . Assume that
are light external edges of W and W 0
respectively. Let LW and LW 0 be their adjacency lists in Q j 1 . As hu; vi appears in
appears in LW . By R3, all light edges found in LW , including hu; vi,
must be primary external edges of W . By symmetry, hv; ui is a primary external
edge of W 0 . By R3 again, hv; ui appears in LW 0 . Since L cc inherits the edges from
both LW and LW 0 , we conclude that both hu; vi and hv; ui appear in L cc .
Han, and T.W. Lam
Lemma 8. Suppose L cc contains a light secondary external edge e of T . Let e 0
be the corresponding primary external edge. Then e and e 0 both appear in L cc , and
their mates also both appear in another merged list L 0
cc , where L 0
cc represents the
core cluster of another tree T 0 2 F j .
Proof. Suppose L cc contains a light secondary external edge e of T . Assume
that e connects T to another tree T 0 2 F j , and e 0 is the primary external edge
between T and T 0 . As w(e 0 ) < w(e),
e
Fig. 6. e is a light secondary external edge of T and e 0 is the corresponding primary external
edge.
Thus, e o is also a light primary external edge of T and must be included in L cc . On
the other hand, since e is secondary, is equal to base(F
thus contains less than 2 i
vertices. After merging the lists in Q we obtain a merged list L 0
cc that includes
all light primary external edges of T 0 . Below we show that L 0
cc contains the mates
of e 0 and e.
Observe that kbase(F
both e 0 and e are light external edges of T 0 . As e 0 is also a primary external
edge of T 0 , e 0 (more precisely, its mate) must be included in L 0
cc .
Let W and W 0 be the two trees in F j 1 connected by e, where W is a subtree of
T and W 0 of T 0 . Because e is a light external edge of T and T 0 , it is also a light
external edge of W and W 0 . Note that L cc inherits e from the adjacency list
that represents W . By R3, LW does not include any light secondary
external edge of W , so e is a primary external edge of W . By symmetry, e is also
a primary external edge of W 0 ; thus, by R3, e is in the adjacency list LW
that represents W 0 . Note that L 0
cc must include all the edges of LW 0 as well as
other lists in Q j 1 that contain light external edges of T (see Lemma 6). L 0
cc
contains e, too.
By Lemma 7, we can remove all light internal edges by simply removing edges
whose mates are in the same list. Lemma 8 implies that if L cc contains a light
secondary external edge, the corresponding primary external edge also appears
Optimal Parallel MST Algorithm 17
in L cc and their mates exist in another list L 0
cc . This suggests a simple way to
identify and remove all the light secondary external edges as follows. Without loss
of generality, we assume that every edge in L cc can determine the identity of L cc
(any distinct label given to L cc ). If an edge e 2 L cc has a mate in another list,
say,
cc , e can announce the identity of L cc to its mate and vice versa. By sorting
the edges in L cc with respect to the identities received from their mates, multiple
light external edges connected to the same tree come together. Then we can easily
remove all the light secondary external edges.
Now we know that L cc contains all light primary external edges of T and any
other edges it contains must be heavy. Let us summarize the steps required to build
a unique adjacency list for representing T .
procedure M&C // M&C means Merge and Clean up //
(1) Edges in INPUT that are full with respect to Q j 1 are activated to
merge the lists in Q j 1 . Let Q be the set of merged adjacency lists.
(2) For each merged adjacency list L 2 Q,
(a) if L contains an edge in Half-INPUT, remove L from Q;
(b) detect and remove internal and secondary external edges from L
according to Lemmas 7 and 8.
5.2 Trees with at least 2 i vertices
Consider a tree T 0 2 F j containing 2 i or more vertices. Let L 1 ; ; L ' be the
merged lists, each representing a cluster of T 0 . Some of these lists may contain
more than (2 edges. Unlike the case in Section 5.1, the minimum
external edge e T 0 of T 0 is heavy, and we cannot guarantee that there is a merged
list containing e T 0 and representing the core clusters of T 0 . Nevertheless, Thread i
can ignore such a tree T 0 , and we may remove all the merged lists. In Lemma 9,
we show that those lists in fL 1 ; L ' g that represent the non-core clusters of T 0
can be removed easily. If there is indeed a merged list L cc representing the core
cluster, Thread i may not remove L cc . Since T 0 contains at least 2 i vertices and
has no light primary external edge, we have nothing to enforce on L cc regarding
the light primary external edges. The only concern for L cc is the requirements for
the threshold, which will be handled in Section 5.3.
As T 0 contains at least 2 i vertices, any merged list L nc that represents a non-core
cluster of T 0 may not satisfy the properties stated in Lemma 5(ii). We need other
ways to detect such an L nc . First, we can detect the length of L nc . If L nc contains
more than (2 edges, we can remove L nc immediately. Next, if L nc
contains less than (2 edges, we make use of the following lemma to
identify it. Denote h(L) as the threshold associated with a list L 2
merged into L nc g.
Lemma 9. Any list L that represents a non-core cluster
of T 0 satises at least one of the following conditions.
(1) L nc contains an edge in Half-INPUT.
(2) For every edge hu; vi in L nc , either hv; ui is also in L nc or w(u; v)
Han, and T.W. Lam
Proof. Assume that L nc does not contain any edge in Half-INPUT, and L nc
contains an edge hu; vi but does not contain hv; ui. Below we show that w(u; v)
tmp-h(L nc ). The edge (u; v) can be an internal or external edge of Z.
Case 1. (u; v) is an internal edge of Z. L nc inherits hu; vi from a list L 2
be the tree represented by L. By R1, hu; vi is an external edge of W .
Thus, Z includes another tree W 0 2 F j 1 with hv; ui as an external edge, and L nc
also inherits the edges in the list LW that represents W 0 . Note that hv; ui
does not appear in LW 0 . By R5, h(LW 0
have
Case 2. (u; v) is an external edge of Z. It is obvious that w(u; v) w(e Z ) where
e Z is the minimum external edge of Z. We further show that w(e Z ) tmp-h(L).
be the tree in Z and with e Z as an external edge. Let LW 2
the adjacency list representing W . As mentioned before, e Z is in INPUT and is not
a full edge. If e Z is a lost edge, then L nc does not contain e Z . If e Z is a half edge, L nc
again does not contain e Z because L nc does not contain any edge in Half-INPUT.
In conclusion, e Z does not appear in L nc and hence cannot appear in LW . Since
e Z is a primary external edge of W , we know that, by R5, h(LW ) w(e Z ). By
denition,
Using Lemma 9, we can extend Procedure M&C to remove every merged list L nc
that represents a non-core cluster of any tree T in F j (see Procedure Ext M&C).
Precisely, if T has fewer than 2 i vertices, L nc is removed in Step 2(a); otherwise,
L nc is removed in Step 1(b) or Steps 2(a)-(c).
procedure Ext M&C
(1) (a) Edges in INPUT that are full with respect to Q j 1 are activated
to merge the lists in Q j 1 . Let Q be the set of merged adjacency
lists.
(b) For each list L 2 Q, if L contains more than (2
remove L from Q.
(2) For each merged adjacency list L 2 Q,
(a) if L contains an edge in Half-INPUT, remove L from Q;
(b) detect and remove internal and secondary external edges from L;
(c) if, for all edges hu; vi in L, w(u; v) tmp-h(L), remove L from Q.
After Procedure Ext M&C is executed, all the remaining merged lists are representing
the core clusters of trees in F j . Moreover, for tree T 2 F j with fewer than 2 i
vertices, Procedure Ext M&C, like Procedure M&C, always retains the merged list L cc
that represents the core cluster of T . L cc is not removed by Step 1(b) because L cc
cannot contain more than (2 edges. In addition, L cc contains all the
light primary external edges of T . In Lemma 10 below, we show that tmp-h(L cc )
is the weight of a heavy internal or external edge of T . Thus, L cc contains edges
with weight less than tmp-h(L cc ) and cannot be removed by Step 2(c).
Lemma 10. tmp-h(L cc ) is equal to the weight of a heavy internal or external
edge of T .
Proof. Among all the lists in Q j 1 that are merged into L cc , let L be the one
Optimal Parallel MST Algorithm 19
with the smallest threshold. That is, tmp-h(L cc denote the tree
in F j 1 represented by L. By R4, h(L) is equal to the weight of a heavy internal
or external edge e of W . Thus e is also a heavy internal or external edge of T .
5.3 Updating Threshold and Retaining only External Edges
After Procedure Ext M&C is executed, every remaining merged list is representing
the core-cluster of a tree in F j . Let L cc be such a list representing a tree T 2 F j . If
contains less than 2 i vertices, all light primary external edges of T appear among
the smallest edges in L cc , and all other edges in L cc are heavy edges. If
T has at least 2 i vertices, no external or internal edges of T are light and all edges
in L cc must be heavy. By the denition of Ext M&C, the number of edges in L cc is
at most (2 bi=2 may exceed the length requirement for Phase j (i.e.,
1). To ensure that L cc satises and R3, we retain only the 2 bi=2 j c 1
smallest edges on L cc . The threshold of L cc , denoted by h(L cc ), is updated to be
the minimum of tmp-h(L cc ) and the weight of the smallest edge truncated.
contains fewer than 2 i vertices or not, every edge truncated
from L cc is heavy. Together with Lemma 10, we can conclude that h(L cc ) is equal
to the weight of a heavy internal or external edge of T , satisfying R4.
Next, we give an observation on L cc and in Lemma 12, we show that R5 is
satised. Denote Z 0 as the core-cluster of T represented by L cc .
Lemma 11. Let e be an external edge of Z 0 . If e is a tree edge of T , then e is
not included in L cc and h(L cc ) w(e).
Proof. Suppose e is included in L cc . Note that e cannot be a full edge with
respect to Q because a full edge and its mate should have been removed in
Step 2(b) in Procedure Ext M&C. Then e is in Half-INPUT and Procedure Ext M&C
should have removed L cc at Step 2(a). This contradicts that L cc is one of the
remaining lists after Procedure Ext M&C is executed. Therefore, e is not included
in L cc .
Next, we show that h(L cc ) w(e). Let W be the subtree of Z 0 such that e
is an external edge of W . Since e is a tree edge, e is a primary external edge of
W . As e is not included in L cc and L cc inherits the adjacency list LW 2
representing W , e is also not included in LW . By R5, h(LW ) w(e). Recall that
Lemma 12. Let e be an external edge of T currently not found in L cc . If (i) e
is primary, or (ii) e is secondary and the mate of e is still included in some other
list L 0 in Q j , then h(L cc ) w(e).
Proof. Let vi be an external edge of T currently not found in L cc ,
satisfying the conditions stated in Lemma 12. Let W be the tree in F j 1 such that
W is a subtree of T and e is an external edge of W . With respect to W , either e
is primary, or e is secondary and the mate of e is included in another list in Q
We consider whether W is included in the core cluster Z 0 of T .
Case 1. W is a subtree of Z 0 . By denition of Z 0 , W must be represented by a list
At the end of Phase j 1, e may or may not appear in LW . If e does not
20 K.W. Chong, Y. Han, and T.W. Lam
a
Z
Fig. 7. Z 0 and Z is connected by a path P in T , and P contains an edge (a; b), which is an
external edge of both Z 0 and W 0 .
appear in LW , then, by R5, h(LW
we have h(L cc ) w(e). Suppose that e is in LW . Then e is passed to L cc when
Procedure Ext M&C starts o. Yet e is currently not in L cc . If e is removed from
L cc within Procedure Ext M&C, this has to take place at Step 2(b), and e is either
an internal edge of T or a secondary external edge removed together with its mate.
This contradicts the assumption about e. Thus, e is removed after Procedure
Ext M&C, i.e., due to truncation. In this case, the way h(L cc ) is updated guarantees
Case 2. W is a subtree of a non-core cluster Z. We show that Z 0 has an external
w(e). Observe that T
contains a path connecting Z 0 and Z, and this path must involve an external edge
ha; bi of Z 0 . By Lemma 11, h(L cc ) w(a; b).
Next we show that w(a; b) < w(e). Let W 0 be the tree in F j 1 such that W 0 is a
subtree of Z 0 and ha; bi is an external edge of W 0 . See Figure 7. Suppose we remove
the edge (a; b) from T , T is partitioned into two subtrees T a and T b , containing the
vertices a and b, respectively. Note that T b contains W , and e is an external edge
of T b . On the other hand, Z 0 is included in T a , and e T is not an external edge of T b .
By Lemma 3, the minimum external edge of T b is hb; ai. Therefore, w(a; b) < w(e).
As a result, h(L cc ) w(a; b) < w(e) and the lemma follows.
Removing remaining internal edges: Note that L cc may still contain some
internal edges of T . This is because Procedure Ext M&C only remove those internal
edges whose mates also appear in L cc . The following lemma shows that every remaining
internal edges in L cc has a weight greater than h(L cc ). Thus by discarding
those edges in L cc with weight greater than h(L cc ), we ensure that only external
edges of T are retained. Of course, no light primary external edge can be removed
by this step.
Lemma 13. For any internal edge e of T that is currently included in L cc ,
Proof. We consider whether is an internal or external edge of Z 0 .
Optimal Parallel MST Algorithm 21
a
Fig. 8. The pair of vertices u and v of e is connected by a path P in T . Every edge on P has
weight smaller than w(e).
Case 1. e is an internal edge of Z 0 . Suppose L cc inherits e from the list
represents a tree W 2 F j 1 and W is a subtree of Z 0 . By
R1, hu; vi is an external edge of W . Then Z 0 includes another tree W 0 2 F j 1 which
contains the vertex v. Denote LW 0 as the list in Q j 1 that represents W 0 . The edge
hv; ui is an external edge of W 0 . But hv; ui does not appear in LW 0 (otherwise,
would have also inherited hv; ui from LW 0 and Procedure Ext M&C should have
removed both hu; vi and hv; ui from L cc at Step 2(b)). By R5, h(LW 0 ) w(u; v).
As
Case 2. e is an external edge of Z 0 . By Lemma 11, e is not a tree edge of T . Let
P be a path on T connecting u and v. See Figure 8. Since T is a subtree of T
G , every
edge on P has weight smaller than w(u; v). On P , we can nd an external edge
ha; bi of Z 0 . By Lemma 11 again, h(L cc ) w(a; b) and hence h(L cc ) w(u; v).
5.4 The complete algorithm
The discussion of Thread i in the previous three sections is summarized in the
following procedure. The time and processor requirement will be analyzed in the
next section.
Thread i
Input:. G; B k , where 1 k i 1, is available at the end of the kth superstep
Construct Q 0 from G; a 0 0
3 For to blog ic do // Phase j
// denote INPUT as B[a
(1) (a) Edges in INPUT that are full with respect to Q lists in
be the set of merged adjacency lists.
(b) For each list L 2 Q, if L contains at most (2 bi=2
is a part of Lg; otherwise,
remove L from Q.
(2) For each list L 2 Q, // Remove unwanted edges and lists
22 K.W. Chong, Y. Han, and T.W. Lam
(a) if L contains an edge in Half-INPUT, remove L from Q;
(b) detect and remove internal and secondary external edges from L;
(c) if, for all edges hu; vi in L, w(u; v) tmp-h(L), remove L from Q.
(3) // Truncate each list if necessary and remove remaining internal edges
(a) For each list L 2 Q, if L contains more than 2 bi=2 j c 1 edges, retain the
smallest ones and update h(L) to the minimum of tmp-h(L)
and w(e is the smallest edge just removed from L.
(b) For each edge hu; vi 2 L, if w(u; v) h(L), remove hu; vi from L.
appears in some list L in Q blog ic ; w(u; v) < h(L)
g.
6. TIME AND PROCESSOR COMPLEXITY
First, we show that the new MST algorithm runs in O(log n) time using (n+m) log n
CREW PRAM processors. Then we illustrate how to modify the algorithm to run
on the EREW PRAM and reduce the processor bound to linear.
Before the threads start to run concurrently, they need an initialization step.
First, each adjacency list of G is sorted in ascending order with respect to the
edge weights. This set of sorted adjacency lists is replicated blog nc times and
each copy is moved to the \local memory" of a thread, which is part of the global
shared memory dedicated to the processors performing the \local" computation of
a thread. The replication takes O(log n) time using a linear number of processors.
Then each thread constructs its own Q 0 in O(1) time. Afterwards, the threads run
concurrently.
As mentioned in Section 3, the computation of a thread is scheduled to run in
a number of phases. Each phase starts and ends at predetermined supersteps. We
need to show that the computation of each phase can be completed within the
allocated time interval. In particular, Phase j of Thread i is scheduled to start at
the (a j +1)th superstep and end at the a j+1 th superstep using
supersteps. The following lemma shows that Phase j of Thread i can
be implemented in c(i=2 j 1 ) time, where c is a constant. By setting the length
of a superstep to a constant c 0 such that c(i=2 j 1 )=c 0 1
Phase j can
complete its computation in at most 1
supersteps. It can be verify that
Lemma 14. Phase j of Thread i can be implemented in O(i=2 using
processors.
Proof. Consider the computation of Phase j of Thread i. Before the merging
of the adjacency list starts, Thread i reads in B[a which may also
be read by many other threads, into the local memory of Thread i. The merging
of adjacency lists in Step 2(a) takes O(1) time. In Step 2(b), testing the
length of a list ( (2 bi=2 can be done by performing pointer jumping in
time. After that, all adjacency lists left have
length at most (2 bi=2 . In the subsequent steps, we make use of standard
parallel algorithmic techniques including list ranking, sorting, and pointer jumping
Optimal Parallel MST Algorithm 23
to process each remaining list. The time used by these techniques is the logarithmic
order of the length of each list (see e.g., JaJa [1992]). Therefore, all the steps of
Phase j can be implemented in c(i=2 using a linear
number of processors.
Corollary 1. The minimum spanning tree of a weighted undirected graph can
be found in O(log n) time using (n +m) log n CREW PRAM processors.
Proof. By Lemma 14, the computation of Phase j of Thread i satises the
predetermined schedule. Therefore, B i can be found at the end of the ith superstep
and B[1; blog nc] are all ready at the end of the blog ncth superstep. That means the
whole algorithm runs in O(log n) time. As Thread i uses at most n+m processors,
(n +m) log n processors su-ce for the whole algorithm.
6.1 Adaptation to EREW PRAM
We illustrate how to modify the algorithm to run on the EREW PRAM model.
Consider Phase j of Thread i. As discussed in the proof of Lemma 1, concurrent
read is used only in accessing the edges of B[a which may also be read
by many other threads at the same time. If B[a have already resided in
the local memory of Thread i, all steps can be implemented on the EREW PRAM.
To avoid using concurrent read, we require each thread to copy its output to
each subsequent thread. By modifying the schedule, each thread can perform this
copying process in a sequential manner. Details are as follows: As shown in the
proof of Lemma 1, Phase j of Thread i can be implemented in c(i=2
where c is a constant. The length of a superstep was set to be c 0 so that Phase j
of Thread i can be completed within
supersteps. Now the length of
each superstep is doubled (i.e., each superstep takes 2c 0 time instead of c 0 ). Then
the computation of Phase j can be deferred to the last half supersteps (i.e., the
last 1
supersteps). In the rst half supersteps of Phase j (i.e., from the
1)th to (a j+1
)th supersteps), no computation is performed.
Thread i is waiting for other threads to store the outputs B a j 1 into the
local memory.
To complete the schedule, we need to show how each Thread k, where k < i,
perform the copying in time. Recall that Thread k completes its computation at
the kth superstep. In the (k to
four threads, namely Thread Each replication
takes using a linear number of processors.
Lemma 15. Consider any Thread i. At the end of the (a
)th super-
step, there is a copy of B[a residing in the local memory of Thread i.
Proof. For k < i, Thread i receives B k at the (k
For Phase j of Thread i, B a j is the last set of edges to be received and it arrives
at the (a
)th superstep, just before the start of
the second half of Phase j.
Han, and T.W. Lam
6.2 Linear processors
In this section we further adapt our MST algorithm to run on a linear number of
processors. We rst show how to reduce the processor requirement to m+ n log n.
Then, for a dense graph with at least n log n edges, the processor requirement is
dominated by m. Finally, we give a simple extra step to handle sparse graphs.
To reduce the processor requirement to m log n, we would like to introduce
some preprocessing to each thread so that each thread can work on only n (instead of
m) edges to compute the required output using n processors. Yet the preprocessing
of each thread still needs to handle m edges and requires m processors. To sidestep
this di-culty, we attempt to share the preprocessing among the threads. Precisely,
the computation is divided into dlog log ne stages. In Stage k, where 1
ne, we perform one single preprocessing, which then allows up to 2 k 1
threads to compute concurrently the edge sets
using processors. The preprocessing itself runs in O(2 k ) supersteps using
processors. Thus, each stage makes use of at most m
and the total number of supersteps over all stages is still O(log n).
Lemma 16. The minimum spanning tree of a weighted undirected graph can be
found in O(log n) time using m+ n log n processors on the EREW PRAM.
Proof. The linear-processor algorithm runs in dloglog ne+1 stages. In Stage 0,
is found by Thread 1. For 1 k dloglog ne, Stage k is given
and is to compute B[2 Specically, let
Thread 2x for the preprocessing and Threads 1; 2; ; x for the actual computation
of . Both parts require O(x) supersteps.
The preprocessing is to prepare the initial adjacency lists for each thread. Let F
be the set of trees induced by B[1; x], which is, by denition, a 2 x -forest of G. We
invoke Thread 2x to execute Phase 1 only, computing a set Q 1 of adjacency lists.
By denition, each list in Q 1 has length at most 2 2x=2 representing a
tree T in F and containing all primary external edges of T with base less than 2 2x .
contains su-cient edges for nding not only B 2x but also As
F contains at most n=2 x trees, Q 1 contains a total of at most n edges.
Each list in Q 1 is sorted with respect to the edge weight using O(x) supersteps
and n processors. Then Q 1 is copied into the local memory of Threads 1 to x one by
one in x supersteps using n processors. For 1 i x, Thread i replaces its initial
set of adjacency lists Q 0 with a new set Q (i)
0 , which is constructed by truncating
each list in Q 1 to include the smallest 2 i 1 edges.
Threads 1 to x are now ready to run concurrently, computing B
respectively. For all 1 i x, Thread i uses its own Q (i)
0 as the initial set
of adjacency lists and follows its original phase-by-phase schedule to execute the
algorithm stated in Section 5.4. Note that the algorithm of a thread is more versatile
than was stated. When every Thread i starts with Q (i)
0 as input, Thread i will
compute the edge set B x+i (instead of
can be found by Threads 1 to x in x supersteps. Note that Q (i)
0 has at most n
edges, the processors requirement of each thread is n only.
In short, Stage k takes O(x) supersteps using m+x n m+n log n processors.
Optimal Parallel MST Algorithm 25
Recall that . The dlog log ne stages altogether run in O(log n) time using
processors.
If the input graph is sparse, i.e., m < n log n, we rst construct a contracted
graph G c of G as follows. We execute Threads 1 to loglog n concurrently to nd
log n], which induces a (log n)-forest B of G. Then, by contracting each tree
in the forest, we obtain a contracted graph G c with at most n= log n vertices.
The contraction takes O(log n) time using m processors. By Lemma 16, the
minimum spanning tree of G c , denoted T
Gc , can be computed in O(log n) time
using m+ (n= log n) log processors. Note that T
Gc and B include exactly
all the edges in T
G . We conclude with the following theorem.
Theorem 1. The minimum spanning tree of an undirected graph can be found
in O(log n) time using a linear number of processors on the EREW PRAM.
--R
New Connectivity and MSF Algorithms for Shu
A Faster Deterministic Algorithm for Minimum Spanning Trees.
A Minimum Spanning Tree Algorithm with Inverse-Ackermann Type Complexity
Finding Minimum Spanning Trees on the EREW PRAM.
Finding Connected Components in O(log n loglog n) time on the EREW PRAM.
Parallel Merge Sort.
Finding minimum spanning forests in logarithmic time and linear work using random sampling.
Approximate and Exact Parallel Scheduling with Applications to List
Upper and lower time bounds for parallel random access machines without simultaneous writes.
Fibonacci heaps and their used in improved network optimization algorithms.
An Optimal Randomized Parallel Algorithm for Finding Connected Components in a Graph.
Can a Shared-Memory Model Serve as a Bridging Model for Parallel Computation? <Proceedings>In Proceedings of the 9th ACM Symposium on Parallel Algorithms and Architectures</Proceedings>
A fast and e-cient parallel connected component algorithm
Computing Connected Components on Parallel Computers.
Connected Components in O(lg 3
A Parallel Algorithm for Computing Minimum Spanning Trees.
Random sampling in Graph Optimization Problems.
A randomized linear-time algorithm to nd minimum spanning trees
Fast Connected Components Algorithms for the EREW PRAM.
Parallel Algorithms for Shared-Memory Ma- chines
Parallel Ear Decomposition Search
Undirected Connectivity in O(log 1:5 n) Space.
Finding Minimum Spanning Trees in O(n
A Randomized Time-Work Optimal Parallel Algorithm for Finding a Minimum Spanning Forest
An Optimal Minimum Spanning Tree Algo- rithm
A Randomized Linear Work EREW PRAM Algorithm to Find a Minimum Spanning Forest.
Structures and Network Algorithms.
A Bridging Model For Parallel Computation.
--TR
Data structures and network algorithms
Upper and lower time bounds for parallel random access machines without simultaneous writes
Efficient algorithms for finding minimum spanning trees in undirected and directed graphs
Fibonacci heaps and their uses in improved network optimization algorithms
New connectivity and MSF algorithms for shuffle-exchange network and PRAM
Parallel ear decomposition search (EDS) and <italic>st</>-numbering in graphs
Parallel merge sort
A bridging model for parallel computation
Parallel algorithms for shared-memory machines
Connected components in <italic>O</italic>(lg<supscrpt>3/2</supscrpt>|<italic>V</italic>|) parallel time for the CREW PRAM (extended abstract)
An introduction to parallel algorithms
A parallel algorithm for computing minimum spanning trees
Fast connected components algorithms for the EREW PRAM
A randomized linear-time algorithm to find minimum spanning trees
An efficient and fast parallel-connected component algorithm
Random sampling in graph optimization problems
Finding minimum spanning forests in logarithmic time and linear work using random sampling
Can shared-memory model serve as a bridging model for parallel computation?
<i>SL</i> MYAMPERSANDsube;<i>L</i><sup>4/3</sup>
Finding connected components in <italic>O</italic>(log <italic>n</italic> loglog <italic>n</italic>) time on the EREW PRAM
Optimal randomized EREW PRAM algorithms for finding spanning forests and for other basic graph connectivity problems
A minimum spanning tree algorithm with inverse-Ackermann type complexity
Efficient parallel algorithms for some graph problems
Computing connected components on parallel computers
An Optimal Minimum Spanning Tree Algorithm
A Randomized Linear Work EREW PRAM Algorithm to Find a Minimum Spanning Forest
A Randomized Time-Work Optimal Parallel Algorithm for Finding a Minimum Spanning Forest
A Faster Deterministic Algorithm for Minimum Spanning Trees
Finding Minimum Spanning Trees in O(m alpha(m,n)) Time
--CTR
Tsan-sheng Hsu, Simpler and faster biconnectivity augmentation, Journal of Algorithms, v.45 n.1, p.55-71, October 2002
Stavros D. Nikolopoulos , Leonidas Palios, Parallel algorithms for P
Toshihiro Fujito , Takashi Doi, A 2-approximation NC algorithm for connected vertex cover and tree cover, Information Processing Letters, v.90 n.2, p.59-63,
Vladimir Trifonov, An O(log n log log n) space algorithm for undirected st-connectivity, Proceedings of the thirty-seventh annual ACM symposium on Theory of computing, May 22-24, 2005, Baltimore, MD, USA
David A. Bader , Guojing Cong, Fast shared-memory algorithms for computing the minimum spanning forest of sparse graphs, Journal of Parallel and Distributed Computing, v.66 n.11, p.1366-1378, November 2006
Aaron Windsor, An NC algorithm for finding a maximal acyclic set in a graph, Proceedings of the sixteenth annual ACM symposium on Parallelism in algorithms and architectures, June 27-30, 2004, Barcelona, Spain
Seth Pettie , Vijaya Ramachandran, An optimal minimum spanning tree algorithm, Journal of the ACM (JACM), v.49 n.1, p.16-34, January 2002 | minimum spanning trees;parallel algorithms;EREW PRAM;connected components |
375862 | An Efficient Algorithm for Aggregating PEPA Models. | AbstractPerformance Evaluation Process Algebra (PEPA) is a formal language for performance modeling based on process algebra. It has previously been shown that, by using the process algebra apparatus, compact performance models can be derived which retain the essential behavioral characteristics of the modeled system. However, no efficient algorithm for this derivation was given. In this paper, we present an efficient algorithm which recognizes and takes advantage of symmetries within the model and avoids unnecessary computation. The algorithm is illustrated by a multiprocessor example. | Introduction
In recent years several Markovian process algebras (MPAs) have been presented
in the literature. These include PEPA [1], MTIPP [2], and EMPA [3].
As with classical process algebras, these formalisms allow models of systems
to be constructed which are amenable to functional or behavioural analysis
by a variety of techniques. Additionally they allow timing information to
be captured in those models and so facilitate performance analysis via the
solution of a Continuous Time Markov Chain (CTMC).
Process algebras have several attractive features: a facility for high-level
definition, compositional structure and the existence of formally defined
equivalence relations which can be used to compare models. In the Markovian
context theoretical results have shown that it is possible to exploit
these equivalence relations at the level of the model description to generate
an aggregated CTMC in a compositional way [4]. This is of great practical
importance because, like all state-based modelling techniques, MPA models
suffer from the state space explosion problem. Although prototype tools
have been developed for model exploration [5, 6, 7], little work has been done
to exploit to the full the potential to use equivalence relations to achieve effective
aggregation and thus to put the theoretic results to practical use. In
this paper we describe an algorithm to carry out efficient aggregation and
its implementation in the PEPA Workbench.
Aggregation is a widely used and well-understood technique for reducing
the size of CTMC used in performance analysis. The state space of
the CTMC is partitioned into a number of classes, each of which is treated
as a single state in a new derived stochastic process. If the partition can
be shown to have a condition known as lumpability [8], this new stochastic
process will again be a CTMC and amenable to numerical solution of a
steady state probability distribution via linear algebra. In the MPA context
the partitioning is carried out using a formally defined equivalence relation
which establishes behavioural or observational equivalence between states
within a model. The equivalence relation which is generally discussed in
relation to aggregation is called strong equivalence (for PEPA), Markovian
bisimulation (for MTIPP) or extended Markovian bisimulation equivalence
(for EMPA). However there are some problems with applying this equivalence
relation/aggregation at the syntax level in a compositional way. These
are discussed in more detail in Section 5. In this paper we use a finer equivalence
relation, called isomorphism, which although it may result in coarser
aggregations has the advantage of being readily amenable to automatic generation
of equivalence classes at the syntax level. Thus the construction
of the complete state space can be avoided and the aggregated CTMC is
constructed directly.
The rest of the paper is structured as follows. In Section 2 we introduce
the PEPA language, its operational semantics, and aggregation via
isomorphism. The algorithm for the computation of a reduced state space
is discussed in Section 3, an example is presented in Section 4. Some cases
in which the algorithm cannot achieve the optimal theoretical partitioning
are discussed in Section 5. Section 6 presents some related approaches and,
finally, Section 7 concludes the paper presenting some possible future investigation
Performance Evaluation Process Algebra (PEPA) is an algebraic description
technique based on a classical process algebra and enhanced with stochastic
timing information. This extension results in models which may be used
to calculate performance measures as well as deduce functional properties
of the system. In this section we briefly introduce PEPA; more detailed
information can be found in [1].
Process algebras are mathematical theories which model concurrent systems
by their algebra and provide apparatus for reasoning about the structure
and behaviour of the model. In classical process algebras, e.g. Calculus
of Communicating Systems (CCS [9]), time is abstracted away-actions are
assumed to be instantaneous and only relative ordering is represented-
and choices are generally nondeterministic. If an exponentially distributed
random variable is used to specify the duration of each action the process
algebra may be used to represent a Markov process. This approach is taken
in PEPA and several of the other Markovian process algebras [2, 3].
The basic elements of PEPA are components and activities, corresponding
to states and transitions in the underlying CTMC. Each activity is
represented by two pieces of information: the label, or action type, which
identifies it, and the activity rate which is the parameter of the negative
exponential distribution determining its duration. Thus each action is represented
as a pair (ff; r). We assume that the set of possible action types, A,
includes a distinguished type, - . This type denotes internal, or "unknown"
activities and provides an important abstraction mechanism.
The process algebra notation for representing systems is wholly based
on the use of a formal language. The PEPA language provides a small set
of combinators. These allow language terms to be constructed defining the
behaviour of components, via the activities they undertake and the interactions
between them. The syntax may be formally introduced by means of
the following grammar:
where S denotes a sequential component and P denotes a model component
which executes in parallel. C stands for a constant which denotes either
a sequential or a model component, as defined by a defining equation. C S
stands for constants which denote sequential components. The component
combinators, together with their names and interpretations, are presented
informally below.
r):S The basic mechanism for describing the behaviour of
a system is to give a component a designated first action using the prefix
combinator, ". For example, the component (ff; r):S carries out activity
(ff; r), which has action type ff and an exponentially distributed duration
with parameter r, and it subsequently behaves as S. The set of all action
types is denoted by A. Sequences of actions can be combined to build up a
life cycle for a component. For example:
Comp
S The life cycle of a sequential component may be more
complex than any behaviour which can be expressed using the prefix combinator
alone. The choice combinator captures the possibility of competition
or selection between different possible activities. The component
represents a system which may behave either as S 1 or as S 2 . The activities of
both S 1 and S 2 are enabled. The first activity to complete distinguishes one
of them: the other is discarded. The system will then behave as the derivative
resulting from the evolution of the chosen component. For example,
the faulty component considered above may also be capable of completing
a task satisfactorily:
Comp
Constant, C As we have already seen, it is convenient to be able to assign
names to patterns of behaviour associated with components. Constants
provide a mechanism for doing this. They are components whose meaning
is given by a defining equation: e.g. C def
which gives the constant C the
behaviour of the component P .
Cooperation,
Most systems are comprised of several components
which interact. In PEPA direct interaction, or cooperation, between
components is represented by the combinator " \Delta
". The set L, of visible
action types (L ' A n f-g), is significant because it determines those
activities on which the components are forced to synchronise. Thus the co-operation
combinator is in fact an indexed family of combinators, one for
each possible cooperation set L. When cooperation is not imposed, namely
for action types not in L, the components proceed independently and concurrently
with their enabled activities. However if a component enables an
activity whose action type is in the cooperation set it will not be able to
proceed with that activity until the other component also enables an activity
of that type. The two components then proceed together to complete the
shared activity. The rate of the shared activity may be altered to reflect the
work carried out by both components to complete the activity.
For example, the faulty component considered above may need to co-operate
with a resource in order to complete its task. This cooperation is
represented as follows:
System
ftaskg
Res
If the component also needs to cooperate with a repairman in order to be
repaired this could be written as:
System \Delta
Repman
or, equivalently
ftaskg
Repman
In some cases, when an activity is known to be carried out in cooperation
with another component, a component may be passive with respect to that
activity, denoted (ff; ?). This means that the rate of the activity is left
unspecified and is determined upon cooperation, by the rate of the activity
in the other component. All passive actions must be synchronised in the
final model.
If the cooperation set is empty, the two components proceed indepen-
dently, with no shared activities. We use the compact notation, P k Q,
to represent this case. Thus, if two components compete for access to the
resource and the repairman we would represent the system as
ftaskg
Repman
Hiding, P=L The possibility to abstract away some aspects of a compo-
nent's behaviour is provided by the hiding operator ``/''. Here, the set L
of visible action types identifies those activities which are to be considered
internal or private to the component. These activities are not visible to an
external observer, nor are they accessible to other components for coopera-
tion. For example, in the system introduced above we may wish to ensure
that these components have exclusive access to the resource in order to complete
their task. Thus we hide the action type task, ensuring that even when
the system is embedded in an environment no other component can access
the task activity of the resource:
System
ftaskg
Res)=ftaskg
Once an activity is hidden it only appears as the unknown type - ; the rate
of the activity, however, remains unaffected.
The precedence of the combinators provides a default interpretation of
any expression. Hiding has highest precedence with prefix next, followed by
cooperation. Choice has the lowest precedence. Brackets may be used to
force an alternative parsing or simply to clarify meaning.
2.1 Operational semantics and the underlying CTMC
The model components capture the structure of the system in terms of its
static components. The dynamic behaviour of the system is represented
by the evolution of these components, either individually or in cooperation.
The form of this evolution is governed by a set of formal rules which give an
operational semantics of PEPA terms. The semantic rules, in the structured
operational style of Plotkin, are shown in Figure 1; the interested reader is
referred to [1] for full details.
The rules are read as follows: if the transition(s) above the inference
line can be inferred, then we can infer the transition below the line. For one
example, the two rules for choice show that the choice operator is symmetric
and preserves the potential behaviours of its two operands. For another, the
cooperation operator has a special case where the two cooperands do not
synchronise on any activities. The notation for this case is E k F . In this
case the three rules would simplify to the two which are shown below.
F
These rules capture the intuitive understanding that two components which
do not synchronise on any activities cannot influence each other's computational
state. In the case of components which do synchronise the rate of the
resulting activity will reflect the capacity of each component to carry out
activities of that type. For a component E and action type ff, this is termed
the apparent rate of ff in E, denoted r ff (E). It is the sum of the rates of the
activities enabled in E. The exact mechanism used to determine the
rate of the shared activity will be explained shortly.
As in classical process algebra, the semantics of each term in PEPA is
given via a labelled transition system; in this case a labelled multi-transition
system-the multiplicities of arcs are significant. In the transition system
a state corresponds to each syntactic term of the language, or derivative,
Choice
F
Cooperation
F
F
F
F
r ff (E)
r ff
Hiding
Constant
E)
Figure
1: Operational semantics of PEPA
initial state
Figure
2: DG for the Multi-Component model (without hiding)
State Corresponding derivative
ftaskg
Repman
ftaskg
Repman
ftaskg
Repman
ftaskg
Repman
ftaskg
Repman
ftaskg
Repman
ftaskg
Repman
ftaskg
Repman
Table
1: States of the derivation graph of Figure 2
and an arc represents the activity which causes one derivative to evolve into
another. The complete set of reachable states is termed the derivative set of
a model and these form the nodes of the derivation graph (DG) formed by
applying the semantic rules exhaustively. For example, the derivation graph
for the system
ftaskg
Repman
is shown in Figure 2, assuming the following definitions:
Comp
Res
Repman
For simplicity, in the figure we have chosen to name the derivatives with
short names s the corresponding complete names are listed in
Table
1. Note that there is a pair of arcs in the derivation graph between
the initial state s 0 and its one-step derivative s 1 . These capture the fact
that there are two distinct derivations of the activity (task ; -) according to
whether the first or second component completes the task in cooperation
with the resource, even though the resulting derivative is the same in either
case.
The timing aspects of components' behaviour are not represented in the
states of the DG, but on each arc as the parameter of the negative exponential
distribution governing the duration of the corresponding activity. The
interpretation is as follows: when enabled an activity a = (ff; r) will delay
for a period sampled from the negative exponential distribution with parameter
r. If several activities are enabled concurrently, either in competition or
independently, we assume that a race condition exists between them. Thus
the activity whose delay before completion is the least will be the one to suc-
ceed. The evolution of the model will determine whether the other activities
have been aborted or simply interrupted by the state change. In either case
the memoryless property of the negative exponential distribution eliminates
the need to record the previous execution time.
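This race is easy to simulate directly; the sketch below (our illustration, with arbitrary action names and rates) samples one exponentially distributed delay per enabled activity and lets the smallest sample win.

    import random

    def race(enabled):
        # enabled: list of (action_type, rate) pairs currently enabled
        samples = [(random.expovariate(rate), action) for action, rate in enabled]
        delay, winner = min(samples)  # the activity with the least delay succeeds
        return winner, delay

    # e.g. two competing 'task' activities and an independent 'reset' activity
    print(race([("task", 2.0), ("task", 2.0), ("reset", 0.5)]))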
When two components carry out an activity in cooperation the rate of the
shared activity will reflect the working capacity of the slower component. We
assume that each component has a fixed capacity for performing an activity
type ff, which cannot be enhanced by working in cooperation (it still must
carry out its own work), unless the component is passive with respect to
that activity type. This capacity is the apparent rate. The apparent rate
of α in a cooperation P ⋈{α} Q will be the minimum of r_α(P) and r_α(Q).
Figure 3: CTMC underlying the Multi-Component model
The rate of any particular shared activity will be the apparent rate of the
shared activity weighted by the conditional probability of the contributing
activities in the cooperating components. The interested reader is referred
to [1] for more details.
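That mechanism can be summarised as follows (this restatement follows the description just given and the definition in [1]): if P enables (α, r_1) and Q enables (α, r_2), the shared activity in P ⋈{α} Q proceeds at the apparent rate min(r_α(P), r_α(Q)) weighted by the conditional probabilities of the two contributing activities,

\[ R \;=\; \frac{r_1}{r_\alpha(P)} \cdot \frac{r_2}{r_\alpha(Q)} \cdot \min\bigl(r_\alpha(P),\, r_\alpha(Q)\bigr). \]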
The DG is the basis of the underlying CTMC which is used to derive
performance measures from a PEPA model. The graph is systematically
reduced to a form where it can be treated as the state transition diagram of
the underlying CTMC. Each derivative is then a state in the CTMC. The
transition rate between two derivatives P and P 0 in the DG is the rate at
which the system changes from behaving as component P to behaving as P 0 .
It is denoted by q(P; P 0 ) and is the sum of the activity rates labelling arcs
connecting node P to node P 0 . For example, the state transition diagram
for the CTMC underlying the simple component model is shown in Figure 3.
Note the arc between states X_0 and X_1 labelled with twice the rate of the
task activity; these are the CTMC states corresponding to the derivatives s_0
and s_1 listed in Table 1.
In order for the CTMC to be ergodic its DG must be strongly connected.
Some necessary conditions for ergodicity, at the syntactic level of a PEPA
model, have been defined [1]. These syntactic conditions are imposed by the
restricted syntax introduced earlier.
2.2 Aggregation in PEPA via isomorphism
Equivalence relations, and notions of equivalence generally, play an important
role in process algebras, and defining useful equivalence relations is an
essential part of language development. For PEPA various equivalence relations
have been defined. These include isomorphism, which captures the
intuitive notion of equivalence between language terms based on isomorphic
derivation graphs, and strong equivalence, a more sophisticated notion of
equivalence based on bisimulation.
An equivalence relation defined over the state space of a model will induce
a partition on the state space. Aggregation is achieved by constructing
such a partition and forming the corresponding aggregated process. In the
aggregated process each partition of states in the original process forms one
state. If the original state space is {X_0, X_1, . . . , X_n} then the aggregated
state space is some {X_[0], X_[1], . . . , X_[N]}. In
general, when a CTMC is aggregated the resulting stochastic process will
not have the Markov property. However if the partition can be shown to
satisfy the so-called lumpability condition, the property is preserved and the
aggregation is said to be exact.
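For reference, the classical lumpability condition (a standard restatement, not specific to PEPA) requires that for every pair of partition blocks X_[k] and X_[l] with k ≠ l, and every pair of states x, y in X_[k], the aggregate rate into X_[l] is the same:

\[ \sum_{z \in X_{[l]}} q(x, z) \;=\; \sum_{z \in X_{[l]}} q(y, z). \]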
When the model considered is derived from a process algebra such as
PEPA it is possible to establish useful algebraic properties of the equivalence
relation used. The most important of these is congruence. An equivalence
relation is a congruence with respect to the operators of the language if
substituting an equivalent component within a model expression gives rise
to an equivalent model; e.g. if P is equivalent to P′, then P ⋈{L} Q is
equivalent to P′ ⋈{L} Q. When a congruence is used as the basis for aggregation in
a compositional model, the aggregation may be carried out component by
component, avoiding the construction of the complete state space because
the aggregated component will be equivalent to the original. Nevertheless
this approach is applied at the semantic level of the model and necessitates
the expansion and subsequent partitioning of relevant state spaces. More-
over, the reduced model produced in this way may not be as compact as
would be achieved by aggregating the complete model directly, making a
further application of the aggregation procedure necessary, in this case to
the model consisting of the aggregated components.
Both isomorphism and strong equivalence are congruence relations that
can be used as the basis for exact aggregation of PEPA models, based on
lumpability [4]. In either case the relation is used to partition the state
space (possibly compositionally), and so the underlying CTMC, and each
such equivalence class forms one state in the aggregated state space. In the
algorithm which we are presenting here we use the isomorphism relation:
the use of strong equivalence for the same purpose is discussed in Section 5.
The use of the isomorphism relation may seem surprising since the more
powerful bisimulation-style equivalence relations are one of the attractive
features of process algebras and are often cited as one of the benefits of
these formal languages. In contrast, isomorphism has received little attention
in the literature. In part, this is because in classical process algebra
the objective is to use an equivalence relation to determine when two agents
or system descriptions exhibit the same behaviour. In stochastic process
algebra greater emphasis is placed on using equivalence relations to partition
the derivation graph of the model in order to produce an aggregation
resulting in a smaller underlying Markov process. It has been shown that
PEPA's strong equivalence relation is a powerful tool for aggregation in this
style, always resulting in a lumpably equivalent Markov process [1]. How-
ever, we believe that in many instances isomorphism can also be useful for
this purpose. Since it is a more discriminating notion of equivalence it may
give a finer partition and thus less aggregation than strong equivalence. On
the other hand, as we will show, it may be detected at the syntactic level
of the system description without recourse to the semantic level which
is necessary to detect strong equivalence in general. Thus a reduced derivation
graph is generated without the need to construct the original derivation
graph.
In the following section we present the algorithm which exploits isomor-
phism, while in Section 6 we discuss its relation to other work on automated
aggregation.
3 Algorithm
The algorithm for computing the reduced derivation graph of a PEPA model
begins by pre-processing a model which has been supplied by the modeller.
The purpose of this pre-processing is to re-express the model in a more
convenient form for the production of the aggregated derivation graph. The
aggregated derivation graph has at its nodes, equivalence classes of PEPA
terms, rather than single syntactic expressions. During the pre-processing
step the PEPA syntax is systematically replaced and the model expression is
converted into a vector form, which is then minimised and converted into its
canonical form. Every distinct PEPA expression maps to a distinct vector
form, but equivalent (isomorphic) expressions will have the same canonical
representation.
Once this pre-processing is complete, the generation of the reduced
derivation graph can begin. This process alternates between generating all
of the one-step derivatives of the present state and compacting these in order
to group together derivatives which have the same canonical representation.
The algorithm proceeds on the assumption that the model supplied is in
reduced named norm form. In the named form representation each derivative
of each sequential component is explicitly named. In the norm form the
model is expressed as a single model equation which consists of cooperations
of sequential components governed by hiding sets. In the reduced form all
cooperation and hiding sets have been reduced by removing any redundant
elements. If the supplied model is not in this form the necessary restructuring
is carried out before the algorithm is applied. The functions to achieve
this carry out routine checks on the validity of the model supplied and the
modifications that they make are completely transparent to the modeller.
We now proceed to describe these steps in more detail.
3.1 Restructuring the model
During the application of the algorithm it is convenient to have intermediate
derivatives in the model bound to identifiers. We generate these identifiers
as we decompose the defining equations for each sequential component. For
example, if the defining equation is
Comp = (α, r).(β, s).Comp
we introduce a name for the intermediate derivative by replacing this single
equation by the following pair of equations.
Comp = (α, r).Comp′        Comp′ = (β, s).Comp
Once this has been done for each sequential component the model is said to
be in named form.
As described in Section 2, a PEPA model consists of a collection of
defining equations for sequential components and model components. One
of the model components is distinguished by being named as the initial
state of the model. The definition of this component may refer to other
model components, defined by other equations. We wish to eliminate uses
of model components from that definition, in order to reduce it to a normed
form in which the only identifiers used are those of sequential components.
We proceed by back-substituting the model component definitions into the
defining equation of the distinguished component. For example, the pair of
equations, one defining a model component involving Repman and one defining
System as a cooperation over {task} with Res, will become a single model
equation in which the subsidiary definition has been substituted in place of
its name, so that only sequential components such as Res and Repman appear
in the cooperation.
We continue this process until it converges to a definition of a normed model
equation which consists only of cooperations of sequential components governed
by hiding sets.
If the cooperation or hiding sets in a model definition contain unnecessary
or redundant elements the equivalence classes formed by the algorithm
may not be optimal. Thus we can, in some circumstances, improve the
subsequent performance of the algorithm by removing redundant elements
from these sets before the algorithm is applied. Furthermore, the presence
of redundant elements in cooperation or hiding sets can be regarded as a
potential error on the part of the modeller; consequently the modeller is
warned of any reduction.
We have previously presented efficient algorithms for computing the sets
of activities (Act) which are performed by PEPA model components [10]
and we use these to reduce to the minimum the size of cooperation and
hiding sets in the following way.
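A minimal sketch of such a reduction, assuming that the redundant elements are exactly those action types which the operands can never perform (the precise rule of [10] may differ in detail):

\[ P \bowtie_{L} Q \;\leadsto\; P \bowtie_{L \,\cap\, (Act(P) \cup Act(Q))} Q,
\qquad
P / L \;\leadsto\; P \,/\, (L \cap Act(P)). \]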
This reduction is applied systematically throughout the normed model equa-
tion. This operation is bounded in complexity by the size of the static representation
of the input PEPA model and thus there is no hidden cost here of
a traversal of the state space which is generated by the dynamic exploration
of the model.
3.2 Pre-processing: vector form, minimisation, canonicalisation
The vector form of a model expression represents the model in the most
suitable form for our aggregation algorithm because it is amenable to efficient
calculation of its canonical form. Here we present the vector form as a
vector of sequential components with decorated brackets denoting the scope
of these sets. We use subscripted brackets to delimit a cooperation set and
superscripted angle brackets to delimit hiding sets.
In the implementation these vectors are represented by linked lists which
provide for efficient manipulation when forming canonical representatives.
Re-ordering and re-arrangement of the representations of components in
the vector forms can then be achieved by safe, statically-checked pointer
manipulation, thereby avoiding the overhead of the repeated copying of data
values which would be incurred by the use of an array-based representation.
For a model expression, we define the vector
form inductively over the structure of the expression: let M, N be expressions
and C be a constant denoting a sequential component.
1. vf(M ⋈{L} N) = (vf(M), vf(N))_L
2. vf(M/L) = ⟨vf(M)⟩^L
3. vf(C) = C
In the following we write P to denote a vector of vector forms (P_1, . . . , P_n).
As with the normed model equation, the vector form representation contains
within a single expression all of the information about the static structure
of the model. It records the name of the current derivative of each of
the sequential components in addition to the scope of the cooperation and
hiding sets which are in force. The vector form alone is not sufficient to allow
us to compute the derivation graph of the model: the defining equations
for the sequential components are also needed.
Because it is generated directly from the full model equation the vector
form may include some redundancies. Hence, we include a preprocessing step
which is carried out to reduce the vector form generated by a straightforward
translation of the model equation to the vector form which will be used for
the remainder of the state space exploration. This step consists of generating
the minimal representation of the vector form, which is minimal with respect
to the number of brackets needed to record the scope of the cooperation and
hiding sets. As we will see, reducing the number of brackets in the vector
form may have significant impact on the aggregation which can be achieved.
Thus we can perform the following simplifications:
Elimination of redundant cooperation brackets: this arises when we
have a component such as Q ⋈{L} (Q ⋈{L} R). The vector form of this
component would be (Q, (Q, R)_L)_L; when contiguous brackets have
the same decoration in this way the inner one can be eliminated. In
this example this results in (Q, Q, R)_L.
Elimination of redundant hiding brackets: this would arise whenever
hiding brackets are contiguous regardless of their decoration. For ex-
ample, if we had a component (P/L)/K its vector form would be ⟨⟨P⟩^L⟩^K;
this would be reduced to ⟨P⟩^(K∪L).
From the minimal vector form we reduce the model representation to its
canonical form. We can choose an arbitrary ordering on component terms-
one suitable ordering is lexicographic ordering. We denote this ordering by ≤.
We denote the canonicalisation function by C. We insert a
component P into a vector P using I_P(P). The definitions of these functions
are shown in Definition 2. The definitions are not complex but we include
them here for completeness and in order to prevent there appearing to be
any hidden complexity in their definitions.
Definition 2 (Canonicalisation and insertion functions). We present
the definition of the canonicalisation function first and the definition of the
insertion function second. There are three cases in the definition of each of
these functions.
The three cases of C deal with a single sequential component, a hiding bracket
and a cooperation bracket: the contents of a bracket are canonicalised
recursively and re-inserted in order using I_P. The three cases of I_P deal
with insertion into the empty vector and insertion before or after the head of
a non-empty vector, according to the ordering ≤.
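One way to realise these functions, shown purely as an illustration (nested Python tuples stand in for the linked-list vector forms of the implementation, key=repr stands in for the lexicographic ordering on components, and the labels 'coop' and 'hide' are our own):

    def canon(vf):
        # vf is either a derivative name (a string) or a decorated bracket:
        #   ('coop', L, parts) for a cooperation over the set L, or
        #   ('hide', L, parts) for a hiding of the set L,
        # where parts is a sequence of vector forms
        if isinstance(vf, str):
            return vf
        kind, label, parts = vf
        # canonicalise each element and insert it in order, as I_P would
        ordered = tuple(sorted((canon(p) for p in parts), key=repr))
        return (kind, label, ordered)

    # two vector forms that differ only by a permutation of their components
    a = ('coop', 'task', ["Comp'", 'Comp', 'Repman'])
    b = ('coop', 'task', ['Comp', "Comp'", 'Repman'])
    assert canon(a) == canon(b)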
3.3 Generating the aggregated derivation graph
The previous pre-processing steps have been applied to the input PEPA
model to facilitate the subsequent application of the aggregation algorithm.
Before pre-processing the model was represented by a PEPA expression,
which represented an individual (initial) state and contained all the information
necessary for its dynamic evolution. After the pre-processing steps
have been performed, the expression is reduced to canonical, minimal vector
which retains only information about the state structure of the model
and represents an equivalence class of states. Thus this canonical vector form
is a reduced representation in two senses. Firstly, the information about the
dynamic behaviour, cooperation sets and hiding sets, which is common to
all states of the model, is factored out and stored separately. Secondly, each
canonical vector form may in fact represent a number of equivalent model
states which would have distinct vector forms.
Generating the reduced derivation graph now proceeds via the following
two steps which are carried out alternately until the state space has been
fully explored.
Derivation: Given the vector form the objective is to find all enabled activities
and record them in a list, paired with the vector form of the
corresponding derivative. This is done by recursing over the static
structure of the current derivative. At the lowest level the sequential
components are represented simply as a derivative name. At this point
the defining equations are used to find the activity, or set of activities,
which are enabled by the derivative. We can identify three cases:
• Individual activities which are not within the scope of a hiding
operator are recorded directly with the resulting derivative.
• Individual activities which are within the scope of a hiding operator
are recorded as τ actions with the appropriate rate together
with the resulting derivative.
• Activities which are within the scope of a cooperation set are
compared with the enabled activities of the other components
within the cooperation. If there is no matching activity the individual
activity is discarded; otherwise, as above, the activity is
recorded together with the resulting vector form.
Reduction: Carrying out the derivation may have given rise to vector forms
which are not canonical. Moreover several of the (activity, vector form)
pairs may turn out to be identical once the vector form is put into
canonical form. In this case the multiplicity is recorded and only one
copy is kept.
These two steps have to be repeated until there are no elements left in the
set of unexplored derivative classes.
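Schematically, the alternation of the two steps is a worklist exploration over canonical forms. In the sketch below (our illustration), one_step(vf) stands for the derivation step just described, canon(vf) for the canonicalisation of Section 3.2, and summing the rates of coincident transitions corresponds to recording a multiplicity n and rate n·r.

    def explore(initial_vf, one_step, canon):
        # one_step(vf) yields (action_type, rate, successor_vf) triples;
        # canon(vf) returns the canonical representative of vf's class
        start = canon(initial_vf)
        unexplored, seen, transitions = [start], {start}, {}
        while unexplored:                    # repeat Derivation and Reduction
            vf = unexplored.pop()
            for action, rate, succ in one_step(vf):
                succ = canon(succ)           # Reduction: fold equivalent targets
                key = (vf, action, succ)
                transitions[key] = transitions.get(key, 0.0) + rate
                if succ not in seen:         # schedule newly found classes
                    seen.add(succ)
                    unexplored.append(succ)
        return seen, transitions             # nodes and arcs of the reduced DG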
In the remainder of this section we present these steps more formally,
but first we introduce some notation for describing the formulation and
manipulation of vectors and vector forms.
• Given a vector P, we write (P_i | φ(P_i)) to denote the sub-vector of
those elements of P which satisfy the predicate φ. When the vector
P is obvious from the context we shall omit it and write the predicate
alone as an abbreviation.
• We write P[P_i := P′_i] to denote the vector obtained from P by substituting
P′_i for P_i.
• When S is a sub-vector (S_1, . . . , S_k), similarly, we write P[S := S′]
as an abbreviation for P[S_1 := S′_1, . . . , S_k := S′_k]. Note that we
only use vector substitution between vectors with the same number of
elements.
The rules which govern the derivation step of the algorithm are shown
in Figure 4. The rule for constant formally states that at the lowest level
defining equations are used to find the activity or activities which can be
inferred from a derivative name. The two rules for hiding correspond to
the first two cases identified above. The most complex rules are those for
cooperation, the third case above. We examine these in more detail.
The first rule states the condition under which a number of identical activities,
(α, r), give rise to derivatives which have identical canonical forms.
For this to be the case the activity (α, r) must be enabled by one or more
component P_i of P. Moreover, for each such possible activity, the vector
form of the resulting derivative is always the same when canonicalised. Formally,
C(P′_i) = C(σ), where σ is an arbitrary element of the vector S′, say its first
element S′_1. Note that this equation does not imply that P′_i and S′_1 are
equal, only that they are in the same equivalence class because they have equal
canonical forms. The vector S′ is defined as the sub-vector consisting of
those derivatives which may potentially change via an (α, r) activity.
Having now formed a vector S satisfying these conditions for the activity
(α, r) we can compute the rate at which the component performs this activity
and evolves to the canonical representative of the derivatives as |S| · r, since
the total rate into the equivalence class will be the sum of the rates of the
individual activities which may make the move.
In the case where only one of the elements of the vector performs an
activity α the complication due to the consideration of multiplicities does
not arise and the rule simplifies to the special case with |S| = 1.
Figure 4: Operational semantics of the vector form (rules for constant, hiding and cooperation)
The complexity in the second rule for cooperation is due to the need
to calculate the rate at which the sub-vector of components in cooperation
performs the activity. Here also there is a simpler case, where the vector is
of size two. This special case of the rule affords easier comparison with the
operational semantics of PEPA, as presented in Figure 1.
The rate R of the activity which is performed in cooperation is computed
from the individual rates r_1 and r_2 as in the corresponding cooperation rule
in Figure 1.
3.4 Implementation
The state space reduction algorithm has been added to the PEPA Workbench
[5], the modelling package which implements the PEPA language and
provides a variety of solution and analysis facilities for PEPA models.
The algorithm is presented in pseudo-code form in Figure 5. The driving
force of the algorithm is provided by the procedure vfderive which,
given a derivative of the model, finds its enabled transitions using the function
cderiv, and calls itself on the resulting derivative. The function cderiv
carries out the canonicalisation of the one-step derivatives which it has produced
using the function derivatives. This function has different cases depending
on the structure of the vector form being handled, each reflecting
the appropriate rule(s) in the semantics. For example, in the case of a choice
the list of possible derivatives consists of the list of derivatives of the second
component of the choice appended to the list of derivatives of the first. The
derivatives of a vector of cooperating components are computed by using
the function cooperations to derive transitions and the function disallow to
enforce that activities of types in a cooperation set are not carried out without
a partner. We make use of a function lookup to retrieve the definitions
of component identifiers from the environment. Finally, the function update
takes a set of elements and a procedure and returns a set in which each
element has been modified by the procedure.
The modification to the PEPA Workbench required the alteration of the
data structure which is used to represent PEPA models as an abstract syntax
tree within the Workbench. The representation of cooperations between
pairs of components was generalised to extend to lists of components. If
the PEPA model which is submitted for processing does not contain any
Figure 5: Pseudo-code for the algorithm. The figure defines the procedure
vfderive, which marks each canonical vector form P, outputs a transition
P --(α, n·r)--> P′ for every ((α, r), n, P′) returned by cderiv, and recurses
on P′; cderiv canonicalises the one-step derivatives produced by derivatives
and collapses identical (activity, vector form) pairs, recording their
multiplicity n; derivatives performs a case analysis on the structure of the
vector form (cooperation, hiding, prefix, choice and constant), using the
auxiliary functions cooperations, disallow, filter, lookup and update described
above.
structure which can be exploited by the state space reduction algorithm
then this change is invisible to any user of the Workbench. However, if the
PEPA model does contain either repeated components or other structure
which can be exploited then the benefits become apparent to the user of the
Workbench in terms of reduced time to generate the CTMC representation
of the model and in terms of the matrix of smaller dimension required for
its storage, once the model gets above a certain size (see Table 3).
4 Example
In this section we show how the algorithm works on an example. We consider
a multiprocessor system with a shared memory, we derive the corresponding
PEPA model, and then the underlying derivation graphs, both ordinary and
aggregated. Some alternatives to our approach are discussed in Section 5,
introduced by means of small variants of the same example.
4.1 Multiprocessor system
Consider a multiprocessor system with a shared memory. Processes running
on this system have to compete for access to the common memory:
to gain access and to use the common memory they need also to acquire
the system bus which is released when access to the common memory is
for simplicity the bus will not be explicitly represented in the
following. Processes are mapped onto processors. The processors are not
explicitly represented but they determine the rate of activities in the associated
processes, i.e. all processes have the same functional behaviour, but
actions progress at different speeds depending on the processor on which
they are running, and the number of processes present on the processor. It
is the modeller's responsibility to select rates appropriately.
A protocol which is not completely fair, but simply prevents one processor
from monopolising the memory, might impose that after each access of a
processor to the memory, some other processor must gain access before the
first can access again. A process running on the ith processor is represented
as a sequential component P_i.
In this case, in order to impose the protocol, the memory is modelled as
remembering which processor had access last. Access for this processor is
disabled. The memory component Mem_i represents the state in which
processor i was the last to have access and is therefore currently excluded.
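The defining equations of P_i and Mem_i are not reproduced above. Purely as an illustration (the action names get_i, use and rel, the rates λ_i, g, μ_i and r, and the use of the passive rate ⊤ are our assumptions rather than the original definitions), they might take a form such as:

\[ P_i \stackrel{\mathrm{def}}{=} (think, \lambda_i).(get_i, g).(use, \mu_i).(rel, r).P_i
\qquad
Mem_i \stackrel{\mathrm{def}}{=} \sum_{j \neq i} (get_j, \top).(rel, \top).Mem_j \]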
If there are n_i processes running on the ith processor, the system Sys is
modelled as the parallel composition of all these replicas of the processes
P_i, cooperating with the memory component Mem_k over the memory-access
activities.
Note that in the cooperation set of this model expression, and throughout
the remainder of the paper, we write get_i as a shorthand for the corresponding
set of get actions. We assume that the starting state of the system excludes access
of an arbitrary processor, number k. The vector form of the model Sys is
derived by applying the equations of Definition 1.
We now show an example derivation of the state space of Sys , both ordinary
and aggregated. For simplicity we consider a smaller system Sys′ in which we
have only two processors and only two replicas of the same process running
on each processor. The simplified system Sys′ is thus specified as the two
replicas of P_1 and the two replicas of P_2 cooperating with the memory
component Mem_1. The derivatives of the processes P_i, for i = 1, 2, and of
the memory components Mem_1 and Mem_2 are obtained by expanding their
defining equations.
Figure 6: Ordinary derivation graph of Sys′ (a portion, with initial state s_1)
Complete derivation graph. The complete derivation graph of Sys′,
computed using the PEPA Workbench [5] with the aggregation algorithm
switched off, has 96 states and 256 transitions. A portion of this graph is
shown in Figure 6. To make the drawing easier to understand we have chosen
to name the derivatives with short names s_i or s′_i, depending on whether
the state has been completely expanded (s_i, in which case its
one-step derivatives are also represented) or not (s′_i). The vector forms corresponding
to the derivatives are listed in Table 2: each row contains the name of the
state and the corresponding vector form. Moreover, it contains information
on whether the vector form is canonical or not, and the name of the state
which represents the corresponding canonical vector form.
Aggregated derivation graph. The aggregated derivation graph, computed
using the PEPA Workbench with the aggregation algorithm switched
on, has 42 states and 88 transitions. A portion of this graph is shown in
Figure 7 and can be compared with the one of Figure 6.
The PEPA model Sys′ has been constructed according to the algorithm:
the sequential components defining the processes and the memory are composed
by means of the cooperation operator to obtain the model equation.
All the derivatives have been explicitly named and we can use the model
equation to generate the vector form of the model which does not have
redundant brackets, and therefore no elimination is required.
At this point the aggregated state space can be obtained by considering
canonical vector forms only, as shown in the graph of Figure 7 in which
only a subset of the states of Table 2, those corresponding to the canonical
vector forms, is explicitly outlined. The names of the nodes are again s_i
or s′_i and the integer numbers in round brackets close to them specify the
number of equivalent states they represent. These numbers can be computed
by considering the number of replicas of the same process in the model
equation and the numbers of equal derivatives in each vector form. As an
example, let us consider the state s_10, whose vector form is listed in
Table 2. This state represents four equivalent derivatives.
This number can be computed by dividing the product of the factorial of
the numbers of the repeated instances of components by the product of the
factorial of the numbers of identical derivatives in the vector form.
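For s_10 this works out as follows (assuming, as the count of four suggests, that the two processes on each of the two processors are in distinct local derivatives):

\[ \frac{2! \cdot 2!}{1! \cdot 1! \cdot 1! \cdot 1!} = 4 . \]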
Table 2: States and vector forms for Sys′. Each row contains the name of the
state, the corresponding vector form, an indication of whether the vector form
is canonical, and the name of the state which represents the corresponding
canonical vector form.
Figure 7: Aggregated derivation graph of Sys′. Nodes are labelled s_i or s′_i,
with the number of equivalent states each represents shown in round brackets.
More generally, the number of states represented could be expressed as
    ( ∏_i n_i! ) / ( ∏_i ∏_j n_{i,j}! )
where n_i is the number of processes running on the
same processor and n_{i,j} are the numbers of equal derivatives of P_i, such
that Σ_j n_{i,j} = n_i.
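This count can be computed mechanically; a small helper (ours) for the formula above:

    from math import factorial

    def represented_states(groups):
        # groups: for each processor, the list of multiplicities of identical
        # local derivatives, e.g. two processes split over two distinct local
        # states gives [1, 1], two in the same local state gives [2]
        count = 1
        for multiplicities in groups:
            n_i = sum(multiplicities)            # processes on processor i
            count *= factorial(n_i)
            for n_ij in multiplicities:
                count //= factorial(n_ij)
        return count

    # two processors, each with its two processes in distinct local states
    print(represented_states([[1, 1], [1, 1]]))  # -> 4, as reported for s10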
The multiplicities of the arcs are also represented and indicate the number
of arcs which have been folded together. The fact that a single arc
represents one or more activities of the same type is reflected in the rate of
the action that labels the arc itself. For instance, the model evolves from the
state s_1 to the state s_3 by executing an action think with rate 2λ_1 because
in s_1, the initial state in which both replicas of P_1 are still in their initial
derivative (cooperating with Mem_1), two activities (think, λ_1) are
concurrently enabled.
Notice that the aggregation we obtain corresponds to finding permutations
of the same components within brackets. This form of aggregation
is pictorially represented in Figure 7 by flattening equivalent nodes of the
derivation graph of Figure 6 onto the same plane.
4.2 Timings
We ran different configurations of the multiprocessor system on a Pentium
III machine with clock frequency of 500 MHz and 128 MBytes of mem-
ory. The times recorded in Table 3 take into account both CPU time and
the time necessary for file I/O.
If there is a single component P i running on each processor, no aggregation
is possible and the execution times of the basic and the modified
Workbench are almost the same. As soon as we add replicas of the same
process, the state space aggregation becomes apparent (compare the second
and the fifth columns in Table 3) as well as the reduction in the execution
times (compare the fourth and the seventh columns), particularly when the
size of the model grows.
5 Alternative aggregations
In this section we illustrate some cases in which our algorithm, or indeed
any syntactic approach, cannot achieve the optimal theoretical partitioning.
In particular we show how greater aggregation could be achieved in some
circumstances if strong equivalence was used to generate partitions instead
Table 3: Execution times of the basic and modified Workbench. For each
configuration (processors, processes) the table reports the number of states,
the number of transitions and the time in seconds, both for the ordinary
derivation graph and for the aggregated derivation graph.
of isomorphism. Note, however, that these cases rely on quite strong conditions
on apparently unrelated activity rates. It is not clear that such conditions
occur with sufficient frequency in real models to justify the additional
complexity needed to implement an approach based on strong equivalence.
The strong equivalence relation is a more sophisticated notion of equiv-
alence, in the bisimulation style, based on observed behaviour. In general,
in a process algebra, two terms are bisimilar if their externally observed behaviour
appears to be the same. Strong equivalence assumes that both the
action type and the apparent rate of each activity is observable. Informally,
two PEPA components are strongly equivalent if their total conditional transition
rates to strongly equivalent terms are the same for all action types.
The conditional transition rate from P to P′ via an action type α is
denoted by q(P, P′, α). This is the sum of the activity rates labelling arcs
connecting the corresponding nodes in the DG which are also labelled by
the action type ff. The conditional transition rate is thus the rate at which
a system behaving as component P evolves to behaving as component P 0
as the result of completing an activity of type ff. If we consider a set of
possible derivatives S, the total conditional transition rate from P to S,
denoted q[P, S, α], is equal to the sum Σ_{P′ ∈ S} q(P, P′, α).
The definition can thus be formally stated as follows.
Let T denote the set of all language terms, or derivatives. An
equivalence relation over derivatives, R ⊆ T × T, is a strong equivalence if
whenever (P, Q) ∈ R then for all α ∈ A and for all S ∈ T/R,
q[P, S, α] = q[Q, S, α].
We say that P and Q are strongly equivalent, written P ≅ Q, if (P, Q) ∈ R
for some strong equivalence R, i.e.
≅ = ∪ { R | R is a strong equivalence }.
Two of the following examples demonstrate the use of strong equivalence
for aggregation. However, in the first example we show how the abstraction
operator may be used at a higher syntactic level in the model and
introduce symmetries between components which appear quite distinct in
their defining equations. These symmetries rely on the context in which the
components are placed, something not currently captured by our algorithm.
In [11], Ribaudo distinguishes two forms of aggregation which can be
found using strong equivalence. Horizontal aggregation arises from the interleaving
of the activities of similarly behaved components. This aggregation
takes advantage of repeated instances of the same pattern of behaviour
within the overall model structure. The aggregation found using
our algorithm may be termed a horizontal aggregation. In contrast, vertical
aggregation arises when there are repeated patterns of behaviour within a
single component. In the second example presented below, a variant of the
multiprocessor model is considered in which a horizontal aggregation can
be found using strong equivalence although isomorphism would regard the
components as distinct. Finally we give an example where a vertical aggregation
is possible with strong equivalence but not with isomorphism, and
consequently not with our syntactic approach.
5.1 Aggregation via abstraction
The facility to hide or abstract action types within a PEPA model is designed
to give the modeller the freedom to construct components in detail to ensure
that their behaviour is accurately represented but to subsequently restrict
the visible action types to only those relevant to the current modelling study.
For example, in the model of the multiprocessor presented in the previous
section, the modeller may choose to hide all the get i actions. In terms of
capturing the correct behaviour of the protocol it was important that these
action types were distinguished; but in terms of the complete model they
may all be regarded as internal - actions.
Hiding all of these activities introduces strong symmetries into the model
in terms of its functional behaviour. If, moreover, we find that the processes
which are running on different processors share the same timing characteris-
tics, i.e. - then the symmetries are apparent
in all aspects of the model's behaviour. Only one process can access the
memory at any time-and for the subsequent memory access its host processor
is excluded-but the processes on all other processors behave equiv-
alently. This means that we need only consider two classes of processors,
those excluded and those eligible for access, regardless of their placement on
processors. Once the get i activities are all hidden it is no longer possible to
identify from these processes which type of process is operating.
For example, consider the multiprocessor with three processors, and two
processes running on the first, one on the second and two on the third. Then
if we regard the system immediately after the process P 2 has completed an
access to the memory and when one other process is waiting for access,
the behaviour of the system is isomorphic regardless of whether the waiting
process is on processor 1 or processor 3, i.e. all the following states are
isomorphic:
Mem
namely the four states that differ only in which of the two processes on
processor 1 or on processor 3 is the one waiting for access, each in cooperation
with the same derivative of the memory component.
not place them within a single partition but into two: one consisting of
the first two states and one consisting of the second pair. This is because
the processes operating on different processors have distinct names and distinct
actions get i -this is necessary to ensure the correct functioning of the
protocol-and the syntactic form of minimisation that we use cannot recognise
that in some contexts P 1 and P 3 will behave equivalently.
This could be regarded as a penalty for the richness of the language. For
example, the analogous situation does not arise in Petri net-based models
because there is no notion of abstraction or hiding.
5.2 Horizontal aggregation via strong equivalence
Isomorphism is a strict structural equivalence: there must be a one-to-one
relationship between both derivatives and activities. The observation-based
strong equivalence is not so strict. Although corresponding derivatives must
be capable of the same action types at the same apparent rates, how these
are implemented as activities in the derivatives may differ as the following
example demonstrates.
Suppose that on processor 1 two different types of process may be run-
ning. The first is identical to the process P 1 discussed in Section 4.1. The
second, call it P″_1, has a similar pattern of behaviour but offers two
alternative local computations (two alternative think activities) between its
accesses to the common memory.
If the rates of these two alternative think activities sum to λ_1, the apparent
rate of think in P_1, then P″_1 is strongly equivalent to P_1 although the two
are clearly not isomorphic. Thus
if we consider the corresponding system (with memory component Mem_2)
in which processor 1 runs both kinds of process, our algorithm will distinguish
the derivatives reached through P″_1 from the corresponding derivatives
reached through P_1, whereas a partitioning based on strong
equivalence would consider them to be equivalent. In this case the state
space aggregated by our algorithm will have 64 states whereas aggregation
based on strong equivalence would result in 42 states.
5.3 Vertical aggregation via strong equivalence
We can identify a second source of aggregation which can be achieved by
strong equivalence but which is not captured by our algorithm: so-called vertical
aggregation. Here we illustrate the vertical aggregation case by means
of another variant of the multiprocessor example. We consider a process
which, after the use of the memory, can detect an error. If this is the case
it does not return directly to the initial state; instead, it must complete a
recovery action and repeat the access to the memory. For this new process
the expansion of derivatives includes, after the use of the memory, an error
branch taken with probability p, where p is the probability that an error occurs. The derivation graph of
such a process P i is shown in Figure 8(a). Now we suppose that the action
Figure 8: Derivation graphs of P_i ((a) the original graph, (b) the graph after combining strongly equivalent derivatives)
types think and recover are hidden and become internal to the component.
Moreover, we assume that the rate of the hidden recover activity equals that
of the hidden think activity. If this is the case the derivatives P_i and P′′′′_i
are strongly equivalent, and we can aggregate them to form the
macro-state [P_i]. Similarly, we combine the arcs labelled (rel, (1−p)·r)
and (rel, p·r) into a single arc labelled (rel, r) connecting P′′′_i to the
macro-state [P_i] (Figure 8(b)).
This form of aggregation relies on the information about the operational
behaviour of the component represented in the derivation graph.
It cannot be detected by the purely syntactic means used in our algorithm.
Approaches based on bisimulation style equivalences, such as strong equiv-
alence, work at the semantic rather than the syntactic level. Thus they are
not, in general, comparable to our approach.
6 Related work
The exploitation of symmetries to achieve aggregation of performance models
is a well-explored topic. Several automated approaches have been described
in the literature. In this section we give a brief account of some
of the work that has appeared in the context of stochastic Petri nets and
stochastic process algebras, and explain how that work relates to our own.
In each case the objective is to generate a partitioning of the original CTMC
which satisfies the condition of lumpability.
The closest approach to our own is the work on a class of stochastic
coloured Petri nets called Stochastic Well-formed Nets (SWN) [12]. Stochastic
Petri nets (SPN) [13] have been extensively used for the functional analysis
and performance evaluation of distributed systems. Their modelling
primitives consist of places and (timed) transitions, representing system
states and system events respectively. Just as in PEPA, in order to analytically
solve a SPN model, the associated stochastic process must be derived
by computing the set of reachable states (markings). Moreover, just as in
PEPA, for realistic systems the computation of the state space can often
lead to models whose size makes them intractable.
In order to tackle this problem SWN allow the construction of a parametric
representation of a system. This is achieved by folding similar subnets
and by adding a colour structure to distinguish tokens that, after the folding,
belong to the same place. The nets are restricted in terms of the possible
colour domains for places and transitions and in terms of the possible colour
functions. These restrictions allow symmetric structures within the model
to be exploited for solution purposes. In particular, these structures are
automatically detected and the reduced state space is constructed without
recourse to the complete state space. The reduction is obtained through the
concept of symbolic marking [12].
Informally, a symbolic marking corresponds to an equivalence class of
ordinary markings sharing the same characteristics. Unlike our approach,
there is no formal equivalence relation defined to underpin the partitioning.
In fact, the ordinary markings in the same equivalence class enable the same
set of transitions, whose firings lead to new ordinary states which are still
equivalent, i.e. belong to the same symbolic marking.
Starting from a symbolic representation of the initial marking a symbolic
reachability graph is constructed via a symbolic firing rule. Each symbolic
marking is represented in a minimal, canonical form. Note that unlike our
algorithm in which minimisation is carried out only in the preprocessing, in
the SWN case minimisation has to be repeated after each symbolic derivation
step. The symbolic reachability graph is used to generate a reduced CTMC
and it has been proved [14] that it is lumpably equivalent to the original
CTMC. Thus the same performance estimates can be computed with a lower
computational cost.
Another Petri net-based approach has been developed in the context
of Stochastic Activity Networks (SAN) [15]. This formalism incorporates
features of both SPNs and queueing models and makes use of compositional
operators, similar to those found in process algebras. The primitives of the
formalism are places, activities (equivalent to Petri nets transitions), which
may be guarded by input gates, representing enabling rules, and output
gates representing completion rules. Once submodels have been constructed
representing the components of the system they may be combined using
the replication and join operations. The replication operator captures the
case of a system containing two or more identical subsystems. The join
operator combines SAN submodels of different types. Use of these operators
makes symmetries within the model explicit and so facilitates a compact
representation of the state space.
The structure of a composed SAN is represented by a directed tree with
different types of nodes. Leaf nodes capture the distinct SAN submodels,
i.e. the basic elements to which the construction operators apply. Internal
nodes with one child are replication nodes, their child being the submodel
to be replicated. Internal nodes with two or more children are join nodes,
the children representing the submodels to be joined together.
From this tree a state representation is automatically extracted that
is minimal in the sense that states which differ only by a permutation of
repeated components are grouped together into a single combined state.
Each such state is represented by recording, for each replication node, the
number of replicated SANs in each possible submodel marking, and for each
join node, a vector of the markings of each joined submodel. In addition
each state maintains information about the desired performance variable [15]
but this is outside the scope of this paper. There are clear parallels between
this state representation and our vector form discussed in Section 3.2.
The other work on aggregation of stochastic process algebra models is
developed almost entirely at the semantic level. In this approach well known
graph partitioning algorithms are used to reduce the labelled transition system
underlying the process algebra model [16, 4]. In [17] a more syntactic
approach is taken but this is on an ad hoc basis without a corresponding
tool implementation. Equational laws derived from Markovian bisimulation,
which is equivalent to strong equivalence, are used to obtain state space reduction
of a MTIPP model. This is achieved by term rewriting based on
judicious application of the laws. However, although good results can be
obtained on particular models, no set of term rewriting rules which can be
used for aggregation purposes has been found.
In some approaches good results have been obtained by modifying and
restricting the combinators of the language to make symmetries more explicit
and disallowing difficult cases. For example, in [18] a symmetric parallel
composition operator is used to capture the case of n-ary
parallel composition of identical replicas of a process, all synchronising on actions
in S. This operator provides a means of expressing a number of replicated
copies of a process but it cannot express synchronisation of repeated copies
over different synchronisation sets. The operational semantics of the new
operator is consistent with the usual parallel composition but a reduced
state space is produced. This can be regarded as the SPA equivalent of the
SAN approach outlined above. States which differ only by a permutation of
replicated submodels are treated as equivalent.
Earlier work on MTIPP took a similar approach in terms of altering the
combinators of the language. In [19] a replication operator, here denoted
S P has the same informal semantics as fn!PgS above. Hiding and the usual
general parallel composition operator are removed from the language. The
distinction of this approach is that a denotational matrix semantics is given
rather than the more usual operational semantics. Using this approach the
infinitesimal generator matrix of the CTMC is constructed directly. Moreover
Rettelbach and Siegle show that the transition matrix resulting from
the semantics are minimal with respect to Markov chain lumpability (i.e.
the matrices do not have subsets of equivalent states).
The disadvantages of both these approaches are that they require the
modeller to adhere to a new set of combinators and this form of cooperation
does not allow different synchronisation sets amongst replicas of the same
component. The techniques do not appear to have been automated. In
contrast our algorithm works transparently with the PEPA language taking
advantage of whatever symmetries are present in the model submitted to
the PEPA Workbench by the user.
7 Conclusions and further work
We have shown how the existence of isomorphisms between terms in the
derivation graph of a stochastic process algebra model can be exploited to
aggregate the state space of the model. Our algorithm for this collapses the
derivation graph at each model state and does not require a costly computation
of bisimulation equivalence between components of the model. We have
found it to be applicable in situations where the full derivation graph is too
large even to be generated [20]. Further, we believe that many of the models
which occur in practice would contain symmetries of the types which can be
exploited by isomorphism. However, against these advantages our algorithm
cannot be guaranteed to achieve the maximum possible aggregation for all
models.
Generating an aggregated derivation graph will allow speedier computation
of the steady state probability distribution of the CTMC which corresponds
to a PEPA model. We have not discussed in this paper the influence
of aggregation on the interpretation of this probability distribution in terms
of the given PEPA model. When examining the steady state distribution in
order to determine performance factors such as throughput and utilisation
the PEPA modeller must now select sets of model states of interest via the
description of canonical representatives in the state space. This is an added
reason for choosing to aggregate with isomorphism instead of with bisimulation
because the formation of a canonical representative of an isomorphism
class is simpler. However, the full investigation of this issue remains as
further work.
Our work has been influenced by earlier work on SWN [21]. However,
we stress that significant adjustments to the approach have been necessary
for the development of the algorithm for SPA: it is not a straightforward
translation of results. Nevertheless we feel that there is considerable benefit
to be gained from studying the relationship between formalisms with the
objective of importing ideas, and when appropriate, techniques from one to
the other.
Acknowledgements
This collaboration took place within project "Rom/889/94/9: An enhanced
tool-set for performance engineers" funded by The British Council and
MURST. Stephen Gilmore is supported by the 'Distributed Commit Pro-
tocols' grant from the EPSRC and by Esprit Working group FIREworks.
Jane Hillston is supported by the EPSRC 'COMPA' grant.
The authors would like to thank the anonymous referees for helpful comments
on an earlier version of this paper and to thank Graham Clark for
implementation work on the PEPA Workbench.
References
A Compositional Approach to Performance Modelling.
Stochastic process algebras.
A Tutorial on EMPA: A Theory of Concurrent Processes with Nondeterminism
Compositional Markovian modelling using a process alge- bra
The PEPA Workbench: A tool to support a process algebra based approach to performance modelling.
Compositional performance modelling with TIPPTool.
Finite Markov Chains.
From SPA models to programs.
On the aggregation techniques in stochastic Petri nets and stochastic process algebras.
Stochastic Well-Formed coloured nets for symmetric modelling applications
Performance analysis using stochastic Petri nets.
Stochastic Well-Formed coloured nets and multiprocessor modelling applications
Reduced Base Model Construction Methods for Stochastic Activity Networks.
Compositional nets and compositional aggregation.
Stochastic Process Algebras as a Tool for Performance and Dependability Modelling.
Exploiting Symmetries in Stochastic Process Algebras.
Compositional Minimal Semantics for the Stochastic Process Algebra TIPP.
Investigating an On-Line Auction System using PEPA
On Well-Formed coloured nets and their symbolic reachability graph
Keywords: stochastic process algebras; performance modeling; performance evaluation tools; model aggregation
Social role awareness in animated agents

ABSTRACT
This paper promotes social role awareness as a desirable capability of animated agents, that are by now strong affective reasoners, but otherwise often lack the social competence observed with humans. In particular, humans may easily adjust their behavior depending on their respective role in a socio-organizational setting, whereas their synthetic pendants tend to be driven mostly by attitudes, emotions, and personality. Our main contribution is the incorporation of 'social filter programs' to mental models of animated agents. Those programs may qualify an agent's expression of its emotional state by the social context, thereby enhancing the agent's believability as a conversational partner or virtual teammate. Our implemented system is entirely web-based and demonstrates socially aware animated agents in an environment similar to Hayes-Roth's Cybercafé.

1. INTRODUCTION
Ever since Bates and Reilly promoted believable agents in
their 'Oz project' [2], there has been continued interest in
giving animated agents the illusion of life. It is now widely accepted
that emotion expression and personality are key components
of believable agents. Moreover, Cassell and her co-workers
recently provided convincing evidence of the importance
of non-verbal 'embodied' (conversational) behavior
for believable lifelike agents. Animated agents with believable
behavior are used as virtual tutors in interactive learning
environments (e.g., Johnson et al. [16]), as virtual presenters
on the web (e.g., André et al. [1], Ishizuka et al. [14]),
and as virtual actors for entertainment (e.g., Rousseau and
Hayes-Roth [28]). Although those agents achieve convinc-
ing behavior in their respective predefined roles, they might
fall short of 'social robustness' when put into a different sce-
nario. E.g., a tutor agent might be believable for a student
(in the teacher role) but not in the role of a peer. Humans,
on the other hand, are always aware of the roles they play
in a certain social setting and typically behave accordingly.
In this paper, we will argue that social role awareness
is an important feature of human-human communication
which should be integrated into existing animated agents ap-
proaches. Social role awareness is easily illustrated, as in
the following conversation.
Aspirant (to secretary): I need a copy of this document.
Secretary (to aspirant): So what?
Manager (to secretary): I need a copy of this document.
Secretary (to manager): Sure. I will do it immediately.
The secretary's polite behavior towards the manager is
consistent with her unfriendly reaction to the aspirant, simply
because she is aware of her role-specific rights and duties
in the socio-organizational context of an office environment.
Awareness of her role as secretary allows her to ignore the
(indirect) request from the aspirant (whose role is assumed
to be lower than the secretary's on the power scale). We
believe that the conversation above cannot be understood
(or generated) by means of reasoning about personality and
attitudes alone. Even if the secretary happens to have a
rather aggressive personality and she does not like her man-
ager, she would respect the manager's rights and obey
his or her request. Similarly, we will argue that an agent's
role determines its way of emotion expression. Consider a
situation where the (aggressive) secretary is angry with her
manager. She will presumably not show her emotion to the
manager, being aware of her social role as an employee. On
the other hand, she might express her anger to a fellow secretary
who has equal social power.
Our goal is to create autonomous agents that can serve as
dramatically interesting conversational partners for the task
of web-based language conversation training. Specifically,
the animated agent approach will be used to improve English
conversation skills of native speakers of Japanese. Since
interactions are set up as role-playing dramas and games,
strong requirements are imposed on the agents' social abili-
ties. Social reasoning will be blended with a rather standard
theory of reasoning about emotion (Ortony et al. [25, 24])
and a simple model of personality. We employ Moulin and
Rousseau's [22] approach to model and simulate conversa-
tions, which provides a rich framework for many aspects of
inter-agent communication. The programmable interface of
the Microsoft Agent package [19] is used to run our role-playing
scenarios. Although these off-the-shelf agent characters
are quite restricted in the number of behaviors, the
package comes ready with a speech recognizer and text-
to-speech engine that allow client-side execution in a web
browser.
The rest of this paper is organized as follows. The next
section discusses related work. In Section 3, we describe a
framework for modeling and simulating conversations. In
the following section, we introduce a simple affective reasoner
and argue that reasoning about emotion and personality
is not sufficient to achieve believable emotion expression.
Section 5 describes the basic notions underlying social reasoning
and introduces social filter programs that function as
a filter between affective state and emotion expression. In
Section 6, we first explain the web technology used to run the
animated agents. After that, we illustrate our approach by
example runs of a Cybercafé-style role-playing drama. Section
7 discusses the paper and suggests further extensions
and refinements. Finally, we summarize the paper.
2. RELATED WORK
Several research groups have addressed the problem of
socially intelligent agents in the framework of multi-agent
systems (e.g., Jennings [15], Castelfranchi [5]). On the other
hand, there are relatively few researchers who focus on social
role related behavior from the perspective of the believability
of animated character agents.
A notable exception is Hayes-Roth and her co-workers
(Hayes-Roth et al. [13], Rousseau and Hayes-Roth [28]). In
[13], role-specific behavior is studied in the context of characters
(animated agents) that function as actors in a master-
servant scenario. Social roles are defined by behaviors that
represent a character's status, e.g., high status is effected
through a quiet manner and ways of talking that forbid in-
terrupting. A character's believability in a specific role is
justified by adopting guidelines from the literature on the art
of drama, rather than by exploiting a social-psychological
model. Thus, no attention is paid to a character's representation
of its role. Rousseau and Hayes-Roth [28], however,
propose an elaborate social-psychological model that considers
personality, emotions (moods), and attitudes. Rules
are defined that allow to select appropriate behavior depending
on a character's personality and attitudes, thereby
enhancing the character's believability. Interestingly, the
social roles discussed in [13] are only of marginal importance
in this model. By contrast, we believe that an agent's
awareness of its social role is equally important for action
selection, and may even overrule the influence of personality.
Walker et al. [29] promote linguistic style as a key aspect
for believable agents. Linguistic choices are seen as realizations
of agents' personality, and subject to social variables
such as social distance between agents or the power one
agent has over another. The Linguistic Style Improvisation
(LSI) framework is based on speech acts theory and a theory
of linguistic social interaction. As a theory of social conver-
sation, LSI is clearly superior to our approach, which does
not provide a formalization of speech acts and uses a simpler
algorithm to decide the linguistic style of utterances.
However, we will provide a subtler account of the interaction
between an agent's emotional state, personality, social
role, and emotion expression.
Gratch [10] introduces 'social control programs' on top of
a general purpose planning system. In this system, plan
generation and execution are biased by the characteristics
of the social context. A so-called 'personality GUI' contains
the agent's goals, its social status, etiquette, (in)dependence,
and attitudes towards other agents. Besides those static features
of the agent's mental state, Gratch [10] introduces dynamic
features of the social context such as the communicative
state, the plan state, and the agent's emotional state.
Social rules encode commonsense rules of social interaction,
e.g., help a friend or prevent another agent from interfering
with your plan. For the rather rigid organizational setting
in which military commander agents operate, Gratch and
Hill [11] introduce social concepts similar to ours. The main
difference from our work is that we place social control programs
at the interface of the module that reasons about
emotion and the module that renders the emotional state to
actual behavior. By considering the social context, we aim
to achieve believable emotion expression rather than generate
socially adjusted plans.
Guye-Vuillème and Thalmann recently started to work on
an architecture for believable social agents which is based on
four sociological concepts: social norms, values, world view,
and social role (see abstract [12]).
Finally, the research most relevant to ours is done by
Moulin and collaborators [21, 22]. It will be described in
the following section. We also continue work of Prendinger
and Ishizuka [27] who motivate the role-playing metaphor
for interactive learning environments.
3. A CONCEPTUAL FRAMEWORK FOR
SIMULATING CONVERSATIONS
A conversation is typically seen as an activity where multiple
(locutor-)agents participate and communicate through
multiple channels, such as verbal utterances, gestures and
facial display. Each agent has its own goals and will try to
influence other participants' mental states (e.g., emotions,
beliefs, goals). Moulin and Rousseau [22] distinguish three
levels of communication:
- At the communication level agents perform activities
related to communication maintenance and turn-taking.
- At the conceptual level agents transfer concepts.
- At the social level agents manage and respect the social
relationships that hold between agents.
Our system integrates the second and third level. The communicative
level basically implements conversational features
of human-human communication, as proposed by Cassell
and Thórisson [4]. At the conceptual level, information is
passed from one agent to other agents as a (simplified) symbolic
representation of the utterance, e.g., if an agent orders
a beer, this is simply represented as order_beer. According
to their role in the social context, the social level puts behavioral
constraints on agents' actions and emotion expression
(Moulin [21]). This issue will be discussed in detail below.
As an example, consider an agent character playing the
role of a customer called 'Al' and an agent character in the
role of a waiter called 'James'. Al orders a beer from James
by saying "May I order a beer please?". The corresponding
communicative act is formalized as
com_act(al, james, order_beer, polite, happiness, s0)
where the argument 'polite' is a qualitative evaluation of the
linguistic style (LS) of the utterance, the argument 'happi-
ness' refers to Al's emotion expression, and s0 denotes the
situation in which the utterance takes place.
As in [22], we assume that a conversation is governed by
- a conversational manager that maintains a model of
the conversation, and
- an environmental manager that simulates the environment
in which the agents are embedded.
For simplicity, we assume that the conversational manager
operates on a shared knowledge base that is visible to all
agents participating in the conversation (except for the user).
It stores all concepts transferred during the conversation by
updating the knowledge base with
com_act(S, H, C, LS, E, Sit)
facts. The resulting 'model' of the conversation will eventually
be substituted by a less simple-minded conversation
model incorporating a formalization of speech acts (as, e.g., in
Moulin and Rousseau [22]). Moreover, the conversational
manager maintains a simple form of turn-taking manage-
ment, by assigning agents to take turns based on their personality
traits. E.g., if James is an extrovert waiter, he would tend
to start a conversation with a customer; this tendency follows
from the extroversion degree in his personality specification
(see Section 4.2).
The environmental manager simulates the world that agents
inhabit and updates its (shared) knowledge base with consequences
of their actions. E.g., if the agent character Al
got his beer in situation s5, this will be stored as
holds(al, has_beer, s5)
The characteristics of the environment are encoded by a set
of facts and rules. Situation calculus is used to describe and
reason about change in the environment (Elkan [7]).
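The following minimal Prolog-style sketch (not the actual implementation code) illustrates how the two managers could populate the shared knowledge base; the helper predicate perceived_emotion/4 is only an illustrative addition.
% Conversational manager: communicative acts stored as com_act/6 facts.
com_act(al, james, order_beer, polite, happiness, s0).
% Environmental manager: consequences of actions, stored as holds/3 facts.
holds(al, has_beer, s5).
% Illustrative helper: the emotion an agent displayed towards a hearer.
perceived_emotion(Hearer, Speaker, Emotion, Sit) :-
    com_act(Speaker, Hearer, _Concept, _Style, Emotion, Sit).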
4. MENTAL MODELS
Each agent involved in the conversation is assumed to have
its own mental model. A mental model may contain different
kinds of entities, including world knowledge (beliefs, plans)
and affective mental states (emotions, personality, moods,
goals, attitudes). In this paper, we will concentrate on reasoning
about affective states and social reasoning.
4.1 Reasoning about Emotion vs. Emotion Expression
It is widely accepted that animated agents expressing emotions
are important to make the interaction with them more
enjoyable and compelling for users (e.g., Lester et al. [17]).
Emotional behavior can be conveyed through various chan-
nels, such as facial display, speech and body movement. The
so-called 'basic emotions' approach (Ekman [6]) distills those
emotions that have distinctive (facial) expressions associated
with them: fear, anger, sadness, happiness, disgust, and sur-
prise. Murray and Arnott [23] describe the vocal effects on
Emotion type 'joy': agent L is in a 'joy' state about
state-of-affairs F with intensity δ in situation S if
L wants F in S with desirability degree δ_Des(F)
and F holds in S, and δ = δ_Des(F).
Emotion type 'angry-at': agent L1 is angry at
another agent L2 about action A with intensity δ in S if
agent L2 performed action A prior to S
and action A causes F to hold in S
and agent L1 wants ¬F with degree δ_Des(¬F) in S
and L1 considers A blameworthy with degree δ_Acc(A).
Figure 1: Specifications for joy and angry-at.
the basic emotions found in [6], e.g., if a speaker expresses
the emotion 'happiness', his or her speech is typically faster,
higher-pitched, and slightly louder.
Although a 'basic emotions' theory allows relating emotion
to behavior (emotion expression), it cannot answer the
question why an agent is in a certain emotional state. How-
ever, reasoning about emotions is considered equally important
for presentation, pedagogical, and entertainment agents
(e.g., André et al. [1], Johnson et al. [16], Rousseau and
Hayes-Roth [28]). Many systems that reason about emo-
tions, so-called affective reasoners, derive from the influential
'cognitive appraisal for emotions' model of Ortony,
Clore, and Collins [25], also known as the OCC model (e.g.,
Elliott [8], O'Rorke and Ortony [24]). Here, emotions are
seen as valenced reactions to events, agents' actions, and ob-
jects, qualified by the agents' goals (what the agent wants),
standards (what the agent considers acceptable), and attitudes
(what the agent considers appealing). The OCC
model groups emotion types according to cognitive eliciting
conditions. In total, twenty-two classes of eliciting conditions
are identified and labeled by a word or phrase, such as
'joy', or `angry at'. We defined rules for a subset of the OCC
emotion types: joy, distress, hope, fear, happy for, sorry for,
angry at, gloats, and resents (see also O'Rorke and Ortony
[24], Gratch [9]). In Fig. 1, the emotion types joy and angry-
at are described. 1 The intensities of emotions are computed
as follows (all intensity degrees range over {1, 2, ..., 5}). In case
of 'joy', we set δ = δ_Des(F). For emotions where two intensities
have to be combined, such as δ_Des(¬F) and δ_Acc(A) in the
specification of the 'angry-at' emotion, a logarithmic combination is employed.
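As an illustration, the 'angry-at' specification of Figure 1 could be encoded along the following lines, reusing the fact formats of Figure 3 (holds/2, causes/3, wants/4, blameworthy/3). This is only a sketch: opposite/2 is an illustrative helper, and combine/3 is a simple placeholder that does not reproduce the logarithmic combination used in our system.
% L1 is angry at L2 about action A with intensity Delta in situation S.
angry_at(L1, L2, A, Delta, S) :-
    holds(did(A, L2), S),              % L2 performed action A
    causes(A, F, _),                   % A brings about state-of-affairs F
    opposite(G, F),                    % illustrative: G is the opposite of F
    wants(L1, G, DeltaDes, S),         % L1 wants the opposite of F
    blameworthy(L1, A, DeltaAcc),      % L1 finds A blameworthy
    combine(DeltaDes, DeltaAcc, Delta).
% Placeholder intensity combination, capped at the maximal degree 5.
combine(D1, D2, D) :- D0 is (D1 + D2 + 1) // 2, (D0 > 5 -> D = 5 ; D = D0).
% Example link between complementary states-of-affairs (illustrative).
opposite(get_vacation, no_vacation).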
By example, let us explain the 'angry at' emotion type.
Assume that a secretary is angry at her manager because
she is refused a vacation. If the secretary has the
goal to take a vacation with desirability degree δ_Des = 3, and
considers the refusal as blameworthy with a certain degree δ_Acc,
then she will be angry at her manager with the corresponding
intensity degree. But how will she react to her manager? Presumably
she will nod, showing that she understood the manager's
1 The specification of the 'joy' emotion is related to the specification
of the 'satisfaction' emotion, whereby the latter one
is prospect-based. An agent L is satisfied if a hoped-for
state-of-affairs F holds, where L hopes for F if L wants F
and anticipates F . See Gratch [9] for an in-depth treatment
of computing intensities for prospect-based emotions.
answer, and try to convince the manager that she really
needs some days off in a calm voice, with a rather neutral
facial expression.
The secretary's behavior (suppressing the expression of
her emotional state) can be explained in at least two ways.
First, she might have personality traits that characterize
her as very friendly. Second, and probably more important
in this scenario, she might be aware of her social role as an
employee which puts behavioral restrictions on her answer to
the manager. Having said this, we should make explicit that
we only consider deliberative forms of emotion (expression),
as opposed to automatic 'hard-wired' processes of emotion
expression (see Picard [26]).
Below, we provide a brief characterization of personality,
and in the following we will try to explicate the impact of the
social dimension on emotion expression in communication.
4.2 Personality
Personality traits are typically characterized by patterns
of thought, attitude, and behavior that are permanent or
at least change very slowly. Most importantly, believable
agents should be consistent in their behavior (Rousseau and
Hayes-Roth [28]). To keep things simple, we consider only
two dimensions of personality, which seem crucial for social
interaction. Extroversion refers to an agent's tendency
to take action (e.g., being active, talkative). Agreeableness
refers to an agent's disposition to be sympathetic (e.g., being
friendly, good-natured). We assume numerical quantification
of dimensions, with a value out of {-3, -2, -1, 1, 2, 3}.
If an animated agent called 'James' is very outgoing and
slightly unfriendly, it is formalized as
personality_type(james, extrovert, 3, agreeable, -1)
As mentioned in Section 3, we consider the first dimen-
sion, extroversion, in the conversational manager: outgoing
agents try to take turn in a conversation whenever possi-
ble, whereas introverted agents only respond when offered
the turn.
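One possible encoding of this turn-taking heuristic, assuming the personality_type/5 fact format of Figure 3 (the predicate takes_initiative/1 and the zero threshold are illustrative assumptions):
personality_type(james, extrovert, 3, agreeable, -1).   % example fact
% Outgoing agents grab the turn; introverted agents wait to be offered it.
takes_initiative(Agent) :-
    personality_type(Agent, extrovert, E, agreeable, _),
    E > 0.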
Moffat [20] points out the close relationship between personality
and emotion, although they seem very different:
emotions are short-lived and focused whereas personality is
stable and global. He also considers mood which is rather
short-lived (like emotion) and not focused (like personality).
Later on, we will consider personality (or mood) to bias
emotion expression given a certain emotional state. Con-
sequently, an agent that is sorry for another agent and is
friendly will express its emotion more intensely than an unfriendly
agent.
5. SOCIAL FILTER PROGRAMS
Basically, a social filter program consists of a set of rules
that encode qualifying conditions for emotion expression.
The program acts as a 'filter' between the agent's affective
state and its rendering in a social context, such as a conver-
sation. Hence, we prefer to talk about social filter programs
rather than control programs (Gratch [10]). We consider the
agent's personality and the agent's social role as the most
important emotion expression qualifying conditions.
5.1 Roles, Conventions, and Social Networks
A significant portion of human conversation takes place
in a socio-organizational setting where participating agents
have clearly defined social roles, such as sales person and
customer, or teacher and student (Moulin [21]). Each role
has associated behavioral constraints, i.e., responsibilities,
rights, duties, prohibitions, and possibilities. Depending
on its role, an agent has to obey communicative conventions
(Lewis [18]). These conventions function as a regulative device
for the agent's choice of verbal expressions in a given
context. Conventional practices (i.e., behavioral constraints
and communicative conventions) can be conceived as guidelines
about socially appropriate behavior in a particular organizational
setting. In this paper, we will focus on the
choice of verbal and non-verbal behavior (emotion expres-
sion), depending on the agent's social role and personality.
Formally, in social or organizational groups roles are ordered
according to a power scale, which defines the social
power of an agent's role over other roles. For agents L_i and
L_j, the power P of L_i over L_j is expressed as P(L_i, L_j);
if P(L_i, L_j) = 0, L_j considers itself as of the
same rank as L_i. The social network is specified by the social
roles and associated power relations. Walker et al. [29]
also consider social distance between speaker and hearer to
determine an appropriate linguistic style. Similarly, we use
D(L_i, L_j) to express the distance between two agents
(D ∈ {0, 1, 2, 3}). Given values for power and distance, an
agent L_i computes the (social) threat θ from agent L_j by
just adding the values, i.e., θ(L_i, L_j) = P(L_j, L_i) + D(L_i, L_j).
This is of course a very simple view of a social network
but, as shown below, it already allows us to explain various
phenomena in actual conversations. Observe that a zero
value for threat can be interpreted in three ways: (i) there
is no threat for an agent L, (ii) L chooses not to respect
conventional practices, and (iii) L is not aware of any threat.
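The threat computation described above is straightforward to encode; the sketch below uses the social_power/3 and social_distance/3 fact formats of Figure 3 (example facts repeated from there).
social_power(manager, james, 3).      % power of the manager role over james
social_distance(james, manager, 2).   % social distance between the two
% Theta is the social threat that agent L faces from agent Other.
social_threat(L, Other, Theta) :-
    social_power(Other, L, P),
    social_distance(L, Other, D),
    Theta is P + D.
% ?- social_threat(james, manager, T).   gives T = 5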
5.2 Social Filter Rules
In the following, we will give some examples of social filter
rules. We assume that emotion expression (e.g., facial
display or linguistic style) is determined by personal expe-
rience, background knowledge, and cultural norms (Walker
et al. [29]), as well as the 'organizational culture' (Moulin
[21]). Our rules are consistent with Brown and Levinson's
theory of social interaction, as reported in [29].
If the conversational partner has more social power or
distance is high (i.e., θ is high), the expression of 'nega-
tive' emotions is typically suppressed, resulting in 'neutral-
ized' emotion expression (see Fig. 2). The first condition
of the rule for emotion expression of 'anger' concerns the
social context, the second condition the agent's personality
(agreeableness), and the third accounts for the output of
the affective reasoner, the emotional state. The intensity ε
of emotion expression is computed from the emotion intensity δ,
the social threat θ, and the agreeableness degree γ. Consider
the case of an agent that is very angry (δ = 5), rather
unfriendly, and that considers the social threat as maximal:
here the angry emotion is completely suppressed. On the other
hand, if the threat is negligible, the agent's (negative) agreeableness
dimension comes into force and the computed value may exceed
five, the maximal intensity level; greater values are cut off.
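Putting the pieces together, an anger filter rule in the spirit of Figure 2 can be sketched as follows, building on angry_at/5, social_threat/3 and personality_type/5 from the sketches above. The predicate express_intensity/4 is only an illustrative placeholder, not the intensity formula of our implementation.
% L1 displays 'anger' towards L2 with intensity Epsilon in situation S.
expresses(L1, L2, anger, Epsilon, S) :-
    social_threat(L1, L2, Theta),
    personality_type(L1, extrovert, _, agreeable, Gamma),
    angry_at(L1, L2, _Action, Delta, S),
    express_intensity(Delta, Theta, Gamma, Epsilon),
    Epsilon > 0.                       % otherwise the anger is suppressed
% Placeholder: threat and friendliness damp the displayed anger,
% and values above the maximal level 5 are cut off.
express_intensity(Delta, Theta, Gamma, Epsilon) :-
    friendliness(Gamma, F),
    E0 is Delta - Theta - F,
    (E0 > 5 -> Epsilon = 5 ; Epsilon = E0).
friendliness(Gamma, Gamma) :- Gamma > 0, !.   % only friendliness damps
friendliness(_, 0).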
As shown in the second rule in Fig. 2, an agent might
even express happiness about something which-the agent
believes-distresses another agent. Observe that here, the
agent has to reason about the emotions of another agent.
Currently, we employ two mechanisms to model the ap-
Emotion expression 'anger': agent L1 displays
expression 'anger' towards L2 with intensity ε if
the social threat for L1 from L2 is θ
and L1's agreeableness has degree γ
and L1 is angry at L2 with intensity δ.
Emotion expression 'happiness': agent L1 displays
expression 'happiness' towards L2 with intensity ε if
the social threat for L1 from L2 is θ
and L1's agreeableness has degree γ
and L1 gloats at L2 with intensity δ.
Emotion expression 'happiness': agent L1 displays
expression 'happiness' towards L2 with intensity ε if
the social threat for L1 from L2 is θ
and L1's agreeableness has degree γ
and L1 is joyful with intensity δ.
Figure 2: Some examples of social filter rules.
praisal of another agent. If the observing agent has beliefs
about the observed agent's mental states and their desirabil-
ity, the agent infers the emotional state of the other agent
by using its emotion rules. Else, the observing agent uses
the other agent's perceived emotion, communicated via the
com_act/6 representation discussed in Section 3, to assess
the other agent's emotion.
The third rule in Fig. 2 demonstrates the effect of personality
and social context on 'positive' emotions. We compute
the intensity of positive emotions from δ, θ, and γ as well. As a con-
sequence, the agent's unfriendliness or a high social threat
will diminish the expression of positive emotions. E.g., if a
very happy but rather unfriendly agent
communicates with a slightly distant agent,
the agent will express happiness with rather low intensity.
Finally, notice an interesting consequence of our frame-
work. Since we clearly distinguish between emotional state
and expression of emotion, we may add another possibility of
an agent's misinterpretation of other agents' behavior. First,
an agent never has direct access to others mental states,
it can only have (possibly false) beliefs about their mental
(e.g., emotional) states. Second, our distinction allows that
agents cheat in their behavior by expressing a misleading
emotion. E.g., an agent may express a sad emotion, pretending
to be in a distressed emotional state, although it is
in a 'happy' state. This option is required for entertainment
purposes, where 'levels of indirection' are beneficial.
5.3 Violations of Conventional Practices
Despite the fact that obedience to conventional practices
is expected in real-world socio-organizational settings, violations
of conventional practices occur, and in particular, they
seem to be dramatically more interesting. What happens if
a manager requests something from his or her secretary and
the secretary refuses to follow the order? Consider the following
conversation fragment:
Manager (to secretary): I quickly need a copy of this.
Secretary (to manager): Sorry, I am busy right now.
Here, the secretary violates conventional practices by ignoring
the manager's indirectly formulated order. This situation
typically triggers a negotiation process where the agent
with higher social role makes his or her request more explicit,
or even directly refers to his or her role and associated
rights, e.g., the power to request tasks from subordinates
(Moulin [21]). From an emotion expression point of view,
the manager agent will typically be in an 'angry at' (the sec-
retary) emotional state and may express its anger emotion
due to its higher rank on the power scale. The crucial belief
in the manager agent's mental model is, e.g.,
blameworthy(manager, conventional_practice_violated, 4)
i.e., managers (typically) consider it as blameworthy if their
higher rank is ignored by employees. On the other hand,
we might imagine a manager with friendly personality traits
who is distressed about the secretary's behavior, and expresses
the emotion 'sadness'. Another possibility is that
the manager shows 'neutral' emotion expression, if manager
and secretary are old friends and, according to the particular organizational
culture, typical conventional practices are not
applicable.
6. ROLE-PLAYING ON THE WEB
Our interactive environment for English conversation training
for Japanese speakers assumes that users (language stu-
dents) would enjoy getting involved in a role-play with animated
character agents, and thereby overcome their uneasiness
about conversing in a foreign language. Inspired by the
examples of Rousseau and Hayes-Roth [28], we implemented
an interactive theater (or drama) that offers the role of a customer
in a virtual coffee shop. An interactive role-playing
game employing animated agents, the Wumpus Game, is
currently under development.
6.1 Implementation
The programmable interface of the Microsoft agent package
[19] is used to run a virtual coffee shop session in a web
browser (Internet Explorer 5). This choice imposed some serious
restrictions from the outset: the characters available for this
package have only a limited number of behaviors ('anima-
tions'), limiting the realization of various emotions as well
as some features of embodied conversational behavior (Cas-
sell and Thórisson [4]). However, our goal is believability on
the level of adequate emotion expression rather than life-likeness
(in the sense of realistic behavior). The Microsoft
Agent package provides controls to embed animated characters
into a web page via a JavaScript interface, and includes
a voice recognizer and a text-to-speech engine. Prolog programs
implement all reasoning related to conversation and
environmental management and agents' mental models (af-
fective and social reasoning). We use Jinni (Java Inference
engine and Networked Interactor) to communicate between
Prolog code and the Java objects that control the agents
through JavaScript code (BinNet Corp. [3]).
In a role-playing session, the user can promote the development
of the conversation by uttering one of a set of pre-defined
sentences that are displayed on the screen. Unlike
the setup of Hayes-Roth's `Virtual Theater Project' [28], the
emotion type 'angry at' in situation s1:
holds(did(order_beer, customer), s1).
causes(order_beer, regulation_violated, s0).
blameworthy(james, order_beer, 4).
wants(james, regulation_respected, 3, s1).
emotion expression 'anger' in situation s1:
personality_type(james, extrovert, -2, agreeable, -3).
social_power(customer, james, 0).
social_distance(james, customer, 0).
emotion type 'angry at' in situation s5:
holds(did(refuse_vacation, manager), s5).
causes(refuse_vacation, no_vacation, s4).
blameworthy(james, refuse_vacation, 3).
wants(james, get_vacation, 5, s5).
emotion expression 'neutral' in situation s5:
social_power(manager, james, 3).
social_distance(james, manager, 2).
Figure 3: Some facts in the waiter agent's mental model for the first example run.
user does not need an avatar in the play. Animated agents
will respond by synthetic speech, facial display, and ges-
tures. Verbal and non-verbal behavior is synthesized in the
agent's mental model and interpreted in the browser. The
parameters for speech output are set in accordance with the
vocal effects associated with the basic emotions [6, 23]. Of
course, the facial display of characters is limited to the pre-defined
'animations' from the Agent package (e.g., `pleased',
'sad'). To some extent, we also implemented conversational
behavior (Cassell and Th-orisson [4]). E.g., the animations
'confused' (lifting shoulders) and `don't-recognize' (put hand
to ear) are used if the user's utterance is not recognized.
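For concreteness, the mapping from an emotion expression to output parameters could be tabulated as below; the happiness entry follows the vocal effects cited in Section 4.1, while the other values and predicate names are illustrative assumptions.
% speech_params(EmotionExpression, Rate, Pitch, Volume).
speech_params(happiness, faster, higher, slightly_louder).   % cf. [6, 23]
speech_params(neutral,   normal, normal, normal).            % assumption
% animation(EmotionExpression, MsAgentAnimation).
animation(happiness, pleased).
animation(sadness,   sad).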
6.2 Example Runs
We will illustrate our system by showing some example
runs. In the first example, the user takes the role of a
(friendly) customer who interacts with an unfriendly waiter
agent James, who himself interacts with a friendly manager
agent as an employee. Fig. 3 describes some relevant facts
stored in the waiter agent's brain. For the rule part, the
reader is referred to Figures 1 and 2. The following is an
annotated trace from our conversation system.
[s0] Customer: I would like to drink a beer. [User can also
choose other beverages, and for each, he or she may select
the linguistic style (polite, neutral, rude).]
James (to customer): No way, this is a coffee shop.
[Considers it as blameworthy to be asked for alcohol and
shows his anger. We assume that the waiter ignores the
social threat from the customer.]
[The manager of the coffee shop appears.]
[s3] James (to manager): Good afternoon, boss. May I take
a day off tomorrow? [Welcome gesture. Following conventional
practices, the waiter is polite to his manager.]
[s4] Manager: It will be a busy day. So I kindly ask you
to come. [Uses polite linguistic style in accordance with his
personality traits.]
[s5] Waiter: Ok, I will be there. [Considers it as blameworthy
to be denied a vacation and is angry. However, he is
aware of the social threat and thus does not show his anger.
Instead, he shows neutral emotion expression.]
The communicative act of the customer has the form
com_act(customer, james, order_beer, polite, neutral, s0)
Since the animated agents do not understand English, a library
is used to associate the user's utterance with an 'ef-
fect', e.g., the regulations of the coffee shop are violated,
and an evaluation of its linguistic style, such as polite, rude
or neutral. Moreover, as an emotion (expression) recognition
module is not part of our system, we set 'neutral' as
the default value for user input (but see the work of Picard
[26]). The waiter's answer is formalized as
com_act(james, customer, refuse_beer, rude, anger, s1)
Similarly, the library is employed to generate the syntactic
form of the animated agent's response. As described in
Section 3, the environmental manager simulates the envi-
ronment. In this example, it includes the fact act(manager,
appears, s2) which triggers the waiter's reaction in situation
s3. In accordance with the contents of James' mental model
and our rules for affective and social reasoning, the waiter
agent expresses its anger towards the customer (user), but
not towards its manager.
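A fragment of such a library might look as follows; the canned utterances mirror the example dialogues, the default 'neutral' emotion follows the description above, and the predicate names are illustrative.
% utterance_effect(CannedUtterance, Concept, LinguisticStyle).
utterance_effect('I would like to drink a beer.', order_beer, polite).
utterance_effect('Bring me a beer, right away.',  order_beer, rude).
% Turn a selected utterance into the communicative act stored in the
% shared knowledge base (the user's emotion defaults to neutral).
user_com_act(User, Addressee, Utterance, Sit,
             com_act(User, Addressee, Concept, LS, neutral, Sit)) :-
    utterance_effect(Utterance, Concept, LS).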
The second example run is a variation of the previous
example where we assume an extrovert, friendly waiter who
respects conventional practices towards customers but not
towards his indifferent manager.
James: Welcome to our coffee shop! May I take your order?
[Starts the conversation because of his extrovert personality.]
Customer: Bring me a beer, right away. [User chooses
rude linguistic style.]
James (to customer): I am sorry but I am not allowed to
serve alcoholic beverages here. [Concludes that the customer
is distressed and feels sorry for the customer.]
[s3] [The manager of the coffee shop appears.]
James (to manager): Good to see you, boss. Tomorrow
I want to take a day off. [Performs welcome gesture.]
[s5] Manager: Actually, I need you tomorrow. Thank you.
[Uses neutral linguistic style for his request.]
James: Too bad for you. I will not be here. [Waiter is
angry as the manager refuses to give him a vacation. Since
the waiter does not respect conventional practices, he expresses
his anger and refuses to obey the manager's order.]
In situation s3, we assume James to believe that the customer
wants a beer (urgently) and is distressed as a consequence
of the waiter's refusal. James' agreeableness is
responsible for feeling sorry about having to refuse the cus-
tomer's order. However, the linguistic style of James' response
is slightly lower in its politeness, since the customer
approaches the waiter in a rude way (a rudimentary form of
a reciprocal feedback loop).
There are limitless ways to vary social encounters between
agents, even in the restricted context of a coffee shop en-
vironment. Consider, for instance, a situation where an
unfriendly waiter shows rude behavior towards a customer,
who turns out to be the waiter's new manager. A 'socially
robust' waiter agent will show a form of `behavior switching'
(assuming that the agent respects conventional practices).
6.3 User Feedback
We conducted a rather preliminary experiment on the impact
of agents featuring social role awareness. As in the
example runs of the previous section, users would play the
role of a customer in a coffee shop and interact with an animated
agent portraying a waiter. The waiter agent interacts
with a manager agent, a fellow waiter agent or a customer
agent. Although our general goal is to employ the animated
agents approach to language conversation training, the focus
here is to show that (i) users can recognize whether the
agents behave according to their social roles, and (ii) the
animated agents' responses are believable. Five users were
asked to rate the appropriateness of the agents' responses.
Furthermore, we asked them whether they think the agents'
reactions could occur in real-world situations.
As we expected, users could identify the social roles played
by the animated agents, which are easily detected in the coffee
shop environment. Users could also recognize when conventional
practices have been violated. However, answers
varied when asked what they thought went wrong in case
of violation: e.g., that the agent is in a bad mood, or
does not like the boss (or the customer agent). When we ran
the experiment with agents that only reason about emotion
and personality (i.e., without social role awareness), users
would generally not consider those agents as 'unbelievable',
but they expected to get hints regarding the motivation for
the violation of conventional practices, and appropriate reactions
from the other agents. Our (preliminary) findings
reinforce the belief that an agent should show an overall consistency
in its behavior in order to come across as believable
and that social role awareness facilitates consistency.
7. DISCUSSION
Our work aims to account for an important feature of
human-human communication, namely social role aware-
ness, that seems to have strong influence on our ways of
emotion expression and our behavior in general. Social role
awareness is approached from the viewpoint of the believability
of animated agents. It is shown that this feature of
social interaction may explain phenomena such as suppressing
(the expression) of emotions, as well as other forms of
'cheating' about an agent's emotion. As such, social role
awareness can significantly contribute to the design of dramatically
interesting characters (as in Hayes-Roth et al. [13]
or Rousseau and Hayes-Roth [28]). More recently, sensitivity
to socio-organizational contexts is also pointed out as
a crucial issue in military training simulations (Gratch and
Hill [11], Gratch [10]). Here, interesting conflicts can arise
between an agent's goal (or self-interest) and role-specific
duties imposed by orders from a commander agent.
Although we believe that social role awareness makes animated
agents more 'socially robust', our approach suffers
from several shortcomings. In the following paragraphs, we
discuss work relevant to future refinements of our approach.
Social reasoning and planning. In the emotion model
called
Émile, Gratch [9] interleaves emotional reasoning with
an explicit planning model. There are obvious advantages
of considering an agent's plans, e.g., `prospective' emotional
states such as hope and fear assume reasoning about future
situations and typically induce plan generation or the modification
of current plans. Similarly, social reasoning would
benefit from an explicit representation of plans. Often, an
agent's choice of emotion expression depends on the state of
its current plan: e.g., if an employee agent plans to get fired,
its violation of conventional practices towards its manager
should be seen in the light of this high-level goal, and not be
considered as part of the agent's concept of its social role.
Social action. Besides appropriate emotion expression,
other (possibly more important) behavioral constraints apply
to socio-organizational settings. The role of an agent is
associated with certain responsibilities, rights, duties, pro-
hibitions, and decision power (Moulin [21]). A broader perspective
of social agency will require explicit representations
of those behavioral constraints, as well as formalisations
of social concepts such as 'commitment'. In this respect
we may heavily draw on well-established research work
on multi-agent systems and distributed artificial intelligence
(e.g., Jennings [15], Castelfranchi [5]).
Social communication. As mentioned throughout the pa-
per, an obvious weakness of our approach is that we do
not provide an explicit formalization of speech acts. Conse-
quently, all of the dialogue contributions have to be carefully
hand-crafted. In fact, we employed a simplified version of
Moulin and Rousseau's conversation model [21, 22]. In [21],
Moulin introduces a new notation for speech acts that is
tailored to communication in socio-organizational settings,
in particular conversational schemas that allow an agent to
select speech acts in accordance with communicative con-
ventions. In addition, we should consider the linguistic style
strategies discussed by Walker et al. [29]. Those strategies
determine semantic content, syntactic form and acoustical
realization of a speech act, qualified by the social situ-
ation. Application of LS strategies supports social interactions
that allow agents to maintain public face (i.e., autonomy
and approval). If speaker and hearer have equal social
rank, 'direct' strategies can be applied (e.g., "Bring me a
beer!"). On the other extreme, if the rank distance is very
large, 'off record' strategies are chosen (e.g., "Someone has
not brought me a beer.").
8. CONCLUSION
In this paper, we propose to integrate social reasoning into
mental models of animated agents, in addition to an affective
reasoning component. The novel aspect of our work is that
we explicate the social role of agents and associated constraints
on emotion expression, which allows for enhanced
believability of animated characters beyond reasoning about
emotion and personality. We believe that considering the social
dimension in animated agents approaches adds value for
the following reasons:
- Believability. It may increase the illusion of life, which
is often captured by emotion and personality only.
- Social Communication. By respecting an important
feature of human conversation, it adds 'social robust-
ness' to agent-human and inter-agent communication.
- Explanatory power. It explains the frequent mismatch
between the output of emotional reasoning (the emotional
state) and emotion expression.
We have described a web-based interactive drama scenario
featuring animated agents as an entertaining testbed to experiment
with new capabilities of agents. By considering
the issues described in the discussion section, we hope to
gain a better understanding of the social dimension in communication.
9. ACKNOWLEDGMENTS
We would like to thank the anonymous referees for their
very helpful and detailed comments. The first author was
supported by a grant from the Japan Society for the Promotion
of Science (JSPS).
10. REFERENCES
--R
The automated design of believable dialogue for animated presentation teams.
The role of emotion in believable agents.
BinNet Corp.
The power of a nod and a glance: Envelope vs. emotional feedback in animated conversational agents.
Modeling social action for AI agents.
An argument for basic emotions.
Socially situated planning.
Continuous planning and collaboration for command and control in joint synthetic battlespaces.
Acting in character.
MPML: A multimodal presentation markup language with character control functions.
Commitments and conventions: The foundation of coordination in multi-agent systems
Animated pedagogical agents: Face-to-face interaction in interactive learning environments
Achieving a
A Philosophical Study.
The social dimension of interactions in multiagent systems.
An approach for modeling and simulating conversations.
Implementation and testing of a system for producing emotion-by-rule in synthetic speech
Explaining emotions.
The Cognitive Structure of Emotions.
Carrying the role-playing metaphor to interactive learning environments
A social-psychological model for synthetic actors
Improvising linguistic style: Social and a
--TR
The affective reasoner
The role of emotion in believable agents
Implementation and testing of a system for producing emotion-by-rule in synthetic speech
Affective computing
Improvising linguistic style
Developing for Microsoft Agent
A social-psychological model for synthetic actors
Requirements for an architecture for believable social agents
Émile
The automated design of believable dialogues for animated presentation teams
Personality Parameters and Programs
Acting in Character
The Social Dimension of Interactions in Multiagent Systems
--CTR
Patrick Gebhard , Michael Kipp , Martin Klesen , Thomas Rist, Authoring scenes for adaptive, interactive performances, Proceedings of the second international joint conference on Autonomous agents and multiagent systems, July 14-18, 2003, Melbourne, Australia
Matthias Rehm , Elisabeth André, Catch me if you can: exploring lying agents in social settings, Proceedings of the fourth international joint conference on Autonomous agents and multiagent systems, July 25-29, 2005, The Netherlands
Jonathan Gratch , Jeff Rickel , Elisabeth André , Justine Cassell , Eric Petajan , Norman Badler, Creating Interactive Virtual Humans: Some Assembly Required, IEEE Intelligent Systems, v.17 n.4, p.54-63, July 2002
Han Noot , Zsófia Ruttkay, Variations in gesturing and speech by GESTYLE, International Journal of Human-Computer Studies, v.62 n.2, p.211-229, February 2005
Robert C. Hubal , Geoffrey A. Frank , Curry I. Guinn, Lessons learned in modeling schizophrenic and depressed responsive virtual humans for training, Proceedings of the 8th international conference on Intelligent user interfaces, January 12-15, 2003, Miami, Florida, USA
Zsófia Ruttkay , Claire Dormann , Han Noot, Embodied conversational agents on a common ground, From brows to trust: evaluating embodied conversational agents, Kluwer Academic Publishers, Norwell, MA, 2004
Mitsuru Ishizuka , Helmut Prendinger, Describing and generating multimodal contents featuring affective lifelike agents with MPML, New Generation Computing, v.24 n.2, p.97-128, January 2006
Catherine Pelachaud , Isabella Poggi, Multimodal embodied agents, The Knowledge Engineering Review, v.17 n.2, p.181-196, June 2002 | social dimension in communication;believability;human-like qualities of synthetic agents;affective reasoning and emotion expression;social agents |
376477 | A knowledge level software engineering methodology for agent oriented programming. | Our goal in this paper is to introduce and motivate a methodology, called \emph{Tropos}, for building agent oriented software systems. Tropos is based on two key ideas. First, the notion of agent and all the related mentalistic notions (for instance: beliefs, goals, actions and plans) are used in all phases of software development, from the early analysis down to the actual implementation. Second, Tropos covers also the very early phases of requirements analysis, thus allowing for a deeper understanding of the environment where the software must operate, and of the kind of interactions that should occur between software and human agents. The methodology is illustrated with the help of a case study. | INTRODUCTION
Agent oriented programming (AOP, from now on) is most
often motivated by the need of open architectures that continuously
change and evolve to accommodate new components
and meet new requirements. More and more, software
must operate on different platforms, without recompilation,
and with minimal assumptions about its operating environment
and its users. It must be robust, autonomous and
proactive. Examples of applications where AOP seems most
suited and which are most quoted in the literature [15] are
electronic commerce, enterprise resource planning, air-traffic
control systems, personal digital assistants, or book travel
arrangements, and so on.
To qualify as an agent, a software or hardware system is
often required to have properties such as autonomy, social
ability, reactivity, proactivity. Other attributes which are
sometimes requested are mobility, veracity, rationality, and
so on. The key feature which makes it possible to implement
systems with the above properties is that, in this paradigm,
programming is done at a very abstract level, more precisely,
following Newell, at the knowledge level [13]. Thus, in AOP,
we talk of mental states, of beliefs instead of machine states,
of plans and actions instead of programs, of communication,
negotiation and social ability instead of interaction and I/O
functionalities, of goals, desires, and so on. Mental notions
provide, at least in part, the software with the extra flexibility
needed in order to deal with the complexity intrinsic
in the applications mentioned in the first paragraph. The
explicit representation and manipulation of goals and plans
allows, for instance, for a run-time "adjustment" of the system
behavior needed in order to cope with unforeseen cir-
cumstances, or for a more meaningful interaction with other
human and software agents. 1
We are defining a software development methodology, called
Tropos, which will allow us to exploit all the flexibility
provided by AOP. In a nutshell, the two key and novel features
of Tropos are the following:
1. The notion of agent and all the related mentalistic notions
are used in all phases of software development,
from the first phases of early analysis down to the
actual implementation. In particular our target implementation
agent language and system is JACK [3],
an agent programming platform, based on the BDI
(Beliefs-Desires-Intentions) agent architecture.
2. A crucial role is given to the earlier analysis of requirements
that precedes prescriptive requirements specifi-
cation. We consider therefore much earlier phases than
the phases supported in, for instance, OOP software
engineering methodologies. Examples are the
1 AOP is often introduced as a specialization or as a "natu-
ral development" of Object Oriented Programming (OOP),
see for instance [14, 11, 15]. In our opinion, the step from
OOP to AOP is more a paradigm shift than a simple spe-
cialization. Also those features of AOP which can be found
in OOP languages, for instance, mobility and inheritance,
take in this context a different and more abstract meaning.
methodologies based on UML [2] where use case analysis
is proposed as an early activity, followed by architectural
design. As described in detail below, this
move is crucial in order to achieve our objectives.
Our goal in this paper is to introduce and motivate the
Tropos methodology, in all its phases. The presentation
is carried out with the help of a running example. The
example considered is a fragment of a substantial software
system (which, in its full implementation, requires several
man-years of work) developed for the government of
Trentino (Provincia Autonoma di Trento, or PAT). The system
(which we will call throughout the eCulture system) is a
web-based broker of cultural information and services for the
province of Trentino, including information obtained from
museums, exhibitions, and other cultural organizations and
events. It is the government's intention that the system be
usable by a variety of users, including Trentinos and tourists
looking for things to do, or scholars and students looking for
material relevant to their studies.
The paper is structured as follows. Section 2 introduces
the five basic steps of the Tropos methodology, namely, early
requirement analysis, late requirements analysis, architectural
design, detailed design, and implementation. The five
Tropos phases are then described, as applied in the context
of the eCulture system example, in Sections 3, 4, 5, 6 and 7.
The conclusions are presented in section 8.
This paper follows on two previous papers, [12] and [4],
which provide some motivations behind the Tropos project,
and an early glimpse of how the methodology works. With
respect to these earlier papers much more emphasis has been
put on the issue of developing knowledge level specifications.
2. THE TROPOS METHODOLOGY: AN OVERVIEW
Tropos is intended to support five phases of software development:
- Early requirements, concerned with the understanding
of a problem by studying an existing organizational
setting; the output of this phase is an organizational
model which includes relevant actors and their respective
dependencies. Actors, in the organizational set-
ting, are characterized by having goals that, in iso-
lation, they would be unable to achieve; the goals are
achievable in virtue of reciprocal means-end knowledge
and dependencies [19].
- Late requirements, where the system-to-be is described
within its operational environment, along with relevant
functions and qualities; this description models
the system as a (small) number of actors, which have
a number of social dependencies with other actors in
their environment.
- Architectural design, where the system's global architecture
is defined in terms of subsystems, interconnected
through data and control flows; in our frame-
work, subsystems are represented as actors while data
and control interconnections correspond to actor de-
pendencies. In this step we specify actor capabilities
and agent types (where agents are special kinds of
actors, see below). This phase ends up with the specification
of the system agents.
- Detailed design, where each agent of the system architecture
is defined in further detail in terms of internal
and external events, plans and beliefs and agent communication
protocols.
- Implementation, where the actual implementation of
the system is carried out in JACK, consistently with
the detailed design.
The idea of paying attention to the activities that precede
the specification of the prescriptive requirements, such as
understanding how the intended system would meet the organizational
goals, is not new. It was first proposed in the
requirements engineering literature (see for instance [7, 18]).
In particular we adapt ideas from Eric Yu's model for requirements
engineering, called i *, which offers actors, goals
and actor dependencies as primitive concepts [18]. 2 The
main motivation underlying this earlier work was to develop
a richer conceptual framework for modeling processes which
involve multiple participants (both humans and computers).
The goal was to have a more systematic reengineering of pro-
cesses. One of the main advantages is that, by doing this
kind of analysis, one can also capture not only the what or
the how but also the why a piece of software is developed.
This, in turn, allows for a more refined analysis of the system
dependencies and, in particular, for a much better and
uniform treatment not only of the system's functional requirements
but also of the non-functional requirements (the
latter being usually very hard to deal with).
Neither Yu's work, nor, as far as we know, any of the
previous work in requirements analysis was developed with
AOP in mind. The application of these ideas to AOP, and
the decision to use mentalistic notions in all the phases of
analysis, has important consequences. When writing agent
oriented specifications and programs one uses the same notions
and abstractions used to describe the behavior of the
human agents, and the processes involving them. The conceptual
gap from what the system must do and why, and
what the users interacting with it must do and why, is reduced
to a minimum, thus providing (part of) the extra
flexibility needed to cope with the complexity intrinsic in
the applications mentioned in the introduction.
Indeed, the software engineering methodologies and specification
languages developed in order to support OOP essentially
support only the phases from the architectural design
downwards. At that moment, any connection between the
intentions of the different (human and software) agents cannot
be explicitly specified. By using UML, for instance, the
software engineer can start with the use case analysis (possi-
bly refined by developing some activity diagrams) and then
moves to the architectural design. Here, the engineer can
do static analysis using class diagrams, or dynamic analysis
using, for instance, sequence or interaction diagrams. The
target is to get to the detail of the level of abstraction allowed
by the actual classes, methods and attributes used
to implement the system. However, applying this approach
and the related diagrams to AOP misses most of the advantages
coming from the fact that in AOP one writes programs
at the knowledge level. It forces the programmer to translate
goals and the other mentalistic notions into software level
notions, for instance the classes, attributes and methods of
2 i* has been applied in various application areas, including
requirements engineering [17], business process reengineering
[21], and software modeling processes [20].
Figure 1: An actor diagram specifying the stakeholders
of the eCulture project and their main goal
dependencies.
class diagrams. The consequent negative effect is that the
former notions must be reintroduced in the programming
phase, for instance when writing JACK code: the programmer
must program goals, beliefs, and plans, having lost the
connection with the original mentalistic notions used in the
early and late requirements. The work on AUML [1, 10],
though relevant in that it provides a first mapping from
OOP to AOP specifications, is an example of work suffering
from this kind of problem.
In the following sections we present the five Tropos phases
as applied in the context of the eCulture system example.
3. EARLY REQUIREMENTS
During early requirements analysis, the requirements engineer
models and analyzes the intentions of the stakehold-
ers. Following i *, in Tropos the stakeholders' intentions
are modeled as goals which, through some form of a goal-oriented
analysis, eventually lead to the functional and non-functional
requirements of the system-to-be. Early requirements
are assumed to involve social actors who depend on
each other for goals to be achieved, tasks to be performed,
and resources to be furnished. Tropos includes actor diagrams
for describing the network of social dependency relationships
among actors, as well as rationale diagrams for
analyzing and trying to fulfill goals through a means-ends
analysis. 3 These primitives are formalized using intentional
concepts from AI, such as goal, belief, ability, and commitment.
An actor diagram is a graph, where each node represents
an actor, and a link between two actors indicates that one
actor depends, for some reason, on the other in order to attain
some goal. We call the depending actor the depender
and the actor who is depended upon the dependee. The
object around which the dependency centers is called the
3 In i* actor diagrams are called strategic dependency mod-
els, while rationale diagrams are called strategic rationale
models.
Figure 2: A rationale diagram for PAT. The rectangular
box added to a dependency models a resource
dependency.
dependum (see, e.g., Figure 1). By depending on another
actor for a dependum, an actor is able to achieve goals that
it would otherwise be unable to achieve on its own, or not
as easily, or not as well. At the same time, the depender
becomes vulnerable. If the dependee fails to deliver the de-
pendum, the depender would be adversely affected in its
ability to achieve its goals.
In our eCulture example we can start by informally listing
(some of) the stakeholders:
- Provincia Autonoma di Trento (PAT), that is the government
agency funding the project; their objectives
include improving public information services, increasing
tourism through new information services, and encouraging
Internet use within the province.
- Museums, that are cultural information providers for
their respective collections; museums want government
funds to build/improve their cultural information ser-
vices, and are willing to interface their systems with
the eCulture system.
- Visitors, who want to access cultural information before
or during their visit to Trentino to make their visit
interesting and/or pleasant.
- (Trentino) Citizens, who want easily accessible infor-
mation, of any sort.
These stakeholders correspond to actors in an actor dia-
gram. Notice that citizens and visitors correspond to (hu-
man) agents while this is not the case for the other two stake-
holders. Museums and PAT correspond, rather, to roles. An
actor is an agent, a role, or a position, depending on whether the actor is a well-identified (human or software) entity (an agent), a function that can be played by an agent (a role), or a set of roles usually played by a single agent (a position).
Figure 1 shows the actors involved in the eCulture project
and their respective goals. In particular, PAT is associated
with a single relevant goal: increase internet use, while
Visitor and Museum have associated softgoals, enjoy visit
and provide cultural services respectively. Softgoals are
distinguished from goals because they don't have a formal
definition, and are amenable to a different (more qualita-
tive) kind of analysis (see [5] for a detailed description of
softgoals). Citizen wants to get cultural information and
depends on PAT to fulfill the softgoal taxes well spent, a
high-level goal that motivates more specific responsibilities of PAT, namely to provide an Internet infrastructure, to deliver the eCulture system, and to make it usable.
The early requirements analysis goes on extending the
actor diagram by incrementally adding more specific actor
dependencies which emerge from a means-ends analysis of each goal. We specify this analysis using rationale diagrams.
Figure 2 depicts a fragment of one such diagram,
obtained by exploding part of the diagram in Figure 1,
where the perspective of PAT is modeled. The diagram appears
as a balloon within which PAT's goals are analyzed
and dependencies with other actors are established. This
example is intended to illustrate how means-ends analysis is
conducted. Throughout, the idea is that goals are decomposed
into subgoals and positive/negative contributions of
subgoals to goals are specified. Thus, in Figure 2, the goals
increase internet use and eCulture system available
are both well served by the goal build eCulture System.
The (high level) softgoal taxes well spent gets two positive
contributions, which can be thought as justifications
for the selection of particular dependencies. The final result
of this phase is a set of strategic dependencies among
actors, built incrementally by performing means-ends analysis
on each goal, until all goals have been analyzed. The later a goal is added, the more specific it is. For instance, in
the example in Figure 2 PAT's goal build eCulture system
is introduced last and, therefore, has no subgoals and it is
motivated by the higher level goals it fulfills. 4
4. LATE REQUIREMENTS
During late requirement analysis the system-to-be (the
eCulture System in our example) is described within its operating
environment, along with relevant functions and qual-
ities. The system is represented as one or more actors which
have a number of dependencies with the actors in their en-
vironment. These dependencies define all functional and
non-functional requirements for the system-to-be.
Figure 3 illustrates the late requirements actor diagram
where the eCulture System actor has been introduced. The
PAT depends on it to provide eCultural services, one of
the PAT's subgoals discovered during the means-end analysis
depicted in Figure 2. The softgoal usable eCulture
system, for which Citizen depends on PAT (see Figure 1),
has been delegated by PAT to the eCulture system. More-
over, the eCulture System is expected to fulfill other PAT
softgoals such as extensible eCulture system, flexible eCulture system, and use internet technology.
4 In rationale diagrams one can also introduce tasks and resources and connect them to the fulfillment of goals.
Figure 3: A fragment of the actor diagram including the PAT and the eCulture System actors; the rationale diagram for the eCulture System is detailed within the balloon.
The balloon
in Figure 3 shows how two of the PAT's dependums can
be further analyzed from the point of view of the eCulture
System. The goal provide eCultural services is decomposed
(AND decomposition) into four subgoals: make re-
servation, provide info, educational services and virtual
visit that can be further specified along a subgoal
hierarchy. For instance, the types of information that the
system has to provide are both logistical (timetables and visiting
instructions for museums), and cultural (for instance,
cultural content of museums and special cultural events).
The rationale diagram also includes a softgoal analysis. The usable eCulture system softgoal has two positive (+) contributions from user friendly eCulture system and available eCulture system. The latter softgoal in turn
specifies the following three basic non-functional require-
ments: system portability, scalability, and availability over
time.
Starting from this analysis, the system-to-be actor can
be decomposed into sub-actors that take on the responsibility
of fulfilling one or more goals of the system. Figure 4
shows the resulting eCulture System actor diagram: the
eCulture System depends on the Info Broker to provide
info, on the Educational Broker to provide educational
services, on the Reservation Broker to make reserva-
tion, on the Virtual Visit Broker to provide virtual visit, and on the System Manager to provide interface.
Figure 4: The system actor diagram. Sub-actor decomposition for the eCulture System.
Furthermore, each sub-actor can be further decomposed into sub-actors responsible for the fulfillment of one or more subgoals.
At this point of the analysis we can look into the actor diagram
for a direct dependency between the Citizen, which
plays the role of system user, and the eCulture System. In
other words we can now see how the former Citizen's goal
get cultural information can be fulfilled by the current
eCulture System. The rationale diagram of this goal dependency (see Figure 5) provides a sort of use-case analysis [9].
5. ARCHITECTURAL DESIGN
The architectural design phase consists of three steps:
1. refining the system actor diagram
2. identifying capabilities and
3. assigning them to agents.
In the first step the system actor diagram is extended according
to design patterns [8] that provide solutions to heterogeneous
agents communication and to non-functional re-
quirements. 5 Figure 6 shows the extended actor diagram
with respect to the Info Broker. 6 The User Interface
Manager and the Sources Interface Manager are responsible
for interfacing the system to the external actors Citizen
and Museum respectively.
The second step consists in capturing actor capabilities
from the analysis of the tasks that actors and sub-actors will
carry out in order to fulfill functional requirements (goals).
A capability is the set of events, plans, and beliefs necessary for the fulfillment of actor goals.
5 In this step, design patterns for agent systems are mapped to actor diagrams.
6 For the sake of readability we do not show all the actors needed to take into account other non-functional requirements, e.g., system extensibility and user friendliness.
Figure 5: Rationale diagram for the goal get cultural information. Hexagonal shapes model tasks. Task decomposition links model task-subtask relationships. Goal-task links are a type of means-ends links.
Figure 7 shows
an example for the Info Broker actor analysis, with respect
to the goal of searching information by topic area.
The Info Broker is decomposed into three sub-actors: the
Area Classifier, the Results Synthesizer, and the Info
Searcher. The Area Classifier is responsible for the classification
of the information provided by the user. It depends
on the User Interface Manager for the goal interfacing
to users. The Info Searcher depends on the Area Classifier
to have (thematic) area information that the user
is interested in, and depends on the Sources Interface
Manager for the goal interfacing to sources (the Museum).
The Results Synthesizer depends on the Info Searcher
for the information concerning the pending query (query
information) and on the Museum to have the query results.
Figure 6: Extended actor diagram, Info Broker.
Figure 7: Actor diagram for capability analysis, Info Broker.
Capabilities can be easily identified by analyzing the diagram
in Figure 7. In particular, each dependency relationship gives rise to one or more capabilities triggered by external events. Table 1 lists the capabilities associated with the extended actor diagram of Figure 7. They are listed with respect to the system-to-be actors and numbered so as to eliminate possible duplicates.
Actor Name            N   Capability
Area Classifier       1   get area specification form
                      2   classify area
                      3   provide area information
                      4   provide service description
Info Searcher         5   get area information
                      6   find information source
                      7   compose query
                      8   query source
                      9   provide query information
                      4   provide service description
Results Synthesizer   10  get query information
                      11  get query results
                      12  provide query results
                      13  synthesize area query results
                      4   provide service description
Sources Interface     14  wrap information source
Manager               4   provide service description
User Interface        15  get user specification
Manager               16  provide user specification
                      17  get query results
                      18  present query results to the user
                      4   provide service description
Table 1: Actor capabilities
Agent Capabilities
Query Handler 1, 3, 4, 5, 7, 8, 9, 10, 11, 12
Classifier 2, 4
Searcher 6, 4
Synthesizer 13, 4
Wrapper 14, 4
User Interface Agent 15, 16, 17, 18, 4
Table 2: Agent types and their capabilities
The last step of the architectural design consists in defining
a set of agent types and in assigning to each agent one
or more different capabilities (agent assignment). Table 2
reports the agent assignment with respect to the capabilities listed in Table 1. The capabilities concern exclusively the task search by area assigned to the Info Broker. Of course, many other capabilities and agent types are needed if we consider all the goals and tasks associated with the complete extended actor diagram.
In general, the agent assignment is not unique and depends on the designer. The number of agents and the capabilities assigned to each of them are choices driven by the analysis of the extended actor diagram and by the way in which the designer thinks of the system in terms of agents.
Some of the activities done in architectural design can be
compared to what Wooldridge et al. propose to do within
the Gaia methodology [16]. For instance, what we do in actor
diagram refinement can be compared to "role modeling"
in Gaia. We, however, also consider non-functional requirements. Similarly, capability analysis can be compared to "protocol modeling", even if in Gaia only external events are considered.
Figure 8: Capability diagram using the AUML activity diagram. Ovals represent plans; arcs represent internal and external events.
6. DETAILED DESIGN
The detailed design phase aims at specifying agent capabilities
and interactions. The specification of capabilities
amounts to modeling external and internal events that trigger
plans and the beliefs involved in agent reasoning. Practical
approaches to this step are often used. 7 In the paper
we adapt a subset of the AUML diagrams proposed in [1].
In particular:
1. Capability diagrams. The AUML activity diagram allows one to model a capability (or a set of correlated capabilities) from the point of view of a specific actor. External events set up the starting state of a
capability diagram, activity nodes model plans, transition
arcs model events, beliefs are modeled as ob-
jects. For instance, Figure 8 depicts the capability diagram
of the query results capability of the User
Interface Agent.
2. Plan diagrams. Each plan node of a capability diagram
can be further specified by AUML action diagrams.
3. Agent interaction diagrams. Here AUML sequence diagrams
can be exploited. In AUML sequence diagrams,
agents correspond to objects, whose life-line is independent of the specific interaction to be modeled
(in UML an object can be created or destroyed during
the interaction); communication acts between agents
correspond to asynchronous message arcs. It can be
shown that sequence diagrams modeling Agent Inter-action
Protocols, proposed by [10], can be straightforwardly
applied to our example.
7 For instance, the Data-Event-Plan diagram used by JACK developers (Ralph Rönnquist, personal communication).
7. IMPLEMENTATION USING A BDI ARCHITECTURE
The BDI platform chosen for the implementation is JACK
Intelligent Agents, an agent-oriented development environment
built on top of, and fully integrated with, Java. Agents in
JACK are autonomous software components that have explicit
goals (desires) to achieve or events to handle. Agents
are programmed with a set of plans in order to make them
capable of achieving goals.
The implementation activity follows step by step, in a natural
way, the detailed design specification described in section
6. In fact, the notions introduced in that section have a
direct correspondence with the following JACK constructs,
as explained below:
• Agent. JACK's agent construct is used to define the behavior of an intelligent software agent. This includes the capabilities an agent has, the types of messages and events it responds to, and the plans it uses to achieve its goals.
• Capability. JACK's capability construct can include plans, events, beliefs, and other capabilities. An agent can be assigned a number of capabilities. Furthermore, a given capability can be assigned to different agents. JACK's capability construct provides a way of applying reuse concepts.
• Belief. Currently, in Tropos, this concept is used only in the implementation phase, but we are considering moving it up to earlier phases. JACK's database construct provides a generic relational database. A database describes a set of beliefs that the agent can have.
• Event. Internal and external events specified in the detailed design map to JACK's event construct. In JACK, an event describes a triggering condition for actions.
• Plan. The plans contained in the capability specification resulting from the detailed design map to JACK's plan construct. In JACK, a plan is a sequence of instructions the agent follows to try to achieve goals and handle designated events.
As an example, the definition for the UserInterface agent,
in JACK code, is as follows:
public agent UserInterface extends Agent {
    #has capability GetQueryResults;
    #has capability ProvideUserSpecification;
    #has capability GetUserSpecification;
    #has capability PresentQueryResults;
    #handles event InformQueryResults;
    #handles event ResultsSet;
}
The capability PresentQueryResults, analyzed in detail in
the previous section (see Figure 8) is defined as follows:
public capability PresentQueryResults extends Capability {
    #handles external event InformQueryResults;
    #posts event ResultsSet;
    #posts event EmptyResultsSet;
    #private database QueryResults();
    #private database ResultsModel();
    #uses plan EvaluateQueryResults;
    #uses plan PresentEmptyResults;
    #uses plan PresentResults;
}
8. CONCLUSIONS
In this paper we have proposed Tropos, a new software
engineering methodology which allows us to exploit the advantages
and the extra flexibility (if compared with other
programming paradigms, for instance OOP) coming from
using AOP. The two main intuitions underlying Tropos are
the pervasive use, in all phases, of knowledge level specifi-
cations, and the idea that one should start from the very
early phase of early requirements specification. This allows
us to create a continuum where one starts with a set of mentalistic
notions (e.g., beliefs, goals, plans), always present in
(the why of) early requirements, and progressively converts them into the actual mentalistic notions implemented in agent-oriented software. This direct mapping from the early requirements down to the actual implementation allows us to develop software architectures which are "well tuned" to the problems they solve and which have, therefore, the extra flexibility
needed in the complex applications mentioned in the
introduction.
Several open points still remain. The most important are:
we should be able to use concepts such as beliefs and events
as early as possible in the Tropos methodology; we should
be able to exploit adaptation and reuse concepts during all
the activities in the development process, as well as to support
an iterative process; we should be able to extend the
Tropos process also to other important activities of software
engineering, such as testing, deployment and maintenance.
9.
ACKNOWLEDGMENTS
The knowledge that Paolo Busetta has of JACK has been
invaluable. Without him this paper would have been much
harder to write. We would also like to thank Ralph Rönnquist
and Manuel Kolp for their helpful comments.
10.
--R
Agent UML: A formalism for specifying multiagent interaction.
The Unified Modeling Language User Guide.
Jack intelligent agents - components for intelligent agents in java
Developing agent-oriented information systems for the enterprise
"goal"
Architectural design patterns for multiagent coordination.
On agent-based software engineering
Tropos: A Framework for Requirements-Driven Software Development
The knowledge level.
Intelligent agents: Theory and practice.
The Gaia methodology for agent-oriented analysis and design
Modeling organizations for information systems requirements engineering.
Modeling Strategic Relationships for Process Reengineering.
"A-R" - modeling strategic actor relationships for business process reengineering
Understanding 'why' in software process modeling
Using goals
--TR
Agent-oriented programming
Goal-directed requirements acquisition
Understanding "why" in software process modelling, analysis, and design
Modelling strategic relationships for process reengineering
The Unified Modeling Language user guide
On agent-based software engineering
The Gaia Methodology for Agent-Oriented Analysis and Design
From E-R to "A-R" - Modelling Strategic Actor Relationships for Business Process Reengineering
--CTR
Jorge J. Gmez-Sanz , Juan Pavn , Francisco Garijo, Meta-models for building multi-agent systems, Proceedings of the 2002 ACM symposium on Applied computing, March 11-14, 2002, Madrid, Spain
Ghassan Beydoun , Cesar Gonzalez-Perez , Graham Low , Brian Henderson-Sellers, Synthesis of a generic MAS metamodel, ACM SIGSOFT Software Engineering Notes, v.30 n.4, July 2005
Haibin Zhu , MengChu Zhou, Methodology first and language second: a way to teach object-oriented programming, Companion of the 18th annual ACM SIGPLAN conference on Object-oriented programming, systems, languages, and applications, October 26-30, 2003, Anaheim, CA, USA
Rubn Fuentes , Jorge J. Gmez-Sanz , Juan Pavn, Integrating agent-oriented methodologies with UML-AT, Proceedings of the fifth international joint conference on Autonomous agents and multiagent systems, May 08-12, 2006, Hakodate, Japan
Haralambos Mouratidis , Paolo Giorgini , Gordon Manson, Modelling secure multiagent systems, Proceedings of the second international joint conference on Autonomous agents and multiagent systems, July 14-18, 2003, Melbourne, Australia
Fausto Giunchiglia , John Mylopoulos , Anna Perini, The tropos software development methodology: processes, models and diagrams, Proceedings of the first international joint conference on Autonomous agents and multiagent systems: part 1, July 15-19, 2002, Bologna, Italy
Franco Zambonelli , Nicholas R. Jennings , Michael Wooldridge, Developing multiagent systems: The Gaia methodology, ACM Transactions on Software Engineering and Methodology (TOSEM), v.12 n.3, p.317-370, July
Anthony Karageorgos , Simon Thompson , Nikolay Mehandjiev, Agent-Based System Design for B2B Electronic Commerce, International Journal of Electronic Commerce, v.7 n.1, p.59-90, Number 1/Fall 2002
Liliana Ardissono , Anna Goy , Giovanna Petrone , Marino Segnan, A multi-agent infrastructure for developing personalized web-based systems, ACM Transactions on Internet Technology (TOIT), v.5 n.1, p.47-69, February 2005
Anthony Karageorgos , Nikolay Mehandjiev , Simon Thompson, RAMASD: a semi-automatic method for designing agent organisations, The Knowledge Engineering Review, v.17 n.4, p.331-358, December 2002
Greg Brown , Betty H. C. Cheng , Heather Goldsby , Ji Zhang, Goal-oriented specification of adaptation requirements engineering in adaptive systems, Proceedings of the 2006 international workshop on Self-adaptation and self-managing systems, May 21-22, 2006, Shanghai, China
Anna Perini , Angelo Susi , Fausto Giunchiglia, Coordination specification in multi-agent systems: from requirements to architecture with the Tropos methodology, Proceedings of the 14th international conference on Software engineering and knowledge engineering, July 15-19, 2002, Ischia, Italy
Paolo Bresciani , Anna Perini , Paolo Giorgini , Fausto Giunchiglia , John Mylopoulos, Tropos: An Agent-Oriented Software Development Methodology, Autonomous Agents and Multi-Agent Systems, v.8 n.3, p.203-236, May 2004
Jaelson Castro , Manuel Kolp , John Mylopoulos, Towards requirements-driven information systems engineering: the Tropos project, Information Systems, v.27 n.6, p.365-389, September 2002
Manuel Kolp , Paolo Giorgini , John Mylopoulos, Multi-Agent Architectures as Organizational Structures, Autonomous Agents and Multi-Agent Systems, v.13 n.1, p.3-25, July 2006 | agent-based software engineering;design methodologies |
376519 | A Maximum-Likelihood Strategy for Directing Attention during Visual Search. | AbstractA precise analysis of an entire image is computationally wasteful if one is interested in finding a target object located in a subregion of the image. A useful attention strategy can reduce the overall computation by carrying out fast but approximate image measurements and using their results to suggest a promising subregion. This paper proposes a maximum-likelihood attention mechanism that does this. The attention mechanism recognizes that objects are made of parts and that parts have different features. It works by proposing object part and image feature pairings which have the highest likelihood of coming from the target. The exact calculation of the likelihood as well as approximations are provided. The attention mechanism is adaptive, that is, its behavior adapts to the statistics of the image features. Experimental results suggest that, on average, the attention mechanism evaluates less than 2 percent of all part-feature pairs before selecting the actual object, showing a significant reduction in the complexity of visual search. | Introduction
Object recognition algorithms often have two phases: a selection phase where a region (or a subset of image
features) is chosen, and a verication phase where the algorithm veries whether the object is present in the
chosen region. The algorithm speed depends on the strategy used for selecting regions. We call this strategy an
\attention strategy" since its task is to direct computational \attention" to promising parts of the image.
Classical object recognition algorithms have xed attention strategies. For example, template matching and
Hough Transform process the image in a xed raster fashion 1 . Fixed strategies can be slow, and to speed them
practical implementations employ ad-hoc preprocessing such as color thresholding. The ad-hoc procedures
are really attention strategies, since their purpose is to direct computation to the promising regions of the image.
Ad-hoc pre-processing techniques are useful, but they have two serious limitations:
1. Because they are ad-hoc, we are never sure that they are doing the best that can be done.
2. Ad-hoc techniques are usually unable to adapt to image statistics. For example, consider the strategy of
thresholding on color. If an object has a prominent red part, the image may be thresholded on the specic
hue of red. This can be problematic if the background also contains red objects. In that case, it may be
better to threshold on some other color provided some part of the object has that color.
This capacity to adapt the attention mechanism to the statistics of the image is critical for a good attention
algorithm. We know that the human visual attention system has this capacity. Most ad-hoc algorithms do
not and are easily confounded when faced with certain backgrounds.
Interpretation trees search subtrees with a xed priority.
Our aim is to propose an attention algorithm to overcome these two limitations. We replace ad-hoc strategies
with strategies based on probabilistic decision theory. In particular, we investigate a strategy based on the
maximum-likelihood (ML) decision rule. The ML rule turns out to give an adaptive strategy.
We formulate the problem as follows. We assume that the object recognition system is composed of three
sub-systems: (1) a pre-attentive system, (2) an attention mechanism, and (3) a post-attentive system.
The pre-attentive system is a fast feature detector. It operates over the entire image and detects simple image
features (color, edge roughness, corner angles etc). Some of the detected features come from the target object,
the rest come from other objects in the image called distractors. We assume that the object is composed of parts
and each part gives rise to single feature in the image.
The role of the attention mechanism is to choose an object part, pair it with an image feature and to hypothesize
that this pairing is due to the presence of the object in the image. This hypothesis is passed to the post-attentive
system which uses the full geometric knowledge of the object and explores the image around the feature to nd the
object. The post-attentive system is just a traditional object recognition algorithm. It indicates to the attention
mechanism whether the hypothesis is valid or not. If the hypothesis is not valid, the attention mechanism takes
this into account and proposes the next hypothesis. The post-attentive system now focuses on this region of
the image. The process terminates either when the object is found or when all features in the image have been
exhausted.
Figure 1: Attention in Visual Search.
We make the following assumptions about the attention mechanism:
1. All features detected by the pre-attentive system are available to it all the time. This is simply another way
of saying that the pre-attentive system is fast enough to process the entire image before a detailed search
with the post-attentive system begins.
2. The attention mechanism only uses the values of the features detected by the pre-attentive system. That is,
the attention mechanism does not evaluate additional geometric constraints (by looking at pairs of image
features, for example). Imposing geometric constraints is computationally expensive and we wish to avoid
this expense 2 .
3. The attention mechanism is greedy. At every stage it chooses that part-feature pair which has the highest
likelihood of coming from the object in the image. The likelihood is evaluated after taking into account all
previous pairs rejected by the post-attention mechanism.
We make no other assumptions. In particular, we do not assume any specic pre- and post- attentive systems.
The strategy that we derive can be used with a range of user-supplied modules. We demonstrate this in our
experiments, where we use corner detectors as well as color detectors as pre-attentive systems.
Post-attentive systems are slow precisely because they evaluate geometric constraints.
In this paper, we only address 2d object recognition, and we have kept the formulation as simple as possible.
This is by design - we want to emphasize basic ideas and certain approximations. The theory in this paper can
be made more sophisticated, by adding more features, by considering multiple spatial resolutions, by clustering
pre-attentive features etc. These alternatives are not pursued in this paper.
Human Visual Attention
Human vision is known to have an attention strategy that is highly eective [14, 22, 28]. Human visual attention
is adaptive in the sense discussed above. Cognitive scientists do not yet have a complete understanding of human
visual attention, but some partial understanding has emerged. Many models proposed by cognitive scientists are
similar to our model of gure 1 [22, 28]. In these models, the pre-attentive system is fast and capable of extracting
\primitive" image features (such as color, edge smoothness, and size). The post-attentive system is slower, but
can analyze image regions in detail and use the complete geometric denition of the target object for recognition.
The human attention mechanism uses the primitive features produced by the rst system to direct the second.
A key aspect of the human visual is that it seems to use only feature values or local feature density in directing
attention. It does not appear to evaluate further geometric constraints on the primitive features.
The behavior of human visual attention in two conditions, called \pop-out" and \camou
age," is particularly
interesting. If an image contains a target whose primitive features are su-ciently dierent from the distractor
features, then the time required to nd the target is independent of the number of distractors in the image [22].
This is the \pop-out" condition. In this condition, the target just seems to pop out of the image. On the other
hand, if the target has features similar to the distractors, then the time required to nd the target grows linearly
with the number of distractors. We call this the \camou
age" condition.
In section 5 we show theoretically that \pop-out" and \camou
age" are emergent properties of our algorithm.
In section 6 we conrm this experimentally.
Relation to Previous Work
Other researchers have presented strong cases for using attention in vision algorithms [23, 24]. Some researchers
have proposed specic computational mechanisms that model what is known about human visual search [10, 11].
Others have proposed search algorithms for specic cues: parallel-line groups [17], color [5, 18], texture [19],
prominent motion [25], \blobs" in scale space [12], or intermediate objects [27]. Some authors have considered
attention for scene interpretation [16]. Others have applied it to passive tracking [21] and to active vision systems
[2, 15].
Our aim is quite dierent from these studies. We do not wish to implement a specic biology-based attention
algorithm or one that is tailored to a particular cue. Instead, we ask whether it is possible to derive an attention
strategy from rst principles which can be applied to a range of cues.
Finally, we wish to address a source of confusion. Our algorithm is sometimes compared to the interpretation
tree algorithm for object recognition [1, 6, 7]. The confusion arises from an apparent similarity between the
two: both attempt to match object parts to the image. However, this similarity is only supercial. There are
substantial dierences between the two algorithms:
1. The two algorithms operate at a dierent level. We are concerned with interaction between pre- and post-
attentive modules, rather than organizing the geometric comparison of parts and features. Interpretation
trees are concerned with the latter.
2. Interpretation trees match the entire object to edges in the image. Our algorithm is concerned with evaluating
the likelihood of a single part matching a single feature, given that the object is present in the
image.
3. The interpretation tree is an explicitly geometric algorithm - its purpose is to systematically evaluate
geometric relations between image edges. On the other hand, our attention mechanism is not concerned
with geometric relations. It works for arbitrary feature types, many of which do not enforce any geometric
constraints between parts.
These comments are not meant as a criticism of interpretation trees, but are simply meant to show that
the two have dierent goals. In fact, the two can be used together, with the interpretation tree serving as the
post-attentive system capable of fully recognizing the object.
Organization of the Paper
This paper is organized as follows. Section 2 contains the denitions and notations. Section 3 contains the
likelihood calculations. The calculation of the exact likelihood is computationally expensive and section 4 contains
approximations to it. Section 5 analyzes the behavior of ML attention strategy in a special case and demonstrates
its \adaptive" nature. Section 6 contains experimental results and section 7 concludes the paper.
Definitions
2.1 Features and Parts
We begin by defining features. By a feature we mean a primitive element of an image, such as color, corner, etc., that can be found by simple feature detectors. A feature has a value (the RGB triple of a color, the angle of a corner, etc.) which belongs to some feature space V. The set of all features in the image is F. We will refer to the value of the k-th feature by f_k. Some of the features in the image come from the instance of the object we wish to detect. These are target features. Others come from other objects in the image; these are distractor features.
The object to be recognized has M parts, {S_1, \ldots, S_M}. The set of all parts is P. Parts need not be defined in a geometric way. The only requirement is that the union of all parts is the entire object. A part may be visible or may be completely occluded in an image. The prior probability that part S_j is visible in the image is P_j (the probability of complete occlusion of the part is 1 - P_j). We assume that each visible part gives rise to a single feature in the image. Thus, multiple parts cannot contribute to a feature and a part cannot give rise to multiple features. If a part is visible, it may still be partially occluded, and its feature value may change due to partial occlusion. We model this by saying that, if the j-th part is visible, then its feature value is a random variable with the probability density function p_j(f).
We assume that distractor feature values are realizations of a uniform Poisson process in the feature space V. We make this assumption because we do not have any knowledge of distractors and would like to treat their values as being "uniformly" distributed in V. The probability density of obtaining n distractor feature values f_1, \ldots, f_n is given by
p_\lambda(f_1, \ldots, f_n) = e^{-\lambda V} \lambda^n,    (1)
where \lambda is the process intensity and V is the feature space volume.
2.2 The Attention Mechanism
The attention mechanism is iterative and it works as follows: During each iteration the mechanism chooses that
part-feature pair which is most likely due to the target. This choice is passed on to the post-attentive system,
which evaluates whether the pairing is really due to the occurrence of the object. If it is due to the object,
then the object has been found and the search terminates. If it is not, then the attention mechanism takes this
information into account and suggests the next most likely pair.
We will denote the pairing of part S_m with feature f_n as (S_m, f_n). Since the set of all parts of the object is P and the set of all image features is F, the set of all possible part-feature pairings is P × F. We will refer to any pair which has been declared incorrect by the post-attention mechanism as a rejected pair. The set of all rejected pairs up to the j-th iteration of the algorithm is denoted by R_j. Thus, in the j-th iteration, the set of all part-feature pairs that have not been rejected is P × F − R_j. With this notation, the pseudo-code for the attention algorithm can be written as:
1. Pre-process: Extract F, the set of image features.
2. Initialize: Set j = 0 and R_0 = ∅, the empty set.
3. Loop condition: The set of pairs that remain to be tested at the j-th iteration is P × F − R_j. If this set is empty, terminate the iteration and declare that the object is not present in the image.
4. Candidate selection: From the set P × F − R_j choose the pair (S_m*, f_n*) which has the greatest likelihood of coming from the target in the image:
   (S_m*, f_n*) = argmax over (S_m, f_n) in P × F − R_j of p((S_m, f_n) | P, F, R_j),
   where p((S_m, f_n) | P, F, R_j) is the likelihood that the pair (S_m, f_n) comes from the target in the image given the set of parts, the set of image features, and the set of rejected pairs. This is the M.L. decision.
5. Object verification: Pass the selected pair to the post-attentive system for verification. If the hypothesis is correct, the object has been found. Terminate the search. Otherwise, continue to Step 6.
6. Bookkeeping: Set R_{j+1} = R_j ∪ {(S_m*, f_n*)}, increment j, and go to Step 3.
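The loop above is simple enough to state as code. The Python sketch below is ours, not the paper's C implementation; the pre-attentive extractor, the likelihood routine (defined in the following sections), and the post-attentive verifier are user-supplied callables, exactly as the paper assumes.

def attention_search(num_parts, extract_features, likelihood, verify, image):
    """Greedy ML attention loop (a sketch).  `likelihood(pair, features, rejected)`
    should implement p((S_m, f_n) | P, F, R_j); `verify` is the slow post-attentive check."""
    features = extract_features(image)        # fast pre-attentive pass over the whole image
    rejected = set()                          # R_j: part-feature pairs ruled out so far
    pairs = {(m, n) for m in range(num_parts) for n in range(len(features))}
    while pairs - rejected:
        # Candidate selection: the untested pair with the highest likelihood (the ML decision).
        m, n = max(pairs - rejected, key=lambda pair: likelihood(pair, features, rejected))
        # Object verification by the post-attentive system.
        if verify(m, features[n], image):
            return m, n                       # target found at this part-feature pairing
        rejected.add((m, n))                  # bookkeeping: R_{j+1} = R_j ∪ {(S_m, f_n)}
    return None                               # all pairs rejected: object not present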
3 The Likelihood
We need a formula for p((S_m, f_n) | P, F, R_j) to execute the above algorithm. We begin with a simple calculation.
3.1 All Parts Visible
Assume for the moment that there are no rejected pairs and that all parts of the object are visible, i.e., the prior probability P_j = 1 for every part.
Suppose that there are N features in the feature set F. We first evaluate the likelihood that a specific set of features came from the M object parts, with the rest of the features being distractors. To describe the pairing of features with parts, we introduce a part mapping function \gamma from the indices of the parts to the indices of features. The function says that the parts S_1, \ldots, S_M are mapped to the features f_{\gamma(1)}, \ldots, f_{\gamma(M)}. The likelihood of this, with the rest of the feature set accounted for as distractors, is
( \prod_{j=1}^{M} p_j(f_{\gamma(j)}) ) e^{-\lambda V} \lambda^{N-M}.
Expressions such as the above occur frequently in our analysis, and we use a special notation for them. We denote the expression by \Lambda(g_1, \ldots, g_M; H), where g_1, \ldots, g_M are the features to be matched with S_1, \ldots, S_M respectively and H is the set of distractor features. In this notation, the above likelihood is \Lambda(f_{\gamma(1)}, \ldots, f_{\gamma(M)}; F \setminus \{f_{\gamma(1)}, \ldots, f_{\gamma(M)}\}).
Now, the likelihood that the single pairing (S_m, f_n) comes from the target object is the sum of all likelihoods in which the parts are paired with features, with the restriction that part S_m is always paired with feature f_n. This is
p((S_m, f_n) | P, F) = \sum_{\gamma: \gamma(m)=n} \Lambda(f_{\gamma(1)}, \ldots, f_{\gamma(M)}; F \setminus \{f_{\gamma(1)}, \ldots, f_{\gamma(M)}\}),    (2)
where the sum is over all part mapping functions \gamma which satisfy \gamma(m) = n.
Next suppose that the set of rejected pairs R_j is not empty. To calculate the likelihood that (S_m, f_n) is due to the target, we must avoid summing over those part mappings which give rise to a rejected part-feature pair. We will say that a part mapping \gamma is compatible with the set R_j if (S_i, f_{\gamma(i)}) \notin R_j for every part S_i. Using this notion, we can write the likelihood as
p((S_m, f_n) | P, F, R_j) = \sum_{\gamma \text{ compatible with } R_j,\; \gamma(m)=n} \Lambda(f_{\gamma(1)}, \ldots, f_{\gamma(M)}; F \setminus \{f_{\gamma(1)}, \ldots, f_{\gamma(M)}\}),    (3)
where the sum is over all part mapping functions that are compatible with R_j and which satisfy \gamma(m) = n.
3.2 Occluded Parts
Next, consider the possibility that some parts may be completely occluded (the prior probabilities P j are not
necessarily equal to 1). To take this into account, we introduce additional features called null features. When a
part is mapped to a null feature, we say that it is completely occluded.
We augment the feature set F by adding M null features to it. The feature set now has N + M elements. The likelihood can be expressed as before:
p((S_m, f_n) | P, F, R_j) = \sum_{\gamma \text{ compatible with } R_j,\; \gamma(m)=n} \Lambda(g_1, \ldots, g_M; H),    (4)
where, as before, the sum is over all compatible part mapping functions, but the function \Lambda is now given by
\Lambda(g_1, \ldots, g_M; H) = ( \prod_{j=1}^{M} q_j(g_j) ) h(H),    (5)
in which q_j and h evaluate the likelihoods that feature values come from parts and from distractors, taking into account the prior probability of occlusion and null features:
q_j(g_j) = P_j p_j(g_j) if g_j is not a null feature, and q_j(g_j) = 1 - P_j if g_j is a null feature,    (6)
and
h(H) = e^{-\lambda V} \lambda^{N(H)},    (7)
where N(H) = number of non-null features in H.
So far, we have ignored the fact that we do not know the intensity \lambda of the distractor process (\lambda is required in equation (7)). However, we can estimate \lambda from the data as follows: Since P_j is the probability that part S_j is visible, the average number of visible parts is \sum_{j=1}^{M} P_j. Thus, the average number of distractors is N - \sum_{j=1}^{M} P_j. This number must be equal to \lambda V, which is the average number of distractors derived from the Poisson distribution. That is,
\lambda V = N - \sum_{j=1}^{M} P_j,
or,
\lambda = ( N - \sum_{j=1}^{M} P_j ) / V.    (8)
Equations (4)-(8) completely define the likelihood.
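For concreteness, equations (6)-(8) translate almost directly into code. The Python fragment below is a sketch; `priors`, `part_densities`, and `NULL` are our own names (not from the paper's C implementation), with priors[j] = P_j and part_densities[j](f) = p_j(f).

import math

NULL = None  # placeholder standing in for a null feature

def estimate_lambda(num_features, priors, volume):
    """Equation (8): lambda = (N - sum_j P_j) / V."""
    return (num_features - sum(priors)) / volume

def q(j, g, priors, part_densities):
    """Equation (6): prior-weighted part density, or the occlusion probability for a null feature."""
    if g is NULL:
        return 1.0 - priors[j]
    return priors[j] * part_densities[j](g)

def h(num_non_null_in_H, lam, volume):
    """Equation (7): Poisson term for the distractor features."""
    return math.exp(-lam * volume) * lam ** num_non_null_in_H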
4 Approximations to the Likelihood
Equations (4)-(8) are computationally expensive to evaluate because they contain the combinatorics of mapping M − 1 parts to N − 1 features. In this section, we propose two approximations. The first involves using the normal approximation to the Poisson distribution. This allows us to simplify the expression for the likelihood by eliminating some terms. The second, and the more radical approximation, involves using a reduced number of parts. That is, we consider smaller objects formed by taking r-tuples of parts from the original object, and we calculate the likelihood that a part-feature pair comes from at least one of the simpler objects.
We find in practice that the simple case of r = 2 gives very satisfactory results and we use this approximation in all our experiments. For completeness, we report calculations for the general (r ≥ 1)-tuple case.
4.1 The Normal Approximation
The Poisson distribution for the distractor values can be approximated by a normal distribution when the number of distractors is large [8]. Recall that each term being summed in equation (4) has a factor h(H) arising from equation (5). In Appendix A, we show that if N_\phi is the number of null features mapped onto the parts, then h(H) can be approximated as
h(H) \approx C e^{\beta N_\phi},    (9)
where \beta is a constant obtained from the normal approximation and C collects the factors that are independent of N_\phi.
Referring back to equation (5), we can see that the term C is a common factor in all terms and need not be evaluated if we are only interested in finding the part-feature pair that maximizes the likelihood of equation (4). With this, the likelihood becomes
p((S_m, f_n) | P, F, R_j) = \sum_{\gamma \text{ compatible with } R_j,\; \gamma(m)=n} \Lambda(g_1, \ldots, g_M; H),    (10)
where the function \Lambda is now given by
\Lambda(g_1, \ldots, g_M; H) \approx ( \prod_{j=1}^{M} q_j(g_j) ) C e^{\beta N_\phi} = C \prod_{j=1}^{M} r_j(g_j),    (11)
where N_\phi is the number of null features in (g_1, \ldots, g_M) and
r_j(g_j) = P_j p_j(g_j) if g_j is not a null feature, and r_j(g_j) = (1 - P_j) e^{\beta} if g_j is a null feature.    (12)
In the last step of equation (11) we dropped the C term and included the exponential term in the definition of r_j.
4.2 Matching Simpler Models
We now proceed to the second approximation. Recall that in calculating the exact likelihood for (S_m, f_n) we had to account for the combinatorics of all of the remaining parts matching features. In the approximation considered here, we only account for r − 1 of the remaining parts. That is, we consider all simpler objects formed by the part S_m and (r − 1)-tuples of other parts, and evaluate the likelihood that (S_m, f_n) comes from at least one of the simpler objects.
4.3 Approximate Likelihood for r = 1
In this case, we have M simplified objects, each object having exactly one unique part. The likelihood that the pair (S_m, f_n) comes from at least one of these simpler objects is equal to the likelihood that it comes from the object having S_m as its single part. This likelihood is just r_m(f_n).
4.4 Approximate Likelihood for r = 2
In this case, we have M − 1 simplified objects, each object having two parts, namely the part S_m and one other part, and we calculate the likelihood that the pair (S_m, f_n) comes from at least one of these simpler objects.
Let S_i (i ≠ m) be one other specific part. Then, using equations (10)-(12), the joint likelihood that the pair (S_m, f_n) comes from this simplified object is
\sum_{\gamma \text{ compatible with } R_j,\; \gamma(m)=n} r_m(f_n) r_i(f_{\gamma(i)}),    (13)
where the sum is over all part mapping functions \gamma which (1) map the part index set {m, i} into the feature index set, (2) are compatible with R_j, and (3) satisfy \gamma(m) = n. The functions r are defined by equation (12). This expression can easily be rewritten as
r_m(f_n) \sum_{k} r_i(f_k),
where, on the right-hand side, the sum is over all feature indices k for which k ≠ n and the pair (S_i, f_k) is not in R_j.
Therefore, the joint likelihood that the pair (S_m, f_n) comes from one or more of the 2-part objects is the sum of the above likelihood over all i ≠ m, that is, r_m(f_n) \sum_{i \neq m} \sum_{k} r_i(f_k).
The computational complexity of this expression is O(NM).
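A direct transcription of this O(NM) computation is sketched below in Python (names such as `priors`, `part_densities`, and `NULL` are ours, not the paper's; the constant exponential factor attached to null features in equation (12) is folded into the occlusion weight).

NULL = None  # placeholder standing in for a null feature

def likelihood_r2(pair, features, priors, part_densities, rejected):
    """Sketch of the r = 2 approximate likelihood for a pair (S_m, f_n).

    priors[j] = P_j, part_densities[j](f) = p_j(f); `rejected` is the set R_j
    of (part index, feature index) pairs ruled out so far.
    """
    m, n = pair

    def r(j, g):                          # equation (12)
        if g is NULL:
            return 1.0 - priors[j]        # part j completely occluded
        return priors[j] * part_densities[j](g)

    total = 0.0
    for i in range(len(priors)):          # the "one other part" S_i
        if i == m:
            continue
        inner = r(i, NULL)                # S_i mapped to a null feature
        for k, f in enumerate(features):  # S_i mapped to a real feature f_k
            if k != n and (i, k) not in rejected:
                inner += r(i, f)
        total += inner
    return r(m, features[n]) * total      # r_m(f_n) * sum over partner parts and features

Wrapped as lambda pair, feats, rej: likelihood_r2(pair, feats, priors, part_densities, rej), this plugs directly into the search loop sketched earlier.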
4.5 Higher Order Approximations
Now, consider the likelihood that the pairing (S_m, f_n) is due to at least one simplified object formed by the part S_m and r − 1 other parts of the original object.
The expression for this likelihood is messy. To simplify its presentation, we adopt the following convention: we represent by I an ordered set of r part indices {i_1, \ldots, i_r} such that {S_{i_1}, \ldots, S_{i_r}} is one simplified object. Two different sets of the type I represent two different combinations of r parts from the object.
Repeating the calculation for equation (13), we get
\sum_{I: m \in I} \sum_{\gamma \text{ compatible with } R_j,\; \gamma(m)=n} \prod_{i \in I} r_i(f_{\gamma(i)}),    (14)
where the first sum is over all I which contain m, and the second sum is over all part mapping functions (which map I to the feature index set) which are compatible with R_j and for which \gamma(m) = n. The complexity of evaluating the likelihood of equation (14) is O(C^{N-1}_{r-1}). These calculations give us likelihoods which can be used with the attention algorithm in practice. This completes the description of the algorithm.
Next, we turn to investigate the adaptability of the attention algorithm.
Figure 2: Maximum-likelihood explanation of pop-out.
5 Adaptation, Pop-out, and Camouflage
5.1 Adaptation
To understand the adaptive behavior of the algorithm, consider the following simple case:
1. The model has only two parts, S_1 and S_2. Neither part is ever completely occluded (i.e., P_1 = P_2 = 1).
2. The feature space is one dimensional and the parts have uniform feature distributions over disjoint intervals \Delta_1 and \Delta_2, each of length L. That is, the probability density that S_1 occurs with a feature value f is p_1(f) = 1/L if f \in \Delta_1 and 0 otherwise. Similarly, the probability density that S_2 occurs with a feature value f is p_2(f) = 1/L if f \in \Delta_2 and 0 otherwise.
Consider two situations in which there are 6 features in the image. In the first case, suppose that five of the features, f_1, \ldots, f_5, occur in \Delta_1 and the sixth feature f_6 occurs in \Delta_2 (illustrated in figure 2). In the second case, reverse the situation, so that the five features f_1, \ldots, f_5 occur in \Delta_2 and the sixth feature f_6 occurs in \Delta_1.
In the first case, the likelihood that S_2 matches f_6 is a sum of five terms. Each term corresponds to S_2 matching f_6, S_1 matching one feature in {f_1, \ldots, f_5}, and the rest of the features being distractors. Each term is (1/L^2) e^{-\lambda V} \lambda^4, and hence the likelihood of (S_2, f_6) is 5 (1/L^2) e^{-\lambda V} \lambda^4.
Now consider the likelihood of (S_1, f_1). It has a single term, corresponding to S_1 matching f_1, S_2 matching f_6, and f_2, \ldots, f_5 being distractor features. Its likelihood is therefore the single term (1/L^2) e^{-\lambda V} \lambda^4. The likelihood that S_1 matches any other feature occurring in \Delta_1 is the same as above.
Clearly, the likelihood that S_2 matches f_6 is greater than the likelihood that S_1 matches f_1 or any other feature in \Delta_1, and the attention mechanism chooses the former.
We can simply repeat the above calculation when the five features f_1, \ldots, f_5 are in \Delta_2 and the one feature f_6 is in \Delta_1. Again we get the likelihood of part S_1 matching the feature f_6 as 5 (1/L^2) e^{-\lambda V} \lambda^4, while the likelihood of S_2 matching any of the features in \Delta_2 is (1/L^2) e^{-\lambda V} \lambda^4. The former is clearly greater than the latter.
If \Delta_1 and \Delta_2 were ranges of red and blue colors, then the two cases can be interpreted as follows: The object has two parts, one colored red and the other blue. In the first case, we have an image with one blue feature and five red features, and in the second case we have an image with one red feature and five blue features. The attention algorithm chooses to investigate the blue feature in the first case, and the red feature in the second case. Thus the algorithm "adapts" to the distribution of features in the image and chooses to investigate the feature which is least like the distractor (i.e. background) features. This is precisely the adaptive behavior we want for the attention algorithm, and the above calculation shows that the ML decision imparts it to our algorithm.
It is easy to check that if we use the r = 1 approximation in the above calculation, the algorithm will not adapt. In fact, it is easy to check in general that adaptation (to the statistics of features in the image) is possible if r ≥ 2.
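To make the arithmetic of the two-interval example concrete, the following brute-force check (with made-up feature values and arbitrary illustrative values of L, \lambda, and V) enumerates all one-to-one part mappings and reproduces the 5:1 likelihood ratio derived above.

from itertools import permutations
import math

L, lam, V = 1.0, 1.0, 10.0                    # interval length, intensity, volume (illustrative)
in_d1 = lambda f: 0.0 <= f < 1.0              # interval Delta_1
in_d2 = lambda f: 2.0 <= f < 3.0              # interval Delta_2
p = [lambda f: 1.0 / L if in_d1(f) else 0.0,  # p_1
     lambda f: 1.0 / L if in_d2(f) else 0.0]  # p_2

features = [0.1, 0.3, 0.5, 0.7, 0.9, 2.5]     # five features in Delta_1, one (f_6) in Delta_2

def likelihood(m, n):
    """Exact likelihood of (S_m, f_n): sum over one-to-one part mappings with gamma(m) = n."""
    total = 0.0
    for g in permutations(range(len(features)), 2):   # (gamma(1), gamma(2))
        if g[m] != n:
            continue
        total += p[0](features[g[0]]) * p[1](features[g[1]]) \
                 * math.exp(-lam * V) * lam ** (len(features) - 2)
    return total

print(likelihood(1, 5) / likelihood(0, 0))    # ratio ≈ 5: (S_2, f_6) is five times more likely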
Finally, recall that pop-out and camouflage are conditions under which the human visual system finds the target in constant time and in time that grows linearly with the number of distractors, respectively. Pop-out is achieved if the target has some feature that is sufficiently different from the distractors. Camouflage occurs when all target features are similar to distractor features.
The adaptive behavior discussed above demonstrates pop-out, since the ML attention mechanism always chooses the feature which is least like the distractors. In contrast, if the target and distractors had similar features (say that there were three features in \Delta_1 and three in \Delta_2), the likelihoods of all part-feature pairs would be identical and there would be no reason to prefer one over the other. The search in this case would proceed without any strong bias towards choosing a particular pair, and the time to find the target would be similar to that of a blind serial search. It would grow linearly with the number of distractors. This is camouflage.
Thus, it appears that the ML attention mechanism can emulate pop-out and camouflage.
6 Experimental Results
We next report experiments that evaluate the performance of the attention mechanism with real images. We
conducted three sets of experiments. In each experiment, we obtained a number of images in which a target object
was present. For each image, we calculated the net number of part-feature pairings that were possible. We then
used the attention mechanism to propose part-feature pairings from this set. The number of part-feature pairings
suggested by the attention mechanism till the object was found, expressed as a fraction of the total number of
part-feature pairings, was taken as the performance measure of the attention algorithm.
The pre-attentive system and the attention mechanism were implemented in C. On the other hand, the verification of whether the proposed part-feature pair belonged to the target object was performed manually. For the likelihood calculation, all priors P_j were set to 1.0.
6.1 Corner Features
The target object used in the first experiment is shown in Figure 3: a cardboard cut-out of a fish. Fifty images containing the target were produced by placing the model in a 30 cm × 30 cm area. Commonly occurring laboratory tools were tossed on the model. Images were taken in such a way that the model had a scale range between 0.5 and 2.0. In all cases, the model was partially occluded in the image. The model was so heavily
occluded in two of the 50 images that none of its corners were visible. These images were discarded and the
algorithm was tested on the remaining 48 images. Figure 4 shows two of the 48 images.
Figure 3: The Model.
The pre-attentive system used corners of edge contours in the image as features. Corners were defined as points of local maxima of curvature on an edge contour together with the two "arms" which abut that point. Arms extend until the next point of high curvature along a contour, and the length of an arm is its arc length. Figure 5(a) shows an example of a curve parsed into corners. Corners and arms were extracted from the images
automatically by edge detection followed by edge linking and curvature calculation.
Figure 4: Example Images.
Each corner feature was parameterized by a vector of two parameters: the length of the shorter arm, a, and the average angle of deviation between the two arms, \theta (see Figure 5(b)).
The target had six corners, which were chosen as the object parts. The distribution of the model features was calculated as follows: Each target corner was occluded (in software) such that the smaller arm length after occlusion was 100%, 80%, 60% of the unoccluded smaller arm length. For each partial occlusion, the feature value (\theta, a) was calculated. These values represent samples of the distribution of (\theta, a) under partial occlusion
Figure 5: Features used in Experiments. (a) Definition of a part. (b) Features of a part: the longer arm, the shorter arm, and the average angular deviation.
Figure 6: Attentive Search for the Object.
Figure 7: Histogram of the percentage of hypotheses evaluated until recognition.
at unit magnification. The process was duplicated by changing the magnification (in software) of the model to 2.0, 1.707, and 0.5. The set of (\theta, a) values obtained in this way was fed to a standard non-parametric density
estimator to obtain the probability distribution of corner parameters for each part.
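The paper does not say which non-parametric estimator was used; a common choice is a Parzen-window (Gaussian-kernel) estimate, sketched below in Python with an arbitrary illustrative bandwidth.

import numpy as np

def make_corner_density(samples, bandwidth=(5.0, 2.0)):
    """Parzen-window estimate of p_j(theta, a) from (theta, a) samples.

    samples: (K, 2) array of (angle, shorter-arm length) values for one part.
    bandwidth: kernel widths for the angle and length dimensions (illustrative).
    """
    samples = np.asarray(samples, dtype=np.float64)
    h = np.asarray(bandwidth, dtype=np.float64)

    def density(f):
        d = (samples - np.asarray(f, dtype=np.float64)) / h   # scaled offsets to each sample
        k = np.exp(-0.5 * np.sum(d * d, axis=1))               # Gaussian kernel per sample
        norm = 2.0 * np.pi * h[0] * h[1] * len(samples)
        return float(np.sum(k) / norm)

    return density

Called once per corner part with that part's occlusion and magnification samples, this returns a callable that can serve as p_j(f) in the likelihood sketches above.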
Results. The approximate ML decision rule was used with r set to 1, 2, and 3, respectively. Figure 6 shows a typical result of ML visual search with r = 2. The figure shows the sequence of corners that the algorithm analyzed in turn until it suggested the target. The corner at the successful match is also shown in the figure. Similar behavior was obtained for r = 1 and r = 3.
As mentioned above, to evaluate the effectiveness of the ML attention mechanism, we measured the average percentage of hypotheses that were processed until the correct hypothesis was suggested. On average, the r = 1 rule processed 4.67% of the possible hypotheses, the r = 2 rule processed 1.97% of the possible hypotheses, and the r = 3 rule processed 2.73% of the possible hypotheses. Clearly the latter two rules outperformed the first rule. Since the r = 1 rule does not exhibit adaptation (as discussed in section 5), it was dropped from further consideration. Further, since r = 2 and r = 3 performed similarly, but the r = 3 rule was slower in execution, the r = 2 rule appears to be a good compromise between effectiveness, ability to adapt, and computational complexity. Figure 7 shows a histogram of the percentage of hypotheses processed by this rule to find the correct match. As mentioned above, on average, only 1.97% of the possible hypotheses were processed before the correct hypothesis was found. In the absence of an attention mechanism, the expected proportion of hypotheses evaluated would be 50%. This shows that the attention mechanism significantly shortened the search time.
Pop-out and Camouflage. In the second experiment, we examined the performance of the ML attention mechanism under pop-out and camouflage conditions. As before, we used the approximate likelihood with r = 2.
To simulate pop-out and camouflage conditions, we created similar and dissimilar distractors. Dissimilar distractors were created by cutting triangular pieces of cardboard. Similar distractors were created by duplicating the target model and cutting the duplicates in half along random lines. Figure 8 shows an image where the model is present along with the triangular dissimilar distractors. This is the pop-out condition. Figure 9 shows the camouflage condition. Multiple images were obtained in the pop-out and camouflage conditions by increasing the number of distractors.
Figures 8 and 9 show typical sequences in which image features were searched until the target was suggested in the pop-out and camouflage conditions. Figure 10 shows the number of hypotheses processed until the target was suggested by the attention algorithm as a function of the total number of features in the image. In the pop-out case, the first hypothesis was always the correct one, while in the camouflage case, the number of hypotheses increased monotonically with the number of features in the image.
Figure 8: Search under Pop-out. Figure 9: Search under Camouflage.
Figure 10: Number of Hypotheses Evaluated vs. Image Features.
Figure 11: Search paths for two Ronaldo images. In (a), Ronaldo was found immediately. In (b), Ronaldo is squatting at the lower left.
6.2 Color Features
The third experiment used color features for finding people in photographs and video stills. The object model had two parts, and the feature of each part was the color distribution of its pixels in RGB space.
First, we conducted a series of experiments for finding a member of the Brazilian soccer team, Luiz Ronaldo, against a variety of backgrounds. The two parts correspond to the yellow and blue of his team's uniform. Color distributions were estimated by taking a sample of each color from several images of the subject, histogramming the samples in a coarse (32^3) RGB cube, smoothing, and normalizing.
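A color model of this form is easy to construct. The NumPy sketch below is ours (the function names are hypothetical and the smoothing kernel is an arbitrary choice, since the paper does not specify one); it bins RGB samples into a 32x32x32 histogram, smooths it, and normalizes it so that it can serve as the color density p_j(f).

import numpy as np

def build_color_model(samples, bins=32, smooth_iters=1):
    """Coarse RGB histogram model: bin, smooth, and normalize color samples.

    samples: (N, 3) array of RGB values in 0..255 taken from the part.
    Returns a (bins, bins, bins) array that sums to 1.
    """
    idx = (np.asarray(samples, dtype=np.int64) * bins) // 256
    hist = np.zeros((bins, bins, bins), dtype=np.float64)
    np.add.at(hist, (idx[:, 0], idx[:, 1], idx[:, 2]), 1.0)
    for _ in range(smooth_iters):                 # average each bin with its 6 face neighbors
        padded = np.pad(hist, 1, mode="edge")
        neigh = (padded[:-2, 1:-1, 1:-1] + padded[2:, 1:-1, 1:-1] +
                 padded[1:-1, :-2, 1:-1] + padded[1:-1, 2:, 1:-1] +
                 padded[1:-1, 1:-1, :-2] + padded[1:-1, 1:-1, 2:])
        hist = (hist + neigh) / 7.0
    return hist / hist.sum()

def color_density(model, rgb, bins=32):
    """Look up p_j(f) for a single RGB pixel under the histogram model."""
    r, g, b = (np.asarray(rgb, dtype=np.int64) * bins) // 256
    return model[r, g, b]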
Fifty images containing Ronaldo were gathered from the Internet, with images of varying sizes. Some images
were acquired from frames of MPEG movies. The criteria used to select the images were (1) the subject in
uniform should be visible in the image, and (2) the images should not be close-ups (in which case the candidate
selection task would be much too easy). The scale of the subject varied, with heights ranging from 5 to 100 pixels. Figure 11 shows two of the 50 images.
We used single pixels as image features. From each image, pixels were sampled at grid points of the image. In 5 instances, this allowed the player to fall between gaps in the sampling, and so the sampling density was quadrupled. Although this suggests that there were as many as 4800 × 2 match hypotheses in any image, in reality, most of the selected pixels had RGB values that could not be produced by either color distribution. Those pixels were discarded from the count of potential hypotheses.
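A minimal sketch of this candidate-selection step is given below, reusing the pixel_likelihood helper from the previous sketch; the grid step and the interface are assumptions, not the authors' implementation.

def candidate_pixels(image, part_densities, grid_step=8, bins=32):
    # Sample pixels on a regular grid and keep only those whose color falls in
    # a non-zero-probability bin of at least one part's color density.
    candidates = []
    height, width, _ = image.shape
    for y in range(0, height, grid_step):
        for x in range(0, width, grid_step):
            rgb = image[y, x]
            if any(pixel_likelihood(d, rgb, bins) > 0 for d in part_densities):
                candidates.append((x, y))
    return candidates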
Results For this experiment, only 0.86% of the possible hypotheses were evaluated by the attention algorithm, on average, before the correct hypothesis was found. Figure 11 shows the extreme examples of pop-out and camouflage. In (a), Ronaldo's shirt is the only yellow object in the image and it immediately pops out against a largely green background. In (b), he is camouflaged by his teammates who offer equally good matches to the color distributions.
Where's Waldo As a final example, we evaluated the effectiveness of ML attention on the well-known Where's Waldo game, a children's book series in which the goal is to find the title character in pages filled with highly detailed illustrations. A similar attempt was also reported in [13].
The same implementation described above was used with color densities from Waldo's shirt and shorts. Because Waldo is a small figure in the image, pixels were sampled at full resolution (610 × 338). Of the 194,380 × 2 non-zero-probability part-pixel match possibilities, only 28 (0.007%) were examined before the algorithm suggested the target location for Waldo. The total time for the entire search process, aside from object verification, took only 0.18 seconds on a 266MHz single-processor Pentium II. In Figure 12, the search sequence for finding Waldo is overlaid on top of the image 3 .

Figure 12: Where's Waldo?
These experiments indicate that the ML attention strategy is effective with r = 2. Furthermore, as we mentioned in section 1, the strategy works with geometric as well as non-geometric pre-attentive features.
Conclusions
In this paper we proposed a maximum-likelihood technique for directing attention. The technique uses simple features found by a fast pre-attentive module to direct a slower, but more accurate, post-attentive module. The attention mechanism recognizes that the target object is made up of parts and attempts to find the pairing of object part and image feature which is most likely to come from the target in the image. The resulting attention strategy is adaptive: its choice of the part-feature pair depends on the image feature statistics. Furthermore, the attention strategy demonstrates "pop-out" and "camouflage", which are two important properties of human visual attention. In experiments with real world images, the attention strategy significantly reduces the number of hypotheses that must be evaluated before the target is found.
Acknowledgements
We greatly benefited from discussions with Profs. Drew McDermott, Anand Rangarajan, and Lisa Berlinger of
Yale University.
3 This particular image can be found at http://www.ndwaldo.com/city/city.asp.
--R
HYPER: a new approach for the recognition and positioning of two-dimensional objects
Ferrier N.
Finding Waldo
Localizing overlapping parts by searching the interpretation tree.
Object recognition by computer.
Handbook of the Poisson Distribution
Selective Attention in Vision
Brown C.
Wang J.
Mudge T.
Ballard D.
Cave K.
--TR
--CTR
Toshiyuki Kirishima , Kosuke Sato , Kunihiro Chihara, Real-Time Gesture Recognition by Learning and Selective Control of Visual Interest Points, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.27 n.3, p.351-364, March 2005
Fred H. Hamker, The emergence of attention by population-based inference and its role in distributed processing and cognitive control of vision, Computer Vision and Image Understanding, v.100 n.1-2, p.64-106, October 2005
H. I. Bozma , G. akirolu , . Soyer, Biologically inspired Cartesian and non-Cartesian filters for attentional sequences, Pattern Recognition Letters, v.24 n.9-10, p.1261-1274, 01 June
C. Y. Fang , C. S. Fuh , P. S. Yen , S. Cherng , S. W. Chen, An automatic road sign recognition system based on a computational model of human recognition processing, Computer Vision and Image Understanding, v.96 n.2, p.237-268, November 2004
. Soyer , H. I. Bozma , Y. Istefanopulos, APES: Attentively Perceiving Robot, Autonomous Robots, v.20 n.1, p.61-80, January 2006 | object recognition;visual search;attention |
377018 | A Cost Model for Selecting Checkpoint Positions in Time Warp Parallel Simulation. | AbstractRecent papers have shown that the performance of Time Warp simulators can be improved by appropriately selecting the positions of checkpoints, instead of taking them on a periodic basis. In this paper, we present a checkpointing technique in which the selection of the positions of checkpoints is based on a checkpointing-recovery cost model. Given the current state $S$, the model determines the convenience of recording $S$ as a checkpoint before the next event is executed. This is done by taking into account the position of the last taken checkpoint, the granularity (i.e., the execution time) of intermediate events, and using an estimate of the probability that $S$ will have to be restored due to rollback in the future of the execution. A synthetic benchmark in different configurations is used for evaluating and comparing this approach to classical periodic techniques. As a testing environment we used a cluster of PCs connected through a Myrinet switch coupled with a fast communication layer specifically designed to exploit the potential of this type of switch. The obtained results point out that our solution allows faster execution and, in some cases, exhibits the additional advantage that less memory is required for recording state vectors. This possibly contributes to further performance improvements when memory is a critical resource for the specific application. A performance study for the case of a cellular phone system simulation is finally reported to demonstrate the effectiveness of this solution for a real world application. | the execution of events at each LP (this is also referred to as causality). These mechanisms are,
in general, conservative or optimistic. The conservative ones enforce causality by requiring LPs to
block until certain safety criteria are met. Instead, in the optimistic mechanisms, events may be
executed in violation of timestamp order as no "block until safe" policy is considered. Whenever
a causality error is detected, a recovery procedure is invoked. This allows the exploitation of
parallelism anytime it is possible for causality errors to occur, but they do not.
We focus on the Time Warp optimistic mechanism [10]. It allows each LP to execute events
unless its pending event set is empty and uses a checkpoint-based rollback to recover from timestamp
order violations. A rollback recovers the state of the LP to its value immediately prior to the violation.
While rolling back, the LP undoes the effects of the events scheduled during the rolled back portion
of the simulation. This is done by sending a message with an antievent for each event that must be
undone. Upon the receipt of an antievent corresponding to an already executed event, the recipient
LP rolls back as well.
There exist two main checkpointing methods to support state recovery, namely incremental state
saving and sparse checkpointing ( 1 ). The former [2, 23, 25] maintains a history of before-images of
the state variables modified during event execution so that state recovery can be accomplished by
backwards crossing the logged history and copying before-images into their original state locations.
This solution has the advantage of low checkpointing overhead whenever small fractions of the state
are updated by event execution. However, in order to provide short state recovery latency, it requires
the rollback distance to be sufficiently small. The second method, namely sparse checkpointing,
is traditionally defined as recording the LP state periodically, each χ event executions [11]. If a value of χ greater than one is used, the checkpointing overhead is kept low; however, an additional time penalty is added to state recovery. More precisely, state recovery to an unrecorded state involves reloading the latest checkpoint preceding that state and re-updating state variables through the replay of intermediate events (coasting forward). It has been shown ([4, 19]) that periodic checkpoints taken each χ event executions give rise to coasting forward with length uniformly distributed between 0 and χ - 1.
Recently, solutions mixing features of both methods have been presented in [5, 15, 22].
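To make the recovery path concrete, the following Python sketch shows how a checkpoint-based rollback with coasting forward could be organized; it is only an illustration under assumed data structures (a checkpoints map indexed by event position, a processed-event list, and an execute method on the LP object), not the kernel used in this paper.

import copy

def recover_state(lp, target_index):
    # Restore the state the LP had just before the event at position target_index.
    if target_index in lp.checkpoints:
        # The state was recorded: reload it directly.
        lp.state = copy.deepcopy(lp.checkpoints[target_index])
        return
    # Otherwise reload the latest checkpoint preceding the target state ...
    base = max(i for i in lp.checkpoints if i < target_index)
    lp.state = copy.deepcopy(lp.checkpoints[base])
    # ... and coast forward: replay intermediate events only to re-update state
    # variables; no new events are sent out during this replay.
    for ev in lp.processed_events[base:target_index]:
        lp.execute(ev, coasting_forward=True)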
Recent papers ([16, 17]) have shown that it is possible to achieve fast state recovery with less
checkpointing overhead than that of periodic checkpointing if an appropriate selection of checkpoint
positions is adopted. Along this line we present in this paper a checkpointing technique which selects
the positions of checkpoints basing on a cost model that associates to any state a checkpointing-
recovery overhead. The checkpointing overhead is either the time to take a checkpoint or zero
depending on whether the state is recorded or not. The recovery overhead is the time penalty
associated to a possible (future) rollback to that state. This penalty varies depending on whether
the state is recorded or not (if the state is not recorded, then it must be reconstructed through
coasting forward, so the recovery overhead depends on the position of the last taken checkpoint and
on the granularity of coasting forward events). Then, the convenience of recording that state as a
checkpoint is determined basing on the cost model. In order to solve the model, an estimate of the
probability for that state to be restored due to rollback in the future of the simulation execution
has to be performed. We present a method to solve this problem which actually requires quite
negligible computational effort. In addition, we remark that the cost model, being very simple, is
solved on-line without incurring significant overhead. As a final point, we note that the model takes
explicitly into account the granularity of any simulation event (and not just a mean value among
all the events) while determining the recovery overhead associated to a given state. This allows
our solution to exhibit the potential for providing good performance also in the case of simulations
having high variance of the event granularity (as is typical of battlefield simulations and simulations of complex communication systems).
We also report simulation results of a classical benchmark (the PHOLD model [7]) in several
different configurations. The obtained data show that our technique actually leads to a reduction of
the checkpointing-recovery overhead, thus allowing faster execution of the simulation. This happens
especially when the benchmark parameters are chosen to represent large and complex simulated
systems. In addition, we noted that the amount of memory used by our technique is similar to or
even less than that of previous solutions.
The remainder of the paper is organized as follows. In Section 2 a background on sparse
checkpointing methods is presented. In Section 3 our cost model and the resulting checkpointing
technique are described. Performance data are reported in Section 4.
Related Work
As pointed out in the Introduction, the traditional approach to sparse checkpointing consists of recording the LP state periodically each χ event executions, χ being the checkpoint interval. Several analytical models have been presented to determine the time-optimal value (χ_opt) for the checkpoint interval. The assumption underlying all these models is that the coasting forward length is uniformly distributed between 0 and χ - 1 (the results reported in [4, 19] have shown that this is a good approximation of the real distribution of the coasting forward length anytime checkpoints are taken on a periodic basis). In addition, most of these models [11, 13, 19] assume
there exists a fixed value for the time to record a state as a checkpoint (which is usually a good
approximation) and that the granularity of simulation events has small variance. A more general
model is the one presented in [21], which takes into account how the exact granularity of simulation
events affects the coasting forward time (thus the state recovery time). The relevance of this model stems from the fact that several real world simulations, such as battlefield simulations or simulations of mobile communication systems, actually have high variance of the event granularity, which should be taken into account in order to determine the time-optimal value χ_opt of the checkpoint interval.
The extended experimental study in [14] pointed out the effects of the variation of the checkpoint
interval on the rollback behavior. Several stochastic queuing networks connected with different
topologies were considered. Presented results showed that when the checkpoint interval slightly
increases a throttling effect appears which tends to reduce the number of rollbacks (this is due to
interactions among the LPs on the same processor). Instead, when the checkpoint interval is largely
increased (which produces, on the average, much longer coasting forwards), a thrashing effect gives
rise to an increase in the number of rollbacks (this is due to interactions among LPs on distinct
processors).
To tackle dynamism in the rollback behavior (which can be originated, for example, by variations
of the load on the processors or even by thrashing/throttling phenomena) a number of adaptive
techniques that dynamically adjust the value of χ have been proposed. Most of them [1, 4, 19] are
based on the observation of some parameters of the LP behavior (for example the rollback frequency)
over a certain number of executed events, referred to as observation period, and recalculate the
checkpoint interval at the beginning of each period. A different approach can be found in [12], where the recalculation of χ is executed at every Global-Virtual-Time (GVT) evaluation 2 .
Recently, two papers ([16, 17]) have shown that it is possible to reduce the overhead due to
checkpointing and state recovery by carefully selecting the positions of checkpoints, instead of
taking them periodically. The method in [16] takes the checkpoint decision on the basis of the
observation of differences between timestamps of two consecutive events. Whenever the execution
of an event is going to determine a large simulated time increment then a checkpoint is taken prior
to the execution of that event. This solution implicitly assumes that if the LP moves from the
state S to the state S', then the probability that S will have to be restored due to rollback in the future of the execution is proportional to the simulated time increment while moving to S'.

2 The GVT is defined as the lowest value among the timestamps of events either not yet executed or currently being executed or carried on messages still in transit. The GVT value represents the commitment horizon of the simulation because no rollback to a simulated time preceding GVT can ever occur. The GVT notion is used to reclaim memory allocated for obsolete messages and state information, and to allow operations that cannot be undone (e.g., I/Os, displaying of intermediate simulation results, etc.). The memory reclaiming procedure is known as fossil collection.
Although this assumption is suited for several simulations ([3, 16]), it has never been extensively
tested; this limits the generality of such a solution. The method in [17] is based on a notion of
probabilistic checkpointing which works as follows. For any state S an estimate of the probability
that it will have to be restored due to rollback is performed, namely P e (S). Then, before moving
from S to S 0 , a value ff uniformly distributed in the interval [0,1] is extracted and a checkpoint of
S is taken if ff recorded with probability equal to P e (S); therefore the higher
the probability that S will have to be restored, the higher the probability that it is recorded as a
checkpoint). What we noted in this method is that the probabilistic decision is actually memoryless
as it does not take into account the position of the last taken checkpoint to establish if S must be
recorded or not. If a checkpoint has been taken just a few events ago, then S can be reconstructed
through coasting forward without incurring a significant time penalty (this is true especially in
the case of small grained coasting forward events). In this case there is no need to record S as a
checkpoint even if the probability P e (S) is not minimal.
The checkpointing technique we propose in this paper solves the latter problem, as the cost model
it relies on takes into account the position of the last taken checkpoint. In addition, as already
pointed out in the Introduction, the recovery overhead associated to any state is computed by
explicitly considering the granularity of any event involved in a (possible) coasting forward (and
not just a mean granularity value). The latter feature allows our solution to be highly general and to
have the potential for providing good performance in case of both small and large variance of the
event granularity.
3 Selecting Checkpoint Positions
In this section we present the cost model and the associated policy for selecting the positions of
checkpoints. Then a method to estimate probability values needed for the solution of the cost
model is introduced. Finally, we discuss the problem of memory usage which is characteristic of
any sparse checkpointing method (it is due to memory space allocated for obsolete messages which
cannot be discarded during the execution of the fossil collection procedure) and we show how to
solve this problem in our checkpointing technique with the introduction of additional (rare) periodic
checkpoints.
3.1 The Cost Model and the Selection Policy
The LP moves from one state to another due to the execution of simulation events. An example
of this is shown in Figure 1, where the arrow extending toward the right-end represents simulated
time, black circles represent event timestamps and labeled boxes represent state values at given
points in the simulated time (i.e., those points corresponding to event timestamps). In our example, the LP moves from the state S to S' due to the execution of the event e with timestamp T.

Figure 1: The LP Moves from S to S' due to the Execution of e.

To
each state S passed through by the LP in the course of the simulation execution, we associate a
probability value, namely P (S), which is the probability that S will have to be restored due to
a (future) rollback. The value of P (S) will be used in the construction of the cost model which
expresses the checkpointing-recovery overhead associated to S.
Denoting with δ_s the time to save or reload a whole LP state, which is therefore assumed to be constant as in most previous analyses (see [11, 13, 19, 21, 22]), the checkpointing overhead C(S) associated to the state S can be expressed as follows ( 3 ):

C(S) = δ_s    if S is recorded
C(S) = 0      if S is not recorded        (1)
Expression (1) points out that if S is recorded as a checkpoint then there is a checkpointing
overhead associated to it which is quantified by the time to take a checkpoint δ_s.
Let us now model the recovery overhead associated to S. Before proceeding in the discussion
we remark that this overhead expresses only the latency to recover to the state S as a function of
the checkpointing activity of the LP; it does not take into account the effects of sending antievents
(the latter overhead is not directly dependent on checkpointing and state recovery actions,
which are those actions we are interested in). We denote as E(S) the set of all the events which
move the LP from the latest checkpointed state preceding S to S. For the example shown in Figure 2.a we have that the latest checkpointed state preceding S is X, and E(S) is the set of events executed between X and S. Instead, for the example in Figure 2.b we have that the latest checkpointed state preceding S is Y, and E(S) is the set of events executed between Y and S. Denoting with δ_e the execution time for an event e ∈ E(S), then we can associate
3 In the present analysis we use the same value δ_s for both the state saving time and the time to reload a recorded state into the current state buffer, as this is a common and realistic assumption. However, the analysis can be easily extended to the cases where this assumption does not hold.
Figure 2: Two Examples for E(S).
the following recovery overhead R(S) to the state S ( 4 ):

R(S) = P(S) · δ_s                               if S is recorded
R(S) = P(S) · (δ_s + Σ_{e ∈ E(S)} δ_e)          if S is not recorded        (2)
Expression (2) states that, in case of rollback to S (this happens with probability P(S)), if S is recorded as a checkpoint then the recovery overhead consists only of the time to reload S into the current state buffer, namely δ_s. Otherwise, it consists of the time δ_s to reload the latest checkpoint preceding S, plus the time to replay all the events in E(S) (i.e., the coasting forward time). Note
that while defining R(S) we have implicitly assumed that the probability P (S) does not change
depending on whether S is recorded or not. More technically, it is assumed that the checkpointing
actions do not affect the rollback behavior; this is a typical assumption which is common to most
analytical models (see [19, 21, 22]). Note that this assumption is not so distant from the real
behavior as usually the thrashing and throttling phenomena pointed out in [14], which are due to
variations of the checkpointing actions, are not so significant.
By combining (1) and (2) we get the following expression for the checkpointing-recovery overhead
CR(S) associated to the state S (note that CR(S) is the sum of C(S) and R(S)):
CR(S) = δ_s + P(S) · δ_s                        if S is recorded
CR(S) = P(S) · (δ_s + Σ_{e ∈ E(S)} δ_e)         if S is not recorded        (3)
Expression (3) represents our cost model. Using this model, we introduce the following selection
policy for determining the positions of checkpoints. Basically, the selection policy is such that the
state S is recorded as a checkpoint before the execution of the next event (i.e., the one which moves
the LP from S to its subsequent state) if such recording results in the minimization of the value of
CR(S). More technically, denoting with CR(S) y the value of CR(S) in case S is recorded, and with
4 Recall that δ_e accounts only for the time to execute the event e; it does not take into account the time to send out new events possibly scheduled during the execution of e. Therefore, δ_e expresses exactly the time to replay e in a coasting forward, as the only purpose of coasting forward is to re-update state variables (i.e., no event is sent out in such phase).
CR(S) n the value of CR(S) in case S is not recorded, then the selection policy can be synthesized
by the following expression:
Selection-Policy
before moving from S:
if CR(S) y ≤ CR(S) n
then record S
else do not record S
Computing the value CR(S) y requires the knowledge of δ_s and P(S). The parameter δ_s is usually known upon the execution of the simulation as it depends on the size (number of bytes)
of the state and on the time per byte needed for recording it ( 5 ). Instead, P (S) is unknown as
it depends on several parameters such as features proper of the simulation model and number of
processors used, among others. The presentation of a solution to estimate the value of P (S) at low
cost is delayed to Section 3.2.
Computing the value of CR(S) n requires (in addition to δ_s and P(S)) also the knowledge of
the execution time (granularity) of the events executed since the last checkpoint preceding S was
taken. In almost all simulations this granularity is determined by the type of the event. So, in
order to compute CR(S) n the LP needs to keep track of the type of the events in E(S). For the
case of simulations where the granularity of simulation events cannot be determined by their type,
the LP needs to monitor the CPU time actually used for the execution of the events in E(S).
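As an illustration only (not the authors' code), the decision step implied by expression (3) can be sketched in a few lines of Python; the parameter names are assumptions, and coasting_cost stands for the accumulated granularity of the events in E(S) that the LP is assumed to track as described above.

def should_checkpoint(p_restore, delta_s, coasting_cost):
    # p_restore    : estimated probability P(S) that S will be restored
    # delta_s      : time to save/reload a whole LP state
    # coasting_cost: sum of the granularities of the events in E(S)
    cr_recorded = delta_s + p_restore * delta_s
    cr_not_recorded = p_restore * (delta_s + coasting_cost)
    return cr_recorded <= cr_not_recorded

After each event execution the LP would add the granularity of that event to coasting_cost, and reset it to zero whenever a checkpoint is taken.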
Note that from among several parameters, the cost model determines the convenience of recording
S as a checkpoint prior to the execution of the next event based also on:
(i) the position of the last taken checkpoint (which determines the events that are in E(S));
(ii) the exact granularity of the events executed from the last taken checkpoint (i.e., the granularity
of the events in E(S)).
We remark that the information in points (i) and (ii) actually encodes the maximal knowledge
related to the portion of the simulation already executed and to past checkpointing actions which
is relevant to establish the amount of recovery overhead associated to S in case S is not recorded
and a rollback to it occurs. More precisely, the positions of the checkpoints other than the latest
one preceding S, and the granularity of any event out of the set E(S) do not affect the time to
reconstruct S in case it is not recorded and a rollback to it occurs.
We would like to discuss next a fundamental point. The discussion is preceded by a simple
introductory example. Let us consider the portion of the simulation execution shown in Figure 3.
5 Recall that the recording of S can be done either via software (this is the typical solution) or through the use of
special purpose hardware [8].
Figure 3: Effects of the Recording of S on CR(S').

In the figure, the state S' immediately follows S and the event which moves the LP from S to S' is ē. In case S is recorded as
a checkpoint (see Figure 3.a), the set E(S') contains only the event ē, so the checkpointing-recovery overhead CR(S') associated to the state S' is:

CR(S') = δ_s + P(S') · δ_s                      if S' is recorded
CR(S') = P(S') · (δ_s + δ_ē)                    if S' is not recorded        (4)
Instead, if S is not recorded as a checkpoint (see Figure 3.b), then the set E(S') contains the same events as the set E(S) (i.e., e_1 and e_2 in our example) plus the event ē. Therefore, the checkpointing-recovery overhead CR(S') becomes:

CR(S') = δ_s + P(S') · δ_s                                  if S' is recorded
CR(S') = P(S') · (δ_s + δ_{e_1} + δ_{e_2} + δ_ē)            if S' is not recorded        (5)
(note that also to derive expressions (4) and (5) we suppose P (S 0 ) independent of checkpointing
actions).
The previous example points out that the decision taken by the selection policy on whether
the state S must be recorded or not, determines the shape of the function CR associated to the
states that will follow S in the simulation execution. This implies that taking the checkpoint
decision basing only on the minimization of the checkpointing-recovery overhead associated to
the current state of the LP could not lead to the minimization of the whole checkpointing-recovery
overhead of the simulation (i.e., that resulting from the sum of the checkpointing-recovery overheads
associated to all the states passed through in the course of the simulation). However, we recall
that such (global) minimization requires the knowledge of the exact sequence of states that will be
passed through in the course of the simulation execution (i.e., the exact sequence of events that
will be executed), which is unknown to any LP. Furthermore, even if such sequence is known, a
selection policy based on that minimization would require enormous computational effort to take
the checkpoint decision, which would dramatically decrease the execution speed of the simulation.
In conclusion, the selection policy here introduced selects the best checkpoint positions with respect
to the portion of the simulation already executed which is known by any LP.
3.2 Estimating Probability Values
As pointed out in Section 3.1, for any state S the solution of the cost model (i.e., computing the
values of CR(S) y and CR(S) n ) needs the knowledge of the probability P (S). In this section we
present a method to estimate this value. The method has resemblances to those presented in [3, 17].
Before entering the description, we recall that the method must be such that the CPU time and
the memory space needed for keeping track of statistical data must approach to zero, and that
the CPU time to compute the estimate of P (S) must approach to zero as well. If this does not
happen, the method could severely decrease the execution speed of the simulation. Note that if
the method uses a large amount of memory, then two negative effects on performance may occur:
(i) the management of large memory may lead to poor locality of reference in the virtual memory
system; (ii) the amount of memory destined to record messages and state information is reduced,
which may require more frequent GVT calculation and fossil collection execution.
Let st(S) denote the value of the simulated time associated to the state S and let e, with
timestamp t(e), be the event which moves the LP from S to its subsequent state. The execution
of the event e produces an increment in the simulated time of the LP, moving it from st(S) to
t(e). Then, to any state S we can associate a simulated time interval, namely I(S), whose length is t(e) - st(S) and which is delimited by the two simulated time values st(S) and t(e).
The probability P (S) that the state S will have to be restored due to a (future) rollback
corresponds to the probability that a rollback will occur in the simulated time interval I(S). Recall
that a rollback in the interval I(S) occurs either because events are scheduled later with timestamps
in that interval (i.e., after e is executed, an event e' such that st(S) < t(e') < t(e) is scheduled for the LP) or because the LP that scheduled the event e rolls back revoking e (i.e., the antievent of e
arrives after e is executed).
We estimate the probability P(S) based on the length of the interval I(S) and on the monitoring of the frequency of rollback occurrence in simulated time intervals of a specified length. We define simulated time points t_i (with t_0 = 0 and t_i < t_{i+1}) which partition the simulated time positive semi-axis into intervals I_i = [t_i, t_{i+1}). For each state S, there exists an interval I_i such that the length of I(S), namely L(I(S)), is within that interval (i.e., t_i ≤ L(I(S)) < t_{i+1}). For each interval I_i the LP keeps a variable R_i, initially set to zero, which counts the amount of rollback occurrences in simulated time intervals I(X) such that t_i ≤ L(I(X)) < t_{i+1} (i.e., whenever a rollback occurs to a state X such that t_i ≤ L(I(X)) < t_{i+1}, the variable R_i is increased by one; recall that computing the length of I(X) in order to identify the corresponding counter R_i to be updated is a quite simple operation, as it only requires computing the difference between two simulated time values: one is the simulated time of the state X, the other is the timestamp of the event which moved the LP from X to its subsequent state). Then, for any state S such that t_i ≤ L(I(S)) < t_{i+1}, P(S) is estimated as R_i/N, where N is a local variable which counts the
number of states passed through (i.e., the number of events executed by the LP 6 ).
Overall, the sequence of steps to take the checkpoint decision before moving from S is as follows.
Just before the execution of the event e which moves the LP from S to its subsequent state, the length L(I(S)) of the interval I(S) is computed and the corresponding variable R_i is identified. Then, the estimate of P(S) is computed as the ratio between R_i and N, and the resulting value is used to solve the cost model; finally, the selection policy (see Section 3.1) determines the convenience of recording S prior to the execution of the event e based on the solution of the cost model.
A simple uniform decomposition of the simulated time positive semi-axis can be obtained by imposing a length equal to Δ on each interval I_i and by defining a value t_max such that the intervals I_i cover [0, t_max). In this case the ratio t_max/Δ determines the number of intervals of the decomposition (i.e., the amount of memory destined to the counters R_i). Furthermore, the value of Δ can be chosen depending on the average values of the distribution functions of the timestamp increment. In this way, the identification of the counter to be updated each time a rollback occurs, or of the counters used to estimate probability values, is quite simple and introduces negligible overhead.
For example, when a rollback occurs to a state X, the index i of the counter R_i to be updated is easily computed as follows (recall again that L(I(X)) is computed by simply taking the difference between two simulated time values):

i = ⌊L(I(X)) / Δ⌋
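A compact Python sketch of this estimator is given below; the class and attribute names are invented here for illustration, and it only assumes the uniform decomposition of width Δ (delta) described above.

class RollbackProbabilityEstimator:
    # Estimates P(S) as R_i / N, where i is the interval containing L(I(S)).
    def __init__(self, delta, t_max):
        self.delta = delta
        self.n_intervals = int(t_max / delta)
        self.rollback_counts = [0] * self.n_intervals   # the counters R_i
        self.executed_events = 0                        # the counter N

    def _index(self, interval_length):
        return min(int(interval_length / self.delta), self.n_intervals - 1)

    def record_event(self):
        # Called once per executed event (coasting-forward replays excluded).
        self.executed_events += 1

    def record_rollback(self, interval_length):
        # Called with L(I(X)) whenever a rollback restores the state X.
        self.rollback_counts[self._index(interval_length)] += 1

    def estimate(self, interval_length):
        # Returns the estimate of P(S) for a state S with L(I(S)) = interval_length.
        if self.executed_events == 0:
            return 0.0   # no statistics yet; Phase-1 checkpoints every state anyway
        return self.rollback_counts[self._index(interval_length)] / self.executed_events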
Note that the previous estimation method can be modified in order to cope with non-stationary probability values, which can be caused, for example, by variations of either the load on the processors or the behavior of the LPs in the simulated time. In particular, based on the common belief that in most simulations the near past behavior is a good approximation of the near future behavior, probability values can be estimated by using statistical data on rollback occurrences which are related to a temporal window (instead of those collected from the beginning of the execution). A few hundred events usually constitute a window length producing reliable results [4, 17, 19].
6 The events executed in coasting forward are not counted by N . They are not real simulation events as they are
actually an artefact of the state recovery procedure. Therefore, states passed through in any coasting forward are
not real simulation states of the LP.
Figure 4: General Cases for the Functions CR_y(S) and CR_n(S) (each plot marks the interval for P_e(S) where the selection policy produces a correct decision).
3.2.1 A Discussion on the Effectiveness of the Estimation Method
The method we have proposed to estimate P (S) has the advantage that it can be implemented at
very low cost (in terms of both CPU time and memory space). On the other hand, it shows the
drawback that the estimated probability value might not be quite close to the real one (note that no
control on the trust of the estimate is performed). In order to solve this problem, complex statistical
methods should be used which might have negative impact on performance, thus, in general, this
is not a feasible solution. We will show, however, that the effects of an estimate of probability
values which is not quite good are significant only for particular states (i.e., for a subset of the
states passed through in the course of the simulation). So the simple method here introduced is,
in general, enough refined to represent an effective solution for our specific problem.
Before proceeding in the discussion, we introduce the following simple notion. We say that the
selection policy introduced in Section 3.1 leads to a wrong decision anytime the checkpoint decision
based on the estimated value of P (S) is different from the one that would have been obtained by
considering the real value of P (S); otherwise we say that the selection policy produces a correct
decision.
In Figure 4.a and in Figure 4.b we show two cases for the linear functions CR_y(S) and CR_n(S) versus the value of P(S). Recall that when P(S) is equal to zero, CR_n(S) is equal to zero as well, and CR_y(S) is equal to δ_s. The cases shown are general, as the two functions have an intersection point within the interval [0,1) for P(S) (a more particular situation is obtained when CR_n(S) < CR_y(S) in the whole interval [0,1); in this case the two functions do not intersect or, at best, they intersect when P(S) = 1). We denote as P̂ the value of P(S) corresponding to the intersection point; in addition, we denote as P_r(S) the real value of P(S) and as P_e(S) the corresponding estimated value.
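Using expression (3), the intersection point P̂ can be written in closed form (a short derivation, worked out here for clarity):

δ_s + P̂ · δ_s = P̂ · (δ_s + Σ_{e ∈ E(S)} δ_e)   ⟺   P̂ = δ_s / Σ_{e ∈ E(S)} δ_e

Under this reading, the selection policy of Section 3.1 reduces to recording S whenever P_e(S) ≥ δ_s / Σ_{e ∈ E(S)} δ_e; when the accumulated granularity Σ δ_e is smaller than δ_s, P̂ exceeds one and S is never recorded regardless of the estimate, which is the situation described in footnote 7.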
Suppose P_r(S) < P̂ (see Figure 4.a); in this case CR_y(S) > CR_n(S), so there is no real convenience of recording S as a checkpoint. The same decision is taken by the selection policy for any estimated value P_e(S) less than P̂; therefore, for any value P_e(S) < P̂ the selection policy always produces a correct decision. Suppose P_r(S) ≥ P̂ (see Figure 4.b); in this case CR_y(S) ≤ CR_n(S), so there is a real convenience of recording S. The same decision is taken by the selection policy for any estimated value P_e(S) larger than or equal to P̂; therefore, for any value P_e(S) ≥ P̂ the selection policy always produces a correct decision. Overall, we have that the selection policy produces a correct decision anytime one of the following two cases occurs:

C1: both P_r(S) and P_e(S) are lower than P̂;

C2: both P_r(S) and P_e(S) are higher than or equal to P̂.
Instead, it produces a wrong decision anytime one of the following two cases occurs:

C3: P_r(S) < P̂ and P_e(S) ≥ P̂;

C4: P_r(S) ≥ P̂ and P_e(S) < P̂.

Case C3 or C4 may occur if:
(i) the value of P_r(S) is quite close to P̂ (in this case, we may get a wrong decision even with a small distance between P_r(S) and P_e(S); anyway the distance must be such that it moves P_e(S) to the opposite side of P_r(S) with respect to the value P̂);

(ii) the value of P_r(S) is quite far from P̂ and a very large distance exists between P_r(S) and P_e(S) (anyway the distance must be such that it moves P_e(S) to the opposite side of P_r(S) with respect to the value P̂).
From previous considerations, we argue that in order for the selection policy to produce a
wrong decision (cases C3 and C4), then a set of conditions must be satisfied (i.e., those producing
situations (i) or (ii)). So the states for which these conditions are actually satisfied will be, in
general, a (small) subset of all the states passed through in the course of the simulation. Therefore,
for the majority of the states, the selection policy will produce correct decisions ( 7 ). This feature
derives from the fact that the selection policy actually maps values of a continuous function (i.e.,
the difference between CR y (S) and CR n (S)) into a boolean domain. This mapping into such a
discrete domain removes the effects of "noise" (i.e., the effects of an estimate of probability values
which is not good) unless the noise itself over-steps a threshold.
7 Recall, in addition, that there exists a set of states for which cases C3 and C4 can never occur independently of the distance between the real and the estimated probability values. These are all the states S such that CR_n(S) < CR_y(S) in the whole interval [0,1) for P(S). For any of these states either the two functions CR_n(S) and CR_y(S) do not intersect or they intersect in the point P̂ = 1; in either case, C3 and C4 cannot occur as neither P_r(S) nor P_e(S) can be higher than one. For all these states the selection policy always produces a correct decision.
3.3 Adding Periodic Checkpoints to Cope with Memory Usage
Any sparse checkpointing solution suffers from a problem which is known as the memory usage
problem. A brief description of this problem is provided in this section. Then a solution allowing
our checkpointing technique to cope with this problem is presented.
Basically, the memory usage problem is related to the notion of GVT and to the fossil collection
procedure (which recovers memory allocated for obsolete state information and messages). Specif-
ically, as rollback to a simulated time equal to GVT is possible, then, in order to correctly support
state recovery, each LP must retain the latest recorded state (i.e., the latest checkpoint) with simulated
time T less than or equal to GVT and also all the messages carrying events with timestamp
larger than T; therefore, that checkpoint and all those messages cannot be discarded during the
execution of the fossil collection procedure. If very few states are recorded in the course of the
simulation, then it is possible for the number of messages which must be retained to be very large.
The drawback incurred is that the amount of memory recovered during any fossil collection may be
small, thus the fossil collection procedure is invoked frequently (as memory saturates frequently).
This may have detrimental effects on performance.
Our checkpointing technique allows the possibility that very few states are recorded in the
course of the simulation execution. This may happen whenever for any state S, the value of the
probability P(S) approaches zero. As an extreme case, if P(S) is exactly equal to zero for any state S, then the selection policy of Section 3.1 will never induce the LP to take a checkpoint. This exposes our technique to the memory usage problem. In order to prevent this problem, we
allow the LP to take (rare) periodic checkpoints. To this purpose, the LP maintains two integer
variables, namely max dist and event ex. The variable max dist records the maximum number
of event executions allowed between two consecutive checkpoint operations. The variable event ex
represents the actual distance, in terms of events, from the last checkpoint operation. By using
these two variables, the selection policy is modified as follows:
Modified-Selection-Policy
before moving from S:
if (CR(S) y ≤ CR(S) n ) or (event ex = max dist)
then record S
else do not record S
The modified selection policy does not allow the distance between two consecutive checkpoints
to be larger than max dist events (i.e., max dist state transitions), thus avoiding the memory
usage problem. Checkpointing techniques based on periodic checkpoints tackle the memory usage problem by adopting defaults for the maximum value of the checkpoint interval χ_max, which usually are between 15 and 30 (see [4, 19]). Any value within that range will be well suited for max dist.
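A minimal Python sketch of this modified decision, under the same assumed parameter names as the earlier sketch, could look as follows (the bookkeeping of event ex is left to the caller):

def modified_should_checkpoint(p_restore, delta_s, coasting_cost,
                               event_ex, max_dist):
    # Cost-based decision plus a forced (rare) periodic checkpoint every
    # max_dist events, which bounds the memory retained for messages.
    cost_based = (delta_s + p_restore * delta_s
                  <= p_restore * (delta_s + coasting_cost))
    return cost_based or event_ex >= max_dist

Whenever the function returns True and a checkpoint is taken, the caller would reset event_ex (and the accumulated coasting_cost) to zero.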
3.4 A Final Description of the Checkpointing Technique
One final point must be fixed in order to provide a complete description of our checkpointing technique. The modified selection policy introduced in Section 3.3 (just as the original one) relies on the solution of the cost model which, in turn, needs the estimate of probability values. This means that the policy cannot be applied until at least some statistical data are available (note that the problem of the absence of statistical data for the selection of the initial values of the parameters proper of the checkpointing technique is a common problem to almost all existing
techniques [4, 16, 17, 19]). To overcome this problem, we partition the execution of the LP into two
phases. Phase-1 starts at the beginning of the simulation execution and consists of few hundred
events. During this phase, the LP collects statistical data to estimate probability values, and records
as checkpoints all the states passed through (this can be easily done by adopting the modified
selection policy with max dist initially set to one). During the second phase, namely Phase-2,
the LP continues to collect statistical data (possibly using a windowing mechanism) and takes
the checkpoint decision basing on the modified selection policy with a value for max dist selected
within the interval [15,30]. In Figure 5, the complete behavior of the LP is reported.
Phase-1 (few hundred events)
settings: max dist := 1
actions: collect statistical data; select checkpoint positions based on the modified selection policy

Phase-2 (till the end of the simulation)
settings: select x ∈ [15, 30]; max dist := x
actions: collect statistical data; select checkpoint positions based on the modified selection policy

Figure 5: The Complete LP Behavior (only those Actions Relevant to the Checkpointing Technique are Shown).
4 A Performance Study
In this section experimental results are reported to compare the performance achievable by using
the checkpointing technique proposed in this paper (hereafter CT1) to the one of previous solutions.
We have considered three previous checkpointing techniques for the comparison: the one presented
by Ronngren and Ayani (CT2) [19]; the one presented by Fleischmann and Wilsey (CT3) [4] and
the one presented by Quaglia (CT4) [17]. Both CT2 and CT3 induce the LP to take checkpoints
on a periodic basis. CT2 is based on an analytical model which defines the value of the time-optimal
checkpoint interval (the assumptions underlying the model have been already discussed
in Section 2). The model is used to recalculate the value of the checkpoint interval χ based on
the observed variations of the rollback frequency. CT3 is actually a heuristic algorithm for the
dynamic recalculation of χ which has been derived based on extensive profiling and analysis of
parallel optimistic simulation of digital systems. It is based on the monitoring of a cost function
which equals the sum of checkpointing and coasting forward overheads. The value one and an adaptation direction towards increasing values are initially selected for χ (the adaptation step is one, i.e., at each adaptation point χ is increased by one). Then, the adaptation direction is inverted
each time the monitored value of the cost function shows a significant increase. CT4 relies on the
probabilistic checkpoint decision already discussed in Section 2.
We did not consider any incremental state saving method in our comparison. This is because
previous studies ([13, 18, 20]) have already pointed out that incremental state saving and sparse
checkpointing outperform each other in distinct classes of simulation problems (so the two approaches
are effective each in a distinct domain). Specifically, incremental methods are preferable
for simulations with very large state size, very small portions of the state updated by event execution
and quite short rollback distances. For any other simulation setting, sparse checkpointing
provides better performance.
Before showing the results of the comparative analysis, we describe the main features of the used
hardware/software architecture, present the selected benchmark and introduce the performance
parameters we have measured.
4.1 The Hardware/Software Architecture and the Benchmark
As hardware architecture we used a cluster of machines (Pentium II 300 MHz - 128 Mbytes RAM)
connected via Ethernet. The number of machines in the cluster is four. Inter-processor communication
relies on message passing supported by PVM [24]. There is an instance of the Time Warp
kernel on each processor. The kernel schedules LPs for event execution according to the Smallest-
Timestamp-First policy; antievents are sent aggressively (i.e., as soon as the LP rolls back [9]);
fossil collection is executed periodically each one second.
We tested the performance of the checkpointing techniques using the synthetic benchmark
known as PHOLD model, originally presented in [7]. It consists of a fixed number of LPs and of a
constant number of jobs circulating among the LPs (that is referred to as job population). Both
the routing of jobs among the LPs and the timestamp increments are taken from some stochastic
distributions. We have chosen this benchmark for two main reasons: (i) its parameters (e.g., event
execution time, size of the state, etc.) can be easily modified, (ii) it is one of the most used
benchmarks for testing performance of checkpointing techniques [1, 16, 19, 21]. In addition, it is
important to remark that this benchmark usually shows a rollback behavior similar to many other
synthetic benchmarks and to several real world models. For this benchmark, we considered three
different configurations, with progressively more complex features concerning both the execution
time (i.e., the granularity) of simulation events and the behavior of the LPs (i.e., how they route jobs
among each other). In the third configuration, the benchmark actually models a complex system.
These configurations are described separately and in detail in the following paragraphs.
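The core of the benchmark can be sketched in a few lines of Python; this is only an illustration of the model just described (the scheduling interface, the busy-wait used to emulate event granularity, and the attribute names are assumptions):

import random
import time

def busy_wait(seconds):
    # Spin to emulate the execution time (granularity) of an event.
    end = time.perf_counter() + seconds
    while time.perf_counter() < end:
        pass

def phold_event_handler(lp, event, all_lps, mean_increment=10.0):
    # Serve the job, then forward it to a uniformly chosen LP with an
    # exponentially distributed timestamp increment.
    busy_wait(event.granularity)
    destination = random.choice(all_lps)
    new_timestamp = event.timestamp + random.expovariate(1.0 / mean_increment)
    lp.schedule(destination, timestamp=new_timestamp,
                granularity=event.granularity)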
First Configuration (CONF-1). In this configuration, the PHOLD model is composed of 64
homogeneous LPs. The timestamp increment is exponentially distributed with mean 10 simulated
time units for all the LPs. The execution time (granularity) for any event is fixed at 140 microsec-
onds. The job population is one job per LP and jobs are equally likely to be forwarded to any other
LP. Two distinct cases for the size of the LP state were considered: (i) each LP has a fictitious
state of 2 Kbytes; in this case the time for recording the entire state is approximately 70 microseconds; (ii) each LP has a fictitious state of 8 Kbytes; in this case the time for recording the entire state is approximately 280 microseconds. In both cases, the fictitious state consists of
an array of integers, and the times reported above were obtained recording the state by copying all
its entries, one by one. The case 2 Kbytes state size models simulations with medium/small state
granularity (with respect to the event granularity), whereas, the case 8 Kbytes state size models
simulations with large state granularity.
Second Configuration (CONF-2). This configuration has the same features as CONF-1 except for what concerns the granularity of simulation events. There are three distinct types of jobs (i.e., of simulation events), namely a, b and c, each with a different granularity. The job population is one job
per LP (the type of the job is selected from among the three job types according to a uniform
distribution). After a job is served (but before it is forwarded to another LP), the job type is
redefined by uniformly selecting it in the set fa; b; cg; so there is a probability of 1/3 that the
type of the job remains unchanged when it is forwarded to another LP. For this configuration we
considered the same two distinct state sizes as those of CONF-1.
Third Configuration (CONF-3). This configuration has the same features of CONF-2, except
for what concerns the routing of jobs among the LPs. There are 8 hot spot LPs to which 30% of
all jobs must be routed. The hot spot LPs change in the course of the simulation (they change
every 3 × 10^4 simulated time units and the sequence of the changes is defined prior to the simulation
execution by randomly picking up new hot spots among all the LPs). This configuration possibly
gives rise to simulations which do not reach steady state for what concerns the rollback behavior;
furthermore, it is complex from both the point of view of the event granularity and the point of
view of the routing decisions which determine how simulation events are distributed among the
LPs. These features allow CONF-3 to nicely approximate simulation models of complex systems.
The same two state sizes of previous configurations were considered for CONF-3.
The three configurations of the benchmark were run using all four machines of the cluster. Each machine runs 16 LPs (no other user load runs on any machine). For the checkpointing technique CT1, probability values are estimated by using a uniform decomposition of the simulated time positive semi-axis (a windowing mechanism is used in order to compute the estimate based on statistical data which refer to the last 500 executed events at most). The same decomposition is adopted for CT4. Finally, for both CT2 and CT3 the recalculation of the value of the checkpoint interval χ is executed every 500 events (i.e., the observation period is fixed at 500 events for both techniques).
4.2 Performance Parameters
We report measures related to the following parameters:
• the event rate (ER), that is the number of committed events per second; this parameter indicates how fast the simulation executes with a given checkpointing technique;

• the efficiency (EFF), that is the ratio between the number of committed events and the total number of executed events (excluding coasting forward ones); this parameter indicates how the percentage of CPU time spent executing productive simulation work (i.e., committed events) is affected by the checkpointing technique;

• the average checkpointing overhead (ACO) per state, that is the average time spent on checkpointing operations per each state passed through in the course of the simulation (note that the number of states passed through is actually equal to the number of executed events, so ACO also expresses the checkpointing overhead per event) ( 8 );

• the average recovery latency (ARL), that is the average time for state recovery in case of rollback.
In addition, we also report some data to point out the memory utilization (MU) under different
checkpointing techniques. We recall that the average memory utilization cannot be observed without
interfering with the simulation execution. Instead, the maximum memory utilization (i.e., the maximum amount of memory destined for keeping checkpoints and messages) can be easily measured. Note that the memory utilization must also take into account the memory destined to messages carrying the events, as this is an indicator for the memory usage problem pointed out in Section 3.3. For each configuration of the benchmark we report the average observed values of the previous parameters, computed over 20 runs that were all done with different seeds for the random number generation. At least 5 × 10^6 committed events were simulated in each run.

8 States which are passed through during the execution of any coasting forward are not taken into account in the calculation of ACO. Therefore ACO expresses the checkpointing overhead per event computed over all the committed/rolled-back events. As already pointed out in a previous note, coasting forward events are actually an artefact of the state recovery procedure, so they are not real simulation events.
4.3 Experimental Results
In the following paragraphs we report the obtained results for all the selected configurations of
the benchmark. Then, a final discussion on the results is presented. Note that we report also the
parameter values measured for the case of a checkpoint taken before the execution of each simulation event
(this checkpointing technique is often referred to as copy state saving - CSS). A comparison with
results obtained under CSS is important to point out the real performance gain achievable through
sparse checkpointing techniques (so the simulations with CSS act as control simulations).
Case 2 Kbytes State Size

Checkpointing    ACO          ARL          EFF     ER                            maximum MU
Technique        (microsec.)  (microsec.)  (%)     (committed events per sec.)   (Mbytes)
CSS              70           70           71.74   8544                          19.7
CT4              22           178          73.82   10423                         8.3

Case 8 Kbytes State Size

Checkpointing    ACO          ARL          EFF     ER                            maximum MU
Technique        (microsec.)  (microsec.)  (%)     (committed events per sec.)   (Mbytes)
CSS              280          280          72.95   5981                          54.9
                 44           663          74.54   8304                          12.2
CT4              43           584          73.34   8617                          11.5

Figure 6: Results for CONF-1.
CONF-1. The obtained results are reported in Figure 6 for both the case of 2 Kbytes and the
case 8 Kbytes state size. As a general consideration, we have that the techniques from CT1 to CT4 show around the same values for EFF, which indicates that they affect the percentage of CPU time spent executing productive simulation work almost in the same way. Instead, CSS originates smaller values for EFF. This phenomenon is supposed to derive from the longer average recovery latency, namely ARL, of CT1-CT4, which, as pointed out in [14], may give rise to a throttled execution of the simulation (i.e., an execution with slightly fewer rollbacks). This indicates that, compared to CSS, sparse checkpointing techniques not only show the advantage of a better balance between checkpointing and recovery overheads, but, usually, also allow a reduction of the negative effects of rollback on performance 10 .

9 In our simulations the content of a message has size 32 bytes.
For the case 2 Kbytes state size, we have that the performance under CT2 and that under CT3
are quite similar, with CT2 allowing slightly faster execution of the simulation (i.e., a higher value
for ER). Also, the amounts of memory used by the techniques are similar (CT3 shows a slightly
larger value of maximum MU). Compared to CSS, both these techniques allow faster execution, up
to 18% and 17%, respectively. In addition, they reduce the amount of used memory by around 2.3 times.
CT1 and CT4 perform better than the other sparse checkpointing techniques (the values of ER
under CT1 and CT4 are larger than those observed for CT2 and CT3), with CT1 allowing faster
execution, up to 3%, compared to CT4. This phenomenon can be explained by looking at data
concerning ACO and ARL. The values of ACO under CT1 and CT4 are less than those under CT2
and CT3 (this is true especially for the CT1 technique which compared to CT2 and CT3 exhibits ER
which is between 6% and 7% higher). Furthermore, both CT1 and CT4 show values for ARL similar
to those of CT2 and CT3, thus allowing, on the average, fast state recovery with less checkpointing
overhead. This points out that an appropriate selection of checkpoint positions actually allows a
reduction of the checkpointing-recovery overhead. The reported values of maximum MU indicate
that all the techniques from CT1 to CT4 use around the same amount of memory.
The results obtained for the case of 8 Kbytes state size are quite similar to the previous ones.
The main difference is that, compared to CSS, all the techniques CT1-CT4 show a performance
gain which is further amplified. This is because ACO under CSS in the case of 8 Kbytes state size is notably larger, up to 4 times, than that in the case of 2 Kbytes state size; instead, sparse checkpointing techniques allow ACO in the case of 8 Kbytes state size to be less than 2 times larger than that of the case of 2 Kbytes state size.
CONF-2. Results reported in this section (see Figure 7) allow us to point out how the different
checkpointing techniques tackle the high variance of the event granularity (recall that CONF-2 is
such that there are three different event types with three distinct granularities). The data show that
the performance provided by CT2 and CT4 for CONF-2 is worse than that observed for CONF-1. CT3 shows around the same performance. CT1 performs slightly better than under CONF-1 (it shows a value of ACO which is less than that observed for CONF-1).
Case 2 Kbytes State Size
Checkpointing Technique   ACO (microsec.)   ARL (microsec.)   EFF (%)   ER (committed events per sec.)   maximum MU (Mbytes)
CSS                       70                70                70.48     8421                             18.9
CT4                       22                182               73.85     10226                            7.1

Case 8 Kbytes State Size
Checkpointing Technique   ACO (microsec.)   ARL (microsec.)   EFF (%)   ER (committed events per sec.)   maximum MU (Mbytes)
CSS                       280               280               74.46     6096                             54.0

Figure 7: Results for CONF-2
The consequence is that the performance gain of CT1 over the other techniques is slightly amplified compared to CONF-1. This behavior derives directly from the high variance of the event granularity of CONF-2. Since CT1 takes into account the granularity of each event while selecting the positions of checkpoints, it recognizes a sequence of large grained events executed since the last taken checkpoint and does not allow this sequence to become long (i.e., it breaks the sequence with a checkpoint). This happens especially if the sequence pushes the LP through a state that has a high (estimated) probability of being restored due to a future rollback. Such sequences of events are not recognized (and thus not broken) by the other techniques, which may lead to long recovery latencies. So, in order to bound that latency, these techniques induce the LPs to take more checkpoints. Finally, we note that, also for this configuration, the amounts of memory used by the different techniques (except for the case of CSS) are quite similar.
CONF-3. The results for this configuration (see Figure 8) point out how the checkpointing
techniques tackle both the high variance of the event granularity and a complex (and also variable)
behavior for the routing of jobs among the LPs. Recall that this configuration pushes the benchmark
to exhibit features quite close to those of the simulation model of a complex system. It must be
noted that CONF-3 gives rise to simulations with higher efficiency compared to CONF-1 and
CONF-2; this is because the presence of the hot spot LPs allows the simulation to advance less
"chaotically" as a good percentage of the jobs are routed towards the hot spots (i.e., jobs are free
to move everywhere with lower probability). Basically, the performance gain shown by CT1 for the
previous configurations is confirmed. This indicates that the windowing approach for the collection
of statistical data to estimate probability values actually gives CT1 the potential to react to dynamic changes in the rollback behavior originated by the variable routing of jobs during the lifetime of the simulation. In particular, for the case of 2 Kbytes state size, CT1 allows faster execution, between 3% and 10%, compared to all the other sparse checkpointing techniques. For the case of 8 Kbytes state size, the gain is between 3% and 8%. This gain derives especially from the much lower ACO of CT1 compared to the other techniques.

Case 2 Kbytes State Size
Checkpointing Technique   ACO (microsec.)   ARL (microsec.)   EFF (%)   ER (committed events per sec.)   maximum MU (Mbytes)
CSS                       70                70                78.43     10654                            25.1
CT3                       19                262               82.54     11932                            7.1

Case 8 Kbytes State Size
Checkpointing Technique   ACO (microsec.)   ARL (microsec.)   EFF (%)   ER (committed events per sec.)   maximum MU (Mbytes)
CSS                       280               280               80.16     7846                             68.6

Figure 8: Results for CONF-3
4.3.1 General Comments
The performance data collected in this study indicate that there is a real advantage in an appropriate
selection of the positions of checkpoints. This advantage is actually amplified for the case of
simulation models exhibiting complex features. For these models, the checkpointing technique
must treat each state passed through in a specific way (i.e., the decision on whether that state
must be recorded as a checkpoint has to be taken by looking at features proper to the state, namely its probability of being restored due to rollback, the position of the last taken checkpoint, and the granularity of the intermediate events). If this does not happen, there is the risk that a state which has a very high probability of being restored due to rollback is not recorded even if: (i) the last checkpoint was taken several event executions ago and (ii) the intermediate events were large grained. This may have a detrimental effect on performance in case a rollback to that state actually occurs, as the latency
to recover to that state through coasting forward might be quite long.
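As an illustration of the kind of decision rule this reasoning suggests, the following Python sketch compares the expected cost of skipping versus taking a checkpoint at the current state; the function, its parameters and the linear cost estimates are assumptions introduced for this example, not the implementation evaluated in this study.

# Illustrative only: a cost-model style checkpoint decision. The names and
# the linear cost estimates below are assumptions of this sketch.
def should_checkpoint(p_restore, granularities_since_ckpt,
                      checkpoint_cost, restore_cost):
    # If the state is not recorded, restoring it later means reloading the
    # previous checkpoint and re-executing (coasting forward) the events
    # executed since then, whose cost grows with their number and granularity.
    coasting_forward = sum(granularities_since_ckpt)
    expected_without = p_restore * (restore_cost + coasting_forward)
    # If the state is recorded, the checkpoint cost is paid now and only a
    # state reload is needed upon rollback.
    expected_with = checkpoint_cost + p_restore * restore_cost
    return expected_with < expected_without

Under such a rule, a state reached after a long run of large grained events, or one with a high estimated restore probability, is recorded even when a purely periodic scheme would skip it.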
Checkpointing techniques in which the checkpoint decision is taken on a periodic basis do not
have the potential to tackle the previous problem directly. The only way for these techniques to bound
the recovery latency is to select an "adequately small" value of the checkpoint interval, which in
turn may push the checkpointing overhead to be non minimal. The same recovery latency can be
obtained at the expense of less checkpointing overhead if an adequate selection of the checkpoint
positions is adopted.
Another important point which has been highlighted by our study is that a checkpointing
technique for an appropriate selection of the positions of checkpoints (like CT1) can be designed and
implemented based on very simple statistical methods, which introduce negligible overhead (a basic requirement to be satisfied in order for the technique not to overload the simulation program). Nevertheless, the technique itself has the potential to provide
quite good performance.
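The statistics involved can indeed be kept very simple; as a purely illustrative sketch (not the mechanism used by CT1), a sliding-window frequency estimate of the restore probability could be maintained as follows, where the window length and the bucketing of states are assumptions of the example.

from collections import deque

class WindowedRestoreEstimator:
    # Illustrative sliding-window estimator of the probability that a state
    # will be restored due to rollback; window size and bucketing are
    # assumptions of this sketch.
    def __init__(self, window=1000):
        self.samples = deque(maxlen=window)   # pairs (bucket, restored)

    def record(self, bucket, restored):
        # Called once the fate of a past state is known (restored or not).
        self.samples.append((bucket, bool(restored)))

    def probability(self, bucket):
        total = hits = 0
        for b, restored in self.samples:
            if b == bucket:
                total += 1
                hits += restored
        return hits / total if total else 0.0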
Summary
In this paper we have presented a general solution for tackling the checkpoint problem in Time Warp
simulations. The checkpointing technique we have proposed selects the positions of the checkpoints
based on a cost model which expresses the checkpointing-recovery overhead associated with any
state passed through in the course of the simulation. The cost model determines the convenience
of recording the current state before the execution of the next event. This requires an estimate of
the probability that the current state will have to be restored due to rollback. We propose a simple, low-overhead solution for this problem and discuss its effectiveness for this specific application.
Simulation results are also reported to quantify the performance achievable by our checkpointing
technique. To this purpose, a classical benchmark has been used in several different configurations.
The data show that the selection of the positions of checkpoints induced by our technique improves
performance compared to existing approaches, including conventional periodic checkpointing tech-
niques. This happens especially when the benchmark parameters are selected in order to let it
represent a simulation model with complex behavior. This indicates that the presented solution is
highly general, having the potential to allow faster execution in a wide class of simulations.
Acknowledgments
The author would like to thank Bruno Ciciani for many interesting discussions on the checkpoint
problem in Time Warp simulators. Special thanks go to Vittorio Cortellessa for his help in the
preparation of the simulation code.
--R
"Run-Time Selection of the Checkpoint Interval in Time Warp Simulations"
"Reducing Rollback Overhead in Time Warp Based Distributed Simulation with Optimized Incremental State Saving"
"Estimating Rollback Overhead for Optimism Control in Time Warp"
"Comparative Analysis of Periodic State Saving Techniques in Time Warp Simulators"
"State Saving for Interactive Optimistic Simulation"
"Parallel Discrete Event Simulation"
"Performance of Time Warp Under Synthetic Workloads"
"Design and Evaluation of the Rollback Chip: Special Purpose Hardware for Time Warp"
"Space Management and Cancellation Mechanisms for Time Warp"
"Virtual Time"
"Selecting the Checkpoint Interval in Time Warp Simulation"
"Adaptive Checkpoint Intervals in an Optimistically Synchronized Parallel Digital System Simulator"
"An Analytical Comparison of Periodic Checkpointing and Incremental State Saving"
"Effects of the Checkpoint Interval on Time and Space in Time Warp"
"Rollback-Based Parallel Discrete Event Simulation by Using Hybrid State Saving"
"Event History Based Sparse State Saving in Time Warp"
"Combining Periodic and Probabilistic Checkpointing in Optimistic Simulation"
"Fast-Software-Checkpointing in Optimistic Simulation: Embedding State Saving into the Event Routine Instructions"
"Adaptive Checkpointing in Time Warp"
"A Comparative Study of State Saving Mechanisms for Time Warp Synchronized Parallel Discrete Event Simulation"
"Event Sensitive State Saving in Time Warp Parallel Discrete Event Simu- lations"
"An Analytical Model for Hybrid Checkpointing in Time Warp Distributed Simulation"
"Incremental State Saving in SPEEDES Using C Plus Plus"
"A Framework for Parallel Distributed Computing"
"External State Management System for Optimistic Parallel Simulation"
--TR
--CTR
Francesco Quaglia , Andrea Santoro , Bruno Ciciani, Conditional checkpoint abort: an alternative semantic for re-synchronization in CCL, Proceedings of the sixteenth workshop on Parallel and distributed simulation, May 12-15, 2002, Washington, D.C.
Andrea Santoro , Francesco Quaglia, Communications and network: benefits from semi-asynchronous checkpointing for time warp simulations of a large state PCS model, Proceedings of the 33nd conference on Winter simulation, December 09-12, 2001, Arlington, Virginia
Andrea Santoro , Francesco Quaglia, Transparent State Management for Optimistic Synchronization in the High Level Architecture, Simulation, v.82 n.1, p.5-20, January 2006
Francesco Quaglia , Andrea Santoro, Modeling and optimization of non-blocking checkpointing for optimistic simulation on myrinet clusters, Journal of Parallel and Distributed Computing, v.65 n.6, p.667-677, June 2005
Diego Cucuzzo , Stefano D'Alessio , Francesco Quaglia , Paolo Romano, A Lightweight Heuristic-based Mechanism for Collecting Committed Consistent Global States in Optimistic Simulation, Proceedings of the 11th IEEE International Symposium on Distributed Simulation and Real-Time Applications, p.227-234, October 22-26, 2007
Moon Jung Chung , Jinsheng Xu, An overhead reducing technique for time Warp, Journal of Parallel and Distributed Computing, v.65 n.1, p.65-73, January 2005
Francesco Quaglia , Vittorio Cortellessa, On the processor scheduling problem in time warp synchronization, ACM Transactions on Modeling and Computer Simulation (TOMACS), v.12 n.3, p.143-175, July 2002
Francesco Quaglia , Andrea Santoro, Modeling and optimization of non-blocking checkpointing for optimistic simulation on myrinet clusters, Proceedings of the 17th annual international conference on Supercomputing, June 23-26, 2003, San Francisco, CA, USA
Francesco Quaglia , Andrea Santoro, Nonblocking Checkpointing for Optimistic Parallel Simulation: Description and an Implementation, IEEE Transactions on Parallel and Distributed Systems, v.14 n.6, p.593-610, June | rollback-recovery;checkpointing;time warp;performance optimization;optimistic synchronization;cost models;parallel discrete-event simulation |
377030 | A Framework for Integrating Data Alignment, Distribution, and Redistribution in Distributed Memory Multiprocessors. | Parallel architectures with physically distributed memory provide a cost-effective scalability to solve many large scale scientific problems. However, these systems are very difficult to program and tune. In these systems, the choice of a good data mapping and parallelization strategy can dramatically improve the efficiency of the resulting program. In this paper, we present a framework for automatic data mapping in the context of distributed memory multiprocessor systems. The framework is based on a new approach that allows the alignment, distribution, and redistribution problems to be solved together using a single graph representation. The Communication Parallelism Graph (CPG) is the structure that holds symbolic information about the potential data movement and parallelism inherent to the whole program. The CPG is then particularized for a given problem size and target system and used to find a minimal cost path through the graph using a general purpose linear 0-1 integer programming solver. The data layout strategy generated is optimal according to our current cost and compilation models. | Introduction
The increasing availability of massively parallel computers composed of a large number
of processing nodes is far from being matched by the availability of programming models
and software tools that enable users to get high levels of performance out of these systems.
The proliferation of small and medium-size parallel systems (such as desktop multiprocessor
PCs and workstations with a few processors or systems with a modest number of
processors with shared memory) has broadened the use of parallel computing in scientific
and engineering environments in which the user does not need to consider the underlying
system characteristics to get a reasonable performance. In this case, ease of programming
and portability are the main aspects to consider and this has favored the popularity of
shared-memory programming models.
A thorough understanding of the complexities of the target parallel system is required
to scale these applications to run on large systems composed of hundreds of processors
with NUMA interconnects. Although the availability of programming models such as
High Performance Fortran [18] offers a significant step towards making these systems truly
usable, the programmer is forced to design parallelization and data mapping strategies
which are heavily dependent on the underlying system characteristics. The combination of
HPF and shared-memory paradigms, which is common when targeting the multiple levels
of parallelism offered by current systems composed of SMP nodes, does not necessarily
reduce the complexity of the problem. These strategies are designed to find a balance
between the minimization of data movement and the maximization of the exploitable
parallelism. Using these strategies, the compiler generates a Single Program Multiple Data
(SPMD) program [16] for execution on the target machine. In a software-coherent NUMA
architecture the compiler translates the global references in HPF into local and non-local
references satisfied by the appropriate message-passing statements, usually respecting
the owner-computes rule (i.e., the processor owning a piece of data is responsible for
performing all computations that update it).
The best choice for a data mapping depends on the program structure, the characteristics
of the underlying system, the number of processors, the optimizations performed
by the supporting compilation system and the problem size. Crucial aspects such as data
movement, parallelism and load balancing have to be taken into consideration in a unified
way to efficiently solve the data mapping problem. Automatic data distribution tools
may play an important role in making massively parallel systems truly usable. They are
usually offered as source-to-source tools, which annotate the original program with data
mapping directives and executable statements offered by the data-parallel extensions of
current shared-memory programming models. Automatic data distribution maps arrays
onto the distributed-memory nodes of the system, according to the array access patterns
and parallel computation done in the application. The applications considered for automatic
optimization solve regular problems, i.e., use dense arrays as their main data
structures. These problems allow computation and communication requirements to be
derived at compile time.
1.1 Traditional Methods
Most automatic data mapping methods split the static data mapping problem into two
main independent steps: alignment and distribution. The alignment step attempts to
relate the dimensions of different arrays, minimizing the overall overhead of inter-processor
data movement. In [24] the authors prove that the alignment problem is NP-complete.
The distribution step decides which of the aligned dimensions are distributed, the number
of processors assigned to each of them, and the distribution pattern. Usually arrays are
distributed either in a BLOCK or CYCLIC fashion, although some tools also consider the
BLOCK-CYCLIC distribution, assigning blocks of consecutive elements to each processor
in a round-robin fashion. A good distribution maximizes the potential parallelism of the
application, balances the computational load, and offers the possibility of further reducing
data movement by serializing. However, these two steps are not independent: there is a
trade-off between minimizing data movement and exploiting parallelism.
When there is a single layout for the whole program, the mapping is said to be static.
However, for complex problems, remapping actions between code blocks can improve the
efficiency of the solution. In this case, the mapping is said to be dynamic. Note that a
dynamic data mapping requires data movement to reorganize the data layout between
code blocks. In order to solve the dynamic data mapping problem, most approaches
consider a set of reasonably good solutions (alignment and distribution) for each code
block and, in an additional step, one solution is selected for each code block that maximizes
the global behavior. Again, note that this approach may discard some solutions for each
phase that could lead to a global optimal solution. Kremer demonstrates in [19] that the
optimal selection of a mapping for each phase is again NP-complete, and Anderson and
Lam show in [2] that the dynamic data mapping problem in the presence of control flow
statements between phases is NP-hard.
1.2 Our Proposal
In this paper we propose a new framework to automatically determine the data mapping
and parallelization strategy for a Fortran 77 program, in the context of a parallelizing
environment for DMM systems. Our main interest has been to develop an approach to
find an optimal solution for the data mapping problem, given some characteristics of the
target architecture and assuming a predetermined compilation model.
Compared to previous approaches, our algorithm combines data distribution and dynamic
redistribution with parallelism information in a single graph representation: the
Communication-Parallelism Graph (CPG). The CPG contains information about data
movement and parallelism within phases, and possible data movement due to remapping
between them. All this information is weighted in time units representing data movement
and computation costs. This allows the alignment, distribution, and redistribution
problems to be solved together.
We use the CPG to model the data mapping problem as a minimal cost path problem
with a set of additional constraints that ensure the correctness of the solution. General
purpose linear 0-1 integer programming techniques, which guarantee the optimality of
the solution provided, are used to solve the minimal cost path problem. These techniques
have been proven to be effective in solving a variety of compilation problems [21].
The data mapping strategies considered are static and dynamic, one and two-dimensional,
with both BLOCK and CYCLIC distribution patterns, and the effects of control flow statements
between phases are considered by the framework. Our cost model is based on
profiling information obtained from a previous serial execution. In addition, some parameters
of the target system, such as the number of processors, the parallel thread creation
overhead, and the communication latency and bandwidth, have to be provided.
The generated data mapping strategy is used to annotate the original Fortran program
using HPF data mapping and loop parallelization directives. Note that the considered
strategies are based on the HPF model, therefore the optimality of the solution is conditioned
by the capabilities of the target HPF compiler and the accuracy of our cost model.
In our current implementation we do not handle communication optimization or pipelined
computation. Therefore we assume an HPF compiler that generates an SPMD code according
to the owner-computes rule, and that no loop transformation or communication
optimization is performed by the compiler.
The rest of the paper is organized as follows. In Section 2 we use a motivating example
to describe some related work. In order to simplify the presentation of the CPG, Section
3 describes the information required to model one-dimensional data mappings. This
model is extended in Section 4 to support two-dimensional data mappings. Section 5
describes the formulation of the minimal cost path problem used to find an optimal
data mapping. Several experiments have been performed to validate the accuracy and
feasibility of the model; these are presented in Section 6. And finally, some concluding
remarks are summarized in Section 7.
Related Work and Motivation
A large number of researchers have addressed the problem of automatic data distribution
in the context of regular applications. Due to the complexity of this problem, all
related work splits the global problem into several independent steps, usually alignment,
distribution, and remapping. In addition, most of them also perform a second level of
simplification by using heuristic algorithms to solve each step. Finally, another common
difference between the proposed methods is the cost model adopted.
Several Steps
Although data mapping can be specified by means of three attributes (alignment, distribution
and remapping), an automatic data mapping framework should not solve them
independently. Let us first motivate the coupling of the alignment and distribution sub-
problems. When they are solved independently (Li and Chen [24], Gupta et al. [15],
Kremer and Kennedy [20], Chatterjee et al. [8], Ayguad'e et al. [3]) the alignment step
may impose some constraints, in terms of parallelism exploitation, to the distribution
step. For instance, assume the simple example shown in Figure 1. The loop includes
two statements in its body: an assignment of matrix B to matrix A, and an assignment
of matrix B transposed to matrix C. An alignment algorithm will easily determine a perfect
alignment of arrays A and B and a transposition of array C, so that the solution is
communication free. However, this alignment forces the distribution step to partition a
different loop in the nest for each statement in the body (following the owner-computes
rule). This does not lead to any loop parallelization unless loop distribution is applied,
which may not be possible if the statements that cause the conflict are involved in a data
dependence cycle.
As a result of the alignment and distribution steps, all the previously referenced proposals
end up with a set of candidate static mappings for each phase. These candidate
mappings are the input to a final independent step that evaluates the usefulness of dynamic mappings [27, 20, 9, 3].
Figure 1: Simple example for alignment and distribution.
For instance, Figure 2 shows an excerpt of the Adi
integration kernel. The code consists of two phases: a sequence of sweeps along rows
in the first phase, followed by sweeps up and down columns in the second phase. If
these phases are analyzed in isolation, a row layout has the best performance in the first
phase, and a column layout has the best performance in the second phase. Note that the
solution for one phase leads to the sequentialization of the other phase, and that the sequentialization
of a parallelizable phase is far from being considered the best strategy for
a phase in isolation. Therefore, the remapping step will propose a dynamic transposition
of arrays between the two phases, but this requires data reorganization. Other proposals
export a set of solutions of each phase to the rest of the phases [20]. In this case, the
remapping step would consider the possibility of applying a static row or column layout
for the whole sequence of phases. Depending on the characteristics of the target system
(for instance, low bandwidth) this could be a better global solution even when one phase
is sequentialized. However, these proposals do not consider solutions that are not part
of the initial set of candidate solutions for each phase. This may lead to a situation in
which, for instance, a static two-dimensional distribution of rows and columns for both
phases (which may be skipped because of its performance in each isolated phase) is the
best global solution [5].
In fact, some later related work also claims for a simultaneous alignment and distribution
step [11, 7], in order to preserve parallelism while minimizing data movements.
Figure 2: Simple example for dynamic mapping.
However, they still propose an additional step to solve the dynamic data mapping problem
Algorithms
The alignment problem has been proven to be NP-complete; for this reason, Li and Chen
[24] (and other authors working from modifications of this initial work [15, 3]) propose a
heuristic algorithm to solve this problem. Other researchers propose the use of algorithms
based on dynamic programming [8]; however, in [20] they find an optimal solution to their
alignment problem by using 0-1 integer programming techniques. In order to solve the
distribution problem, an exhaustive search is usually performed. In [25] the authors
describe a model that exhaustively explores all distribution options, based on pattern
matching between the reference pattern of an assignment statement and a predefined set
of communication primitives. In [15] they use a constraint-based approach assuming a
default distribution. Furthermore, the authors in [1] combine mathematical and graph-based
problem representations to find a communication-free alignment. Then they use a
heuristic to eliminate the largest potential communication costs while still maintaining
sufficient parallelism.
The dynamic data mapping problem is again NP-complete. This is solved in [9] using
a divide-and-conquer approach. In [27] the authors use dynamic-programming techniques:
starting from a static solution, they recursively decompose it employing their cost model.
A dynamic-programming algorithm is also used in [23] to determine the best combination
of candidate layouts in a sequence of phases. In [3] a controlled exhaustive search is
considered to find the solution to the same problem. Finally, the authors in [20] formulate
the dynamic data partitioning problem as another 0-1 integer programming problem that
selects a single candidate data layout from a predetermined set of layout candidates.
Cost Model
All approaches need a cost model to make decisions in each step. Performance estimates
have to be precise enough to be able to distinguish the considered search space of possible
data mappings. With a communication/no communication [24] or a cheap/expensive [1]
cost model it is quite simple to obtain reliable solutions in complex programs. Another
option is to estimate performance through symbolic analysis of the code [25, 15, 3]; how-
ever, array data sizes have to be known at compile time, as well as the number of loop
iterations and the probabilities of conditional statements. This information has to be
provided by the user or obtained through profiling. In contrast, training sets [6] obtain
good performance estimations for the set of reported programs, although it is difficult
to build a training set general enough to guarantee that all possible source programs are
considered.
In general, it is difficult for an optimizing tool to make best guesses at compile time
with incomplete information. Thus, running the program serially first and obtaining some
profiling information is a common strategy adopted by many commercial optimizers, especially
when the quality of the solution depends heavily on the characteristics of the
source program.
Although some attempts have been made to add interaction between the three steps,
the solution proposed in this paper improves the related work in two main aspects: (i) we
present a unified representation that allows the compiler to explore solutions that would
not be obtained from isolated analysis; and (ii) we use linear 0-1 integer programming
techniques to find the optimal solution to the whole problem. Obviously, these improvements
trade off the computation time required to get an optimal solution; however, an
expensive technique can be an important tool for a compiler if it is applied selectively in
cases where the optimal solution is expected to result in a significant performance gain.
3 One-dimensional Data Mapping
A valid data mapping strategy in the one-dimensional data mapping case distributes, at
most, one dimension of each array over a one-dimensional grid of processors, in either
BLOCK or CYCLIC fashion. The distribution derived may be static or dynamic.
The number of processors N of the target architecture is assumed to be known at
compile time. The sequential execution of the original Fortran 77 program must be
profiled in order to obtain some problem-specific parameters, such as array sizes, loop
bounds and execution times, and probabilities of conditional statements.
3.1 The Communication-Parallelism Graph
In our framework, we define a single data structure that represents the effects of any
data mapping strategy allowed in our model. The name of this data structure is the
Communication-Parallelism Graph (CPG), and it is the core of our approach. The CPG is
an undirected graph G(V; E; H) that contains all the information about data movement
and parallelism in the program under analysis. It is created from the analysis of all
assignment statements within loops that reference at least one array. The set V of nodes
represents distributable array dimensions, the set E of edges represents data movement
constraints, and the set H of hyperedges represents parallelism constraints. Edges and
hyperedges are labeled with symbolic information which is later used to obtain weights
following a particular cost model.
Programs are initially decomposed into computationally intensive code blocks named
phases. Each phase has a static data mapping strategy, and realignment or redistribution
actions can occur only between phases. In our approach we have adopted the following
definition of phase, made in [17]:
A phase is a loop nest such that for each induction variable occurring
in a subscript position of an array reference in the loop body,
the phase contains the surrounding loop that defines the induction
variable. This operational definition does not allow the overlapping
or nesting of phases.
3.1.1 Nodes
Nodes in G are organized in columns. There is a column V i in G associated with each
array i in each phase. If one array is used in several phases, there will be a column for each
phase in which this array appears. Each column contains as many nodes as the maximal
dimensionality d of all arrays in the program. V_i[j] denotes, for j in {1..d}, the j-th
dimension of array i. Thus, each node represents a distributable array dimension. If one
array has dimensionality d' < d, the array is embedded onto a d-dimensional template.
In this case, the additional (d - d') nodes are used to represent
data mappings where the array is not distributed.
3.1.2 Edges
Edges in G reflect possible alignment choices between pairs of array dimensions. Edges
connect dimensions of different arrays. An edge connecting dimension j1 of array i1 to dimension j2 of array i2 represents the effects, in terms of data movement, of aligning and distributing these dimensions. (A hyperedge is a generalization of an edge, as it can connect more than two nodes.)
For each phase p in the program, the data movement information is obtained by
performing an analysis of reference patterns between pairs of arrays within the scope of p.
Reference patterns are defined in [24], and represent a collection of dependences between
arrays on both sides of an assignment statement. For each reference pattern between two arrays i1 and i2, a set
with d x d edges is added connecting each node of V_i1
to each
node of V i 2
. This set of edges represents the behavior of all alignment alternatives between
dimensions of both arrays. If i1 = i2 (self-reference pattern), then only d self-edges are
added in V i 1
, one for each node.
Edges are labeled with data movement primitives, representing the type of data movement
performed if the corresponding array dimensions are distributed. The data movement
primitives considered in our framework include 1to1, 1toN, Nto1, and NtoN. A 1to1
primitive is defined as a data movement from one processor to another processor (shift
or copy). Similarly, a 1toN primitive is defined as a data movement from one processor
to several processors (broadcast). An Nto1 primitive is defined as a data movement from
several processors to one processor (reduction). Finally, an NtoN primitive is defined as
a data movement from several processors to several processors (multicast).
Remapping information, which has an impact on data movement, is included in G in
terms of data movement edges between phases. Data flow analysis detects whether an
array i defined in a phase p1 is used in a later phase, say p2. In this case, a set
with d x d edges is added, connecting each node of the column associated with
array i at phase p 1 to each one associated with array i at phase p 2 . The label assigned
to each edge represents the data movement to be performed (a remapping)
if the corresponding dimensions are distributed. The dynamic model is further described
in [14].
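The edge labels just introduced can be represented very directly; the Python fragment below is an illustrative encoding (the class and field names are assumptions of this sketch) of the four data movement primitives and of a d x d label matrix for the edges, or remapping edges, between two columns.

from enum import Enum

class Movement(Enum):
    NONE = 0        # aligned dimensions, no data movement
    ONE_TO_ONE = 1  # shift or copy between two processors
    ONE_TO_N = 2    # broadcast
    N_TO_ONE = 3    # reduction
    N_TO_N = 4      # multicast (e.g., a transposition or a remapping)

class EdgeSet:
    # d x d labels for the edges connecting column V_i1 to column V_i2
    # (or the remapping edges between two uses of the same array).
    # label[j1][j2] is the movement implied by distributing dimension j1
    # of the first array together with dimension j2 of the second one.
    def __init__(self, d):
        self.label = [[Movement.NONE] * d for _ in range(d)]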
Control flow statements between phases have to be considered when performing the
data flow analysis. Entry or exit points, conditional or iterative statements, can modify
the execution flow of a program and therefore cause a sequencing of the phases in the
program different from the lexicographic order. A control flow analysis is used to weight
the costs (as explained in the cost model) of the remapping edges set. This analysis is
further described in [13].
3.1.3 Hyperedges
Hyperedges in G reflect opportunities for parallelism. Each candidate parallel loop k has
a hyperedge H k in G, connecting all array dimensions that have to be distributed for the
loop to be parallelized. In distributed memory machines a loop can be fully parallelized
if it does not carry any data flow dependence [28]. Data-dependence analysis detects
the set of loops that are candidates for parallelization. According to the owner-computes
rule, the processor that owns a datum is responsible for performing all computations to
update it. Therefore, if a candidate parallel loop has to be parallelized, all left-hand side
array dimensions inside that loop subscripted with the loop control variable have to be
distributed.
This means that hyperedge H k links all those nodes V i [j] such that: 1) array i is
updated in the loop body enclosed by loop k (i.e., it appears on the left-hand side of the
assignment statement), and 2) the induction variable of loop k is used in the subscript expression
in dimension j. In this case, hyperedge H k is labeled with information associated
with the corresponding candidate parallel loop.
3.1.4 CYCLIC Information
CYCLIC distributions are useful for balancing the computational load of triangular iteration
spaces; however, if neighbor communication patterns appear in the code, a CYCLIC
distribution incurs excessive data movement.
In our framework we assume a BLOCK distribution by default, meaning that edge and
hyperedge labels in the CPG are assigned assuming a BLOCK distribution. However, if
the code contains triangular loops and it does not contain any neighbor communication,
then the CYCLIC distribution is assumed, meaning that labels in the CPG are assigned
assuming a CYCLIC distribution. In the event of conflict, both alternatives are considered
by duplicating the CPG. Labels in the first CPG copy are assigned assuming a BLOCK dis-
tribution, and labels in the second CPG copy are assigned assuming a CYCLIC distribution
(note that the cost model, described in section 3.3, is different according to whether the
distribution is BLOCK or CYCLIC). In this case, some data movement edges connecting
both CPG copies have to be added in order to allow arrays to change distribution pattern
between phases. Further details of this model are fully described in [12].
3.1.5 An Example
Figure 3 shows a simple code that is used as a working example throughout this paper. The code consists of two loop nests that, following the assumed definition of phase, are identified as phases.
Figure 3: Sample code.
The maximum dimensionality of all arrays in the code is 2; therefore each column in G has two nodes. In the first phase there are four columns, say V1, V2, V3, and V4, corresponding to arrays A, B, C, and D. Similarly, in the second phase there are three columns, say V5, V6, and V7, corresponding to arrays C, D, and E. This can be seen in Figure 4. Note that although array E is one-dimensional, its corresponding column V7 has two nodes.
Figure 4: CPG for the sample code.
From the first assignment statement in the first phase, one reference pattern between
arrays A and B is identified, relating the subscripts (i, j) of A to the subscripts (i-1, j) of B.
This reference pattern indicates that if the first dimension of both arrays is distributed,
then a 1to1 data movement is needed (each processor has to send its last row to the
following processor). However, if the second dimension of both arrays is distributed, the
array accesses require no data movement. In addition, if the first dimension of array A
and the second dimension of array B (or vice versa) are aligned and distributed, an NtoN
data movement is necessary (this is a transposition, i.e., all processors send a block of
the array to all processors). A similar analysis is performed for each reference pattern in
each phase of the program. This information is shown in Figure 4, in which dotted edges
represent no data movement.
Remapping edges connect uses of the same array in different phases. For instance,
columns 3 and 5 in Figure 4 represent the same array C but in different phases. Edges
connecting the same dimension of these columns mean that the same distribution holds
between both phases and, therefore, no data movement is required. Edges connecting
different dimensions of these columns represent the effects of changing the distribution of
that array, i.e., a remapping.
Finally, a data dependence analysis detects that the do j loop in the first phase and
the do i loop in the second phase are candidates for parallelization. For the do j loop to
be parallelized, the second dimension of arrays A, B, and D have to be distributed in the
first phase. Therefore, a hyperedge connecting these nodes is inserted in G, and labeled
with information about this loop. Similarly, a hyperedge connecting the first dimension
of arrays C and D in the second phase is inserted. In this case, the hyperedge is labeled
with information about this loop, as can be seen in Figure 4.
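To make the example concrete, the following Python sketch assembles the part of the CPG just described for the sample code; the dictionary-based representation and all names are assumptions introduced for illustration, and the weights are omitted.

# Illustrative construction of the CPG of Figure 4 (names and representation
# are assumptions of this sketch; weights are omitted).
d = 2  # maximal dimensionality of the arrays in the program

# One column per array and phase: V1..V4 = A, B, C, D in phase 1,
# V5..V7 = C, D, E in phase 2; every column has d nodes.
columns = {"V%d" % i: ["V%d[%d]" % (i, j) for j in range(1, d + 1)]
           for i in range(1, 8)}

edges = []  # triples (node, node, data movement label)
# Reference pattern between A (V1) and B (V2) in the first phase:
edges.append(("V1[1]", "V2[1]", "1to1"))  # first dims distributed: shift
edges.append(("V1[2]", "V2[2]", "none"))  # second dims distributed: no movement
edges.append(("V1[1]", "V2[2]", "NtoN"))  # mixed dims: transposition
edges.append(("V1[2]", "V2[1]", "NtoN"))

# Remapping edges between the two uses of C (V3 -> V5) and of D (V4 -> V6):
for src, dst in (("V3", "V5"), ("V4", "V6")):
    for j1 in range(1, d + 1):
        for j2 in range(1, d + 1):
            label = "none" if j1 == j2 else "remap"
            edges.append(("%s[%d]" % (src, j1), "%s[%d]" % (dst, j2), label))

# Hyperedges: all listed nodes must be distributed to parallelize the loop.
hyperedges = {
    "do j, phase 1": ["V1[2]", "V2[2]", "V4[2]"],  # second dims of A, B, D
    "do i, phase 2": ["V5[1]", "V6[1]"],           # first dims of C, D
}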
3.2 Data Mapping with the CPG
The CPG contains all the information required in our model to estimate the performance
effects of the program for different mapping strategies. A valid data mapping strategy in
the one-dimensional case includes one node V i [j] from each column V i in G. This set of
nodes determines the array dimension j for each array i distributed in each phase. Note
that by selecting a set of nodes to be distributed, the alignment between them is implicitly
specified.
The performance effects for the selected data mapping strategy are estimated from the
set of edges and hyperedges that remain inside the selected set of nodes. Edges represent
data movement actions and hyperedges represent loops that can be effectively parallelized.
For instance, Figure 5 shows a valid data mapping strategy in which the second dimension
of arrays A, B, C, and D are aligned and distributed in the first phase, and the
first dimension of arrays C, D, and E are aligned and distributed in the second phase. The
effects of this data mapping strategy in the first phase are that there will be an NtoN
data movement of array C and that the do j loop will be parallelized. Similarly in the
second phase there will be an NtoN data movement of array E and the do i loop will be
parallelized. In addition, arrays C and D will be remapped between the two phases.
Figure 5: Valid dynamic mapping in the CPG for the sample code.
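Given such a selection of one node per column, the induced effects can be read off the graph mechanically; the short Python sketch below (illustrative, reusing the kind of edge and hyperedge structures sketched earlier) collects the data movement edges and the parallelizable loops that remain inside the selected set of nodes, which is precisely the information the cost model of the next subsection turns into time estimates.

def evaluate_mapping(selected, edges, hyperedges):
    # selected: set of chosen nodes, one per column; e.g. the mapping of
    # Figure 5 would be {"V1[2]", "V2[2]", "V3[2]", "V4[2]",
    #                    "V5[1]", "V6[1]", "V7[1]"}.
    moving = [(a, b, label) for (a, b, label) in edges
              if a in selected and b in selected and label != "none"]
    parallel = [loop for loop, nodes in hyperedges.items()
                if all(n in selected for n in nodes)]
    return moving, parallel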
3.3 Cost Model
In order to select an appropriate data mapping strategy, cost functions labeling edges and
hyperedges in the CPG are replaced by constant weights. Note that the accuracy of the
cost model is an orthogonal issue with respect to the framework presented in this paper:
the CPG could be weighted either with simple binary cost functions (cheap or expensive)
or by performing a complex performance prediction analysis.
The performance estimation is machine dependent; therefore, it has to be aimed at
a specific architecture. In our framework there is a configuration file with parameters
of the target system, such as the number of processors, the data movement latency and
bandwidth, and the parallel thread creation overhead. In addition, our cost model is based
on profiling information that provides array data sizes and the sequential computation
time for each loop.
The cost assigned to a data movement edge is computed as a function of the number
of bytes interchanged through remote memory accesses, and the machine specific latency
and bandwidth. Each reference pattern is matched to a set of predefined data movement
routines as defined in [25]. The routines considered in our framework, as introduced in
the previous section, are 1to1, 1toN, Nto1, and NtoN. According to the owner-computes
rule, the processor that owns the data on the left-hand side of an assignment statement
is responsible for computing that statement; therefore, the data to be moved is the non-local
data from the right-hand side of the statement. Given a data movement routine, the
number of processors, and the distribution pattern (BLOCK or CYCLIC), we can estimate
the block size of the data to move, and therefore the data movement time.
A hyperedge, associated with a candidate parallel loop, is weighted with the computation
time saved if that loop is effectively parallelized. Given the sequential computation
time of the loop and the shape of the iteration space, the number of processors and the
parallel thread creation overhead, and given a distribution pattern, this time can be easily
estimated.
For instance, in order to estimate the execution time for the first phase of the sample
code with the data mapping strategy illustrated in Figure 5 (distribute the second dimension
of each array in a BLOCK fashion), assume that the number of elements in each array
dimension is size (32 bit floating point per element), and that the sequential computation
time for the j loop is time seq. In addition, consider a target system with NP processors,
a data movement latency of LT seconds, a bandwidth of BW bytes/second, and a parallel
thread creation overhead of PT seconds.
There is an NtoN data movement of array C; therefore, the block size BS of the data to move is computed (in bytes) from the 4-byte element size, size, and NP. According to the features of the target system (its latency LT and bandwidth BW), the data movement time is estimated from BS. In addition, the do j loop in this phase can be parallelized, therefore the computation time is estimated from time seq, NP, and PT.
And the total estimated time is, thus, move time plus comp time.
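A concrete instantiation of this example could look as follows in Python; the linear latency-plus-bandwidth transfer model and the even split of the sequential time over the processors are assumptions of this sketch, standing in for the exact formulas of the cost model.

def estimate_phase1_time(size, time_seq, NP, LT, BW, PT, elem_bytes=4):
    # Assumed model, for illustration only.
    # NtoN movement of array C: take the per-processor block as the volume.
    BS = elem_bytes * size * (size // NP)   # bytes per processor block
    move_time = LT + BS / BW                # linear latency + bandwidth model
    # The do j loop is parallelized over the NP processors.
    comp_time = time_seq / NP + PT          # even split plus thread creation
    return move_time + comp_time

# e.g. estimate_phase1_time(size=1024, time_seq=2.0, NP=16,
#                           LT=20e-6, BW=100e6, PT=50e-6)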
Note that all cost weights in the CPG are expressed in time units. This uniform
cost representation allows an estimation of the trade-offs between data movement and
parallelization gains. With this estimation, the CPG can be used as the main data
structure either in a performance estimation tool or in an automatic data mapping tool.
Further details of the cost model can be found in [12].
4 Two-dimensional Data Mapping
In this section we describe how to extend the CPG in order to support two-dimensional
distributions. We believe that for most scientific programs, restricting the number of
distributed dimensions of a single array to two does not lead to any loss of effective
parallelism. Even when higher-dimensional arrays show parallelism in each dimension,
restricting the number of distributed dimensions does not necessarily limit the amount of
parallelism that can be exploited [15].
The number of available processors N is known at compile time, and it is assumed to
be a power of 2, i.e., N = 2^m. A two-dimensional processor topology is defined
by a grid of n1 x n2 processors, where n1 x n2 = N. A data mapping strategy in
the two-dimensional data mapping model distributes one or two dimensions of the arrays
over a two-dimensional grid of processors. The distribution may be static or dynamic,
and the processor topology may change according to the preferences of the program.
For simplicity in the explanation, we will initially assume that the n 1 \Theta n 2 processor
topology is fixed and known at compilation time (single topology). Although this is not
a realistic case, it allows us to introduce the general case, in which multiple processor
topologies are considered.
4.1 Single Topology
In this first case, we assume that the processor topology is two-dimensional, static,
and known at compilation time. Therefore, a valid data mapping strategy, in the two-dimensional
data mapping case with single topology, distributes two dimensions of the
arrays over a fixed n 1 \Theta n 2 grid of processors.
To this end, the CPG is made up of two undirected graph copies that are identical
except for their weights. In the first copy, named G 1 , all weights are computed assuming
processors. Likewise, in the second copy, named G 2 , all weights are computed assuming
processors. One G b copy (for b 2 f1; 2g), therefore, represents the effects of distributing
array dimensions over the b th dimension of the grid of processors. In order to distribute
dimension j of array i across the first dimension of the grid of processors (G 1 ), the node
has to be selected in G 1 (say V 1
[j]). Alternatively, to distribute dimension j of
array i across the second dimension of the grid of processors (G 2 ), the node V i [j] has to
be selected in G 2 (say V 2
[j]). This allows any array dimension to be mapped on any
dimension of the grid of processors.
According to this model, a valid data mapping strategy for the two-dimensional distribution
with a single topology problem contains one node V b
i [j] for each column V b
i in each
copy, with the additional restriction that one dimension j 1 selected in V 1
has to be
different from the dimension j 2 selected in V 2
The data movement and parallelization
effects for the selected two-dimensional data mapping strategy is estimated from the set
of edges and hyperedges that remain inside the selected set of nodes.
In Figure 6 there is an example of a valid data mapping, in which the first dimension
of all arrays is distributed along the first dimension of the n 1 \Theta n 2 grid of processors, and
the second dimension of all arrays is distributed along the second dimension of the same
grid of processors. According to this data mapping, note that the do j loop in the first
phase is parallelized with n 2 processors and that the do i loop in the second phase is
parallelized with n 1 processors. Note also that the one-dimensional array E is distributed
only along the first dimension of the grid of processors. For this reason, as replication is
not considered in our framework, there is a 1toN data movement to satisfy the assignment
statement of array E to array D in the second phase.
Figure 6: Valid solution in a two-dimensional CPG with n1 x n2 topology.
Cost functions in the two-dimensional model have to be modified with respect to the
one-dimensional case. Data movement costs at each CPG copy are estimated assuming
that two array dimensions are distributed. In order to estimate the computation time for
nested loops, some edges connecting both CPG copies are inserted. These modifications
are fully described in [12].
4.2 Multiple Topologies
In order to consider any two-dimensional topology in our model, the idea is to build the
CPG with as many two-dimensional G copies as topologies may be considered. The symbolic
information contained at each copy is identical, but weights are computed according
to the number of processors assumed in the corresponding topology. For regularity, the
one-dimensional data mapping is modeled as a two-dimensional N \Theta 1 grid of processors.
Assuming that G ab is the graph copy corresponding to the b th dimension of the a th
topology, a valid data mapping strategy in the general two-dimensional distribution problem
has to select one node V ab
i [j] for each column V ab
i in each G ab copy within a single a
two-dimensional topology. As in the previous model, a dimension j 1 selected in V a 1
has to be different from dimension j 2 selected in V a 2 i [j 2 ]. The topology a selected for
one phase has to be the same for all arrays at that phase. However, the topology may
change between phases if necessary. One change in the distribution topology of an array
requires a redistribution, therefore additional data movement edges have to be inserted in
the CPG allowing this kind of remapping, and estimating the effects of the corresponding
data movement primitive.
Our current implementation is limited to two different topologies: the one-dimensional
N \Theta 1 topology, and a squared two-dimensional n 1 \Theta n 2 topology with
2 . If
m is an odd number, then n 1 is set to 2 \Theta n 2 . The extension to more than two topologies
is straightforward, and further details can be found in [12].
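For a given number of processors the candidate grids can be enumerated directly; the helper below (Python, illustrative) returns the two topologies currently considered, the N x 1 grid and the most nearly square n1 x n2 grid with n1 x n2 = N, where n1 = 2 x n2 when N is an odd power of two.

def candidate_topologies(N):
    # N is assumed to be a power of two, N = 2**m.
    m = N.bit_length() - 1
    n2 = 2 ** (m // 2)
    n1 = N // n2              # n1 == n2 if m is even, n1 == 2 * n2 if m is odd
    return [(N, 1), (n1, n2)]

# candidate_topologies(64) -> [(64, 1), (8, 8)]
# candidate_topologies(32) -> [(32, 1), (8, 4)]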
For instance, Figure 7 contains a valid general two-dimensional data mapping strategy.
In this case, the second dimension of all arrays in phase p 1 are aligned and distributed
on the one-dimensional grid of processors with N processors. The arrays C and D are
redistributed, and the first dimension of all arrays in phase p 2 are aligned and distributed
on the first dimension of the n 1 \Theta n 2 grid of processors, and the second dimension of all
arrays are aligned and distributed on the second dimension of the same two-dimensional
grid of processors.
Note that all nodes selected in phase p 1 belong to a single topology copy (the one-dimensional
topology), while all nodes selected in phase p 2 belong to another topology
copy (the two-dimensional one). this means that the distribution in the first phase is
one-dimensional, and the distribution in the second phase is two-dimensional.
Figure 7: Valid solution in a general two-dimensional CPG.
5 Minimal Cost Path Problem Formulation
Given a valid data mapping strategy, the summation of weights of the edges that remain
inside the selected set of nodes is the data movement time estimation. The summation of
weights of the hyperedges that remain inside the selected set of nodes is the estimation
of the computation time saved due to parallelization. The total execution time of the
parallelized program is estimated as the sequential execution time plus the data movement
time minus the computation time saved due to parallelization. The optimal data mapping
strategy for the problem is that which minimizes the estimated parallel execution time.
In order to find the optimal data mapping strategy, according to our model, we translate
our data mapping problem into a minimal cost path problem. Some constraints have
to be added to guarantee the validity of the solution. In this section we describe the
formulation of our data mapping problem as a minimal cost path problem with a set of
additional constraints that guarantees the validity of the solution.
Linear programming (LP) provides a set of techniques that study those optimization
problems in which both the objective function and the constraints are linear functions.
Optimization involves maximizing or minimizing a function, usually with many variables,
subject to a set of inequality and equality constraints [26]. A linear pure integer programming
problem is an LP in which variables are subject to integrality restrictions. In
addition, in several models the integer variables are used to represent binary choices, and
therefore are constrained to be equal to 0 or 1. In this case the model is said to be a
linear programming problem.
In our framework, we model the whole data mapping problem as a linear 0-1 integer
programming problem, in which a 0-1 integer variable is associated with each edge and
hyperedge. The final value for each binary variable indicates whether the corresponding
edge or hyperedge belongs to the optimal solution. The objective function to minimize
is specified as the estimated execution time of the parallelized version of the original
program. Our problem is not purely a minimal cost path problem as several additional
restrictions have to be added to the path selection.
The 0-1 Variables
Assuming that G ab is the graph copy corresponding to the b th dimension of the a th topol-
ogy, let e ab
PQ denote the set of variables in G ab associated with edges connecting nodes in
column P to nodes in column Q. Note that according to our current implementation,
a 2g. Each set e ab
PQ contains d \Theta d elements. Let e ab
PQ [i; j] be
the variable in G ab associated with the edge connecting node i in column P to node j
in column Q. Its value is one if the corresponding edge belongs to the path, and zero
otherwise. Note that, as the graph is undirected, e ab
PQ [i; j] is equivalent to e ab
QP [j; i].
Redistribution edges behave like regular data movement edges; however, as they connect
different G ab copies, the sets of 0-1 integer variables associated with redistribution
edges are called r for simplicity in the notation of subscripts. Therefore, let r ab
PQ [i; j] be
the variable associated with the redistribution edge connecting node i of column P at G ab
to node j of column Q at G a 0 b , where a 0 denotes the alternate topology.
Finally, if an index k is assigned to each hyperedge, h ab
k will denote the 0-1 integer
variable in G ab associated to the k th hyperedge. Similarly, its value will be one if all the
nodes it links belong to the path, and zero otherwise.
The Model
Assume the D-dimensional data mapping problem, with T different topologies, each with
dimensionality D. A valid solution for this problem includes D nodes for each array, one
from each column, with the restriction that all nodes selected within a phase belong to a
single topology.
Some points should be noted about G before going into the details of the linear 0-1
integer programming model.
- All pairs of edges connecting the same two nodes can be replaced by a single edge
with weight equal to the addition of the weights of the original ones.
- There is a path between any pair of columns in G. If a set of columns is not
connected, then this set can be analyzed independently and assigned a different
data distribution strategy.
In order to guarantee the validity of the solution in the minimal cost path problem
formulation, some constraints have to be specified. These constraints can be organized in
the following sets:
C1: The solution is a set of D paths.
C2: Nodes selected in each path are distinct.
C3: Each path within a single phase selects nodes of a different dimension (1 <= b <= D) in a single topology.
C4: Each path visits one node in each column.
C5: Edges connecting selected nodes are included in the solution.
C6: Hyperedges connecting selected nodes are included in the solution.
The set of constraints C1 guarantees that a path in a G copy is connected. Thus for
each column Q connected to more than one column P and R, if one edge leading to a
node in Q is selected in the set e^{ab}_{PQ} (or in the set r^{a'b}_{PQ} when it exists), one edge leaving
this same node must be selected in the set e^{ab}_{QR} (or in the set r^{ab}_{QR} when it exists).
In terms of the variables and their values, it can be stated at each G^{ab} copy that for
each node i of each column Q connected to more than one column P and R, the sum of
the values of variables associated with the edges that connect this node to column P must
be equal to the sum of the values of variables associated with the edges that connect this
node to column R:
    \sum_{j=1}^{d} e^{ab}_{PQ}[j, i] = \sum_{j=1}^{d} e^{ab}_{QR}[i, j]
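As an illustration, the following sketch emits this family of C1 equalities for one column of one graph copy. The column adjacency (P-Q-R) and the node count d are hypothetical placeholders, not values from the paper.

```python
# Sketch of how the C1 ("path is connected") constraints could be emitted
# for one graph copy G_ab.  The adjacency P-Q-R and d are hypothetical.
d = 3
P, Q, R = "P", "Q", "R"      # column Q lies between columns P and R

def c1_constraints(a, b):
    rows = []
    for i in range(d):
        lhs = {("e", a, b, P, j, Q, i): 1 for j in range(d)}  # edges P -> node i of Q
        rhs = {("e", a, b, Q, i, R, j): 1 for j in range(d)}  # edges node i of Q -> R
        # sum(lhs) - sum(rhs) == 0, i.e. what enters node i also leaves it
        coeffs = dict(lhs)
        for key, val in rhs.items():
            coeffs[key] = coeffs.get(key, 0) - val
        rows.append((coeffs, "==", 0))
    return rows

print(len(c1_constraints(1, 1)), "C1 rows for one column of one graph copy")
```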
The set of constraints C2 guarantees that paths do not have nodes in common, or in
other words, that the same array dimension is not distributed more than once. This can
be achieved by ensuring that the number of selected edges connecting each node in all
G^{ab} copies to any other column is lower than or equal to one.
In terms of variables and their values, for each node i in column P connected to
another column Q by e^{ab}_{PQ} or r^{ab}_{PQ}, the summation of the values of the variables associated
with the edges that connect this node to column Q over all G^{ab} copies has to be lower than
or equal to one:
    \sum_{a=1}^{T} \sum_{b=1}^{D} \sum_{j=1}^{d} ( e^{ab}_{PQ}[i, j] + r^{ab}_{PQ}[i, j] ) \le 1
The set of constraints C3 forces each selected path to belong to different dimensions
of the same topology. This can be modeled, for each topology a, by ensuring that the
number of selected edges in one dimension b of the topology equals the number of selected
edges in the alternate dimension b' of the same topology.
In terms of variables and their values, for each set of edges e^{ab}_{PQ} and for each topology
a, the summation of the values of the variables associated with the edges in G^{ab} must be
equal to the summation of the values of the variables associated with the edges in G^{ab'}:
    \sum_{i=1}^{d} \sum_{j=1}^{d} e^{ab}_{PQ}[i, j] = \sum_{i=1}^{d} \sum_{j=1}^{d} e^{ab'}_{PQ}[i, j]
The sets of constraints C4 and C5 can be specified together, and these can be modeled
by forcing one edge to be selected in each dimension of a single topology. This can be
stated, in terms of variables and their values, by requiring the summation of each set of
edges e^{ab}_{PQ} and r^{ab}_{PQ} to be equal to one, for each dimension b over all topologies a:
    \sum_{a=1}^{T} \sum_{i=1}^{d} \sum_{j=1}^{d} ( e^{ab}_{PQ}[i, j] + r^{ab}_{PQ}[i, j] ) = 1
Finally, the set of constraints C6 ensures that one hyperedge is selected only when all
nodes connected by it have been selected. According to this model, a node i in column P
is selected in G^{ab} if one edge e^{ab}_{PQ}[i, j] or r^{ab}_{PQ}[i, j] that connects it to any other column Q
has been selected. Assume that hyperedge h^{ab}_k connects n nodes in G^{ab}, one node in
each of n different columns. It can be stated, in terms of variables and their
values, that h^{ab}_k may take the value one only if each of these n nodes is selected;
this must be accomplished for each hyperedge k at each G^{ab} copy.
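The exact inequality of C6 is not reproduced above. One standard linearization with the stated meaning (h_k may be one only if all n linked nodes are selected) is sketched below, purely as an assumed, equivalent formulation; the variable names are illustrative.

```python
# Assumed linearization of C6: writing s_m for the 0-1 expression that the
# m-th linked node is selected,   n * h_k <= s_1 + s_2 + ... + s_n.
def hyperedge_constraint(h_var, node_selection_exprs):
    """node_selection_exprs: list of coefficient maps, one per linked node."""
    n = len(node_selection_exprs)
    coeffs = {h_var: n}
    for expr in node_selection_exprs:
        for var, c in expr.items():
            coeffs[var] = coeffs.get(var, 0) - c
    return (coeffs, "<=", 0)          # n*h_k - sum(s_m) <= 0

row = hyperedge_constraint("h_1", [{"e_a": 1}, {"e_b": 1, "r_b": 1}])
print(row)
```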
Example
For instance, assume the first phase of the one-dimensional CPG shown in Figure 4. There
are four columns (in the figure A, B, C, and D respectively), and three sets of edges
connecting these columns. In addition, there is one hyperedge. Note that,
in the one-dimensional case, both a
and b equal 1 and, therefore, redistribution edges between CPG copies are not required.
The set of constraints C1 guarantees that the path in CPG is connected. Columns
B and C are connected to more than one column, so one constraint is added for each
of these two columns:
The sets of constraints C2 and C3 guarantee the compatibility of the different paths
in the multi-dimensional CPG; therefore, they are not necessary in this example. The set
of constraints C4 and C5 are specified together, and force the selection of one edge to
each set of edges:
And the set of constraints C6 ensures the correct selection of the hyperedge:
Note that as the graph is undirected, the third constraint could also be specified as:
6 Experimental Results
Several experiments have been performed in order to validate different aspects of our
framework. First of all, we show the complexity in terms of computational time spent
in finding the optimal solution for a set of programs from different benchmark suites.
Secondly, the accuracy of the predictions is illustrated to demonstrate the validity of the
model.
6.1 Complexity of the Approach
The programs selected to evaluate the complexity of the model are the Alternating Direction
Implicit (Adi) integration kernel, the Erlebacher program developed by Thomas
Eidson at ICASE, programs Shallow, Tomcatv, Baro, and x42 from the xHPF benchmark
set 2 , and routine Rhs from the APPSP NAS benchmark set. For the purpose of this
evaluation, programs Erlebacher, Shallow, and Baro have been inlined (i.e., each
call has been replaced by the actual code), and routine Rhs has been transformed into a
single program.
Table 1 includes information about the number of code lines, the total number of
loops, the number of loops that are candidates for parallelization, the number of phases
in each program, the number of different arrays and their dimensionality, and the number
of different reference patterns between arrays. These characteristics are the parameters
that can determine the complexity of the final optimization problem.
Table 2 shows the number of 0-1 integer variables and the number of constraints
required (according to the model described in section 5) to formulate the minimal cost
path problem for one-dimensional data mappings. The last column shows the total CPU
time (in seconds) required to find the optimal solution. All CPU times were obtained
using a Sun UltraSparc. The model was built assuming a multiprocessor system with 8
2 xHPF is available by anonymous ftp at ftp.infomall.org in directory tenants/apri/Benchmarks
Program     Lines  Loops  Parall  Phases  Arrays  Dims  Patts
Rhs           535     37      37       4       4     4     24
Tomcatv       178
Baro         1153     98      86      24      38     2    428
Shallow

Table 1: Characteristics of the selected programs.
processors and a bandwidth of 2 Mbytes per second.
Program      edges  hyper  constr  time
Erlebacher    1359     68     804   3.4
Rhs            336     37     176   0.5
Baro          1484     83    1608  10.4
Shallow        936     38    1004   3.9

Table 2: Characteristics of the one-dimensional model.
The most time-consuming application is Baro with 10.4 seconds. Shallow, Er-
lebacher, and X42 need 3.9, 3.4, and 3.1 seconds respectively, and all other programs
need half a second to be optimized.
In the two-dimensional data mapping problem assuming a single topology, the number
of CPG copies is duplicated, as well as the number of 0-1 integer variables. However,
the number of constraints required to model the problem is more than double because
additional constraints have to be added to relate the two CPG copies. Table 3 shows the
number of edges, hyperedges, constraints, and the total computation time spent to find
the optimal solution.
Programs Baro and Erlebacher require about two minutes to reach a solution,
Shallow, x42, and Rhs require between 32 and 46 seconds, and Tomcatv and Adi
need two and one seconds respectively.
Program      edges  hyper  constr   time
Erlebacher    2718    136    2014  117.6
Rhs            672     74     543   32.7
Tomcatv        496     20     583    2.0
Baro          2968    166    3703  125.7
Shallow       1872     76    2335   46.1

Table 3: Characteristics of the two-dimensional model with constant topology.
Finally, in Table 4 the number of edges, hyperedges, and constraints for the general
two-dimensional model is shown, together with the computation times required to find
the optimal solution.
Program      edges  hyper  constr    time
Erlebacher    8892    272    4065  3156.6
Rhs           2048    148     922   985.6
x42           4304    116    3481  2455.5
Baro          8944    332    7315  6354.5
Shallow       6160    152    4726  1636.7

Table 4: Characteristics of the general two-dimensional model for the selected programs.
In this model, the structure of the minimal cost path problem is harder to solve. Program
Baro requires almost two hours, while programs Erlebacher, x42, and Shallow
need between half an hour and one hour. Program Rhs needs 16 minutes, and the other
programs require just a few seconds.
Discussion
According to our experiments, only a few seconds are required to solve the one-dimensional
data mapping problem. However, in the two-dimensional case, the computation time
required is greater. Note that we decide alignment, distribution, parallelization of loops,
and dynamic changes in this strategy, for all phases of all routines in the program, together
in the same step. The number of data mapping candidates considered becomes 2 210 for
Baro, while the number of candidates for Erlebacher becomes 3 109 . Although the
longest computation time required to find an optimal data mapping was observed to
be up to two hours, it must be considered that the tool provides the optimal solution.
Therefore, this computation time is an investment that can be considered to be paid off
within each program run.
In order to reduce these computation times, note that the longest times are usually
for programs that have been inlined, i.e., programs Baro, Shallow, and Erlebacher.
The complexity of an inlined program becomes greater, as all routines are considered
together. We analyzed each routine of these programs in isolation. One routine from
program Baro requires two minutes, and all other routines need less than half a minute.
The analysis of each routine from program Shallow requires just a few seconds, and
all routines from program Erlebacher need less than one and a half minutes. These
results encourage us to consider inter-procedural analysis as a way to reduce the current
complexity.
Finally, we also observed that linear 0-1 integer programming solvers tend to find the
optimal solution, or at least some near-optimal solutions, at the beginning of the search,
although it requires many more iterations to explore the whole search space. The number
of iterations performed by the solver can be provided by the user as a parameter to limit
the search space. We obtained suboptimal solutions for Baro, Erlebacher, x42, and
Shallow in less than 10 minutes. The estimated performance of these solutions is higher
than 85% of the optimal estimated performance.
6.2 Accuracy of the Predictions
In order to test the accuracy of the predictions given by our model, some of the solutions
predicted were compared to the actual execution of the parallelized program on a Silicon
Graphics Origin 2000 with up to 32 processors. The Origin 2000 is a cache-coherent
non-uniform memory access multiprocessor with physically distributed memory, and a
high capacity 4 Mbyte cache memory for each processor. We distributed the arrays across
the caches, so that caches might act as a first-level distributed memory. In this case, cache
misses are served by memory accesses with higher latency. The programs selected
for these experiments are the Adi integration kernel, the Erlebacher program, the
Shallow benchmark, and the routine Rhs. As before, for the purpose of this evaluation,
programs Erlebacher and Shallow were inlined, and routine Rhs was transformed
into a program. Profiling information was obtained by executing the sequential code on
a single processor of the same Origin 2000 system. In all predictions we assumed a
bandwidth of 100 Mbytes per second.
We performed several experiments, trying different data mapping strategies and changing
the number of processors. Our framework is implemented as part of another automatic
data distribution tool (DDT [3]). This generates a file with the linear 0-1 integer programming
problem, which is the input to a general-purpose solver. From the output of
this solver, we manually generate the parallel code. In order to control the scheduling of
the loop iterations according to the owner-computes rule, we strip-mined the distributed
loops. Details about these loop transformations can be found in [12]. The parallel code is
compiled using the native MIPSpro F77 compiler, but all compiler parallel optimizations
were disabled to avoid any change in our parallelization strategy. In order to generate the
model, all data movement costs were estimated assuming the caching effects, i.e., data is
transferred in cache lines.
In the first experiment, we compare the optimal solution suggested by our tool for the
set of selected programs with the actual execution on the Origin 2000 system, trying
different data mapping strategies and different numbers of processors, for each program.
With this experiment we intend to show that the proposed solution actually yields the
best result (among the executed strategies), and that predictions are close to the actual
measured executions.
The Adi program defines a two-dimensional data space and consists of a sequence of
initialization loops, followed by an iterative loop (with 6 phases) that performs the com-
putation. In each loop iteration, forward and backward sweeps along rows and columns
are done in sequence. The solution suggested by our tool is a dynamic one-dimensional
data mapping, distributing arrays by rows in the first computation flow and by columns
in the second computation flow. The resulting predicted parallel times of the optimal
solution using 2, 4, 8, 16, and 32 processors can be seen in the dotted line of Figure 8.
The solid lines show the measured execution times for the static one-dimensional row and
column distributions, and the dynamic one-dimensional strategy. The predicted parallel
times were performed using a profiled sequential execution time of 13.793 seconds. All
times in the Figure are expressed in seconds. Note that all predictions are within 10%
of the actual measured execution times for the dynamic strategy (except in the execution
with 32 processors, where the code falls into false sharing when arrays are distributed by
rows).
Figure 8: Predicted and measured execution times for Adi (x-axis: number of processors; y-axis: execution time in seconds; curves: measured 1st-dimension, measured 2nd-dimension, measured dynamic, and predicted).
The Erlebacher program is a 3D tridiagonal solver based on the Adi integration
kernel. The inlined program consists of 38 phases that perform symmetric forward and
backward computations along each dimension of four main three-dimensional arrays. In
[10] the authors point out that the best performance achieved for this program was obtained
with static two-dimensional distributions and pipelining computations. However,
pipelining computations are not considered in our model. Therefore the parallelization
strategy suggested by our tool is to distribute the third dimension of the arrays in the
first and second computation flows, and to redistribute before the third computation flow,
leaving the second dimension of the arrays distributed. The dotted line in Figure 9 shows
the predicted parallel times using 2, 4, 8, 16, and 32 processors. Predicted parallel times
for a problem size 128 \Theta 128 \Theta 128 were performed using a profiled sequential execution
time of 5.855 seconds. The solid lines show the measured parallel times for the static
distribution of the first, second, and third dimensions, and the dynamic parallelization
strategy. Note that the actual measured execution times of the dynamic strategy are
within 10% of our predicted times.
Figure 9: Predicted and measured execution times for Erlebacher (x-axis: number of processors; y-axis: execution time in seconds; curves: measured 1st, 2nd, and 3rd dimensions, measured dynamic, and predicted).
The Shallow water equations model defines a set of 512 \Theta 512 sized arrays. The
inlined program consists of 27 phases, most of them within an iterative loop of NCYCLES
iterations. The optimal data mapping strategy suggested by the tool is the static one-dimensional
column distribution of all the arrays. The resulting predicted parallel times
of the optimal data distribution strategy can be seen in Figure 10, together with the
measured parallel times for the static row and column data distributions. Predicted
parallel times were computed using a profiled sequential execution time of 45.152 seconds.
Note that predicted times in this example are within 5% of the actual measured execution
times, although all executions obtain a similar performance.
Figure 10: Predicted and measured execution times for Shallow (x-axis: number of processors; y-axis: execution time in seconds; curves: measured 1st dimension, measured 2nd dimension, and predicted).
The Rhs routine defines a set of 5 \Theta 64 \Theta 64 \Theta 64 four-dimensional arrays. It consists
of four phases (36 loops) performing flux differences in the second, third and fourth di-
rections. The solution suggested by our tool is a dynamic one-dimensional data mapping,
where arrays are distributed in the fourth dimension in the first three phases, and in the
third dimension in the fourth phase. The predicted parallel times of the optimal solution,
together with the measured times for the static distribution of the second, third, and
fourth dimensions, and the dynamic strategy are shown in Figure 11. Predicted parallel
times were computed using a profiled sequential execution time of 165.413 seconds.
In our last experiment, we forced our tool to generate a fixed strategy for the Adi code,
in order to analyze the performance predictions with different data distribution strategies.
In
Table
5 the predicted and measured execution times (in seconds) of several strategies
are listed. Row and Col correspond to the static one-dimensional row and column
Figure 11: Predicted and measured execution times for Rhs (x-axis: number of processors; y-axis: execution time in seconds; curves: measured 2nd, 3rd, and 4th dimensions, measured dynamic, and predicted).
distributions respectively. Dyn is the dynamic one-dimensional strategy, and 2-d is the
squared static two-dimensional data distribution strategy. All these implementations were
predicted and executed with a different number of processors, ranging from 2 to 32.
       Processors       2      4      8     16     32
ROW    Predicted    10.82   9.32   8.58   8.20   8.02
       Measured      9.90   8.87   8.75   8.22  15.11
COL    Predicted     9.88   7.91   6.93   6.43   6.19
       Measured      9.97   7.90   6.93   6.70   6.64
DYN    Predicted     7.68   4.03   2.07   1.05   0.52
       Measured      6.85   4.00   2.26   1.18   1.85
2-D    Predicted     9.88   6.89   3.94   3.48   1.88
       Measured      9.97   6.63   4.13   3.73   2.19

Table 5: Comparison of measured and predicted execution times for row, column, dynamic,
and two-dimensional data mappings with the Adi code.
Predictions were performed using a profiled sequential time for the Adi code of 13.793
seconds. Note that all predictions for each data mapping strategy are within 10% of the
actual measured parallel execution time (except codes that fall into false sharing 3 ).
3 False sharing occurs in executions with 32 processors when arrays are distributed by rows (one-dimensional row distribution, and dynamic distribution in phases where arrays are distributed by rows).
Conclusions
Automatic data distribution tools in the context of distributed memory multiprocessor
systems usually decompose the parallelization problem into three independent steps:
alignment, distribution, and remapping; however, these steps are not really independent.
In addition, most algorithms used to solve each of these steps are based on heuristics. The
work presented in this paper represents the first automatic data mapping and parallelization
prototype that provides an optimal solution, according to our cost and compilation
models. The contributions of this proposal with respect to the previous work are:
ffl Definition of a model that represents the whole data mapping problem. This allows
the alignment, distribution, and remapping problems to be solved within a single
step.
ffl Formulation of a minimal cost path problem that provides a solution to the model.
The use of linear 0-1 integer programming techniques guarantees that the solution
provided is optimal.
Our framework is based on the definition of a single data structure, named the
Communication-Parallelism Graph (CPG), that integrates all the data movement and
parallelism related information inherent in each phase of the program, plus additional information
denoting remapping possibilities between them. The data mappings considered
in the framework can be one or two-dimensional, static or dynamic, BLOCK or CYCLIC, and
take into account the effects of control flow statements between phases. Our cost model
is based on profiling information obtained from a previous serial execution.
Experiments show that the cost model is fairly accurate (usually within 10%) in predicting
the performance of different data mapping strategies. In addition, we have shown
the complexity of our approach in terms of computation time spent to find an optimal
solution. Although in the one-dimensional case the time required to find an optimal solution
to our benchmark set is a matter of seconds, in the general two-dimensional case this
time can increase up to two hours, trading off the quality of the solution and the computation
time of the analysis. However, we have shown that these times can be dramatically
reduced if near-optimal solutions are accepted. In this case our model can succumb to
the same problem as previous work, since some data mappings would be missed from the
search space. In summary, integrating sufficient information to solve automatic data mapping
in a single graph is ambitious; however, an expensive technique can be an important
tool if it is applied selectively.
A large number of additional aspects should be considered in the model definition
in order to extend the capabilities of the framework, and consequently the quality of
the solutions generated. As part of our future work we plan to include in the model
information that reflects data movement optimizations, such as detection and elimination
of redundant communication and overlapping of communication and computation, and
information that estimates the cache effects of data distributions. In addition to this, the
development of an inter-procedural analysis module may reduce the computation time
required to find an optimal solution. The set of solutions considered in our model is
currently limited to those that generate either parallel or sequential loops. As shown in
[4, 22, 10], better solutions can be obtained by handling pipelining computations. This
feature could be modeled in our framework through appropriately weighted hyperedges.
--R
Automatic Computation and Data Decomposition for Multiprocessors.
Global optimizations for parallelism and locality on scalable parallel machines.
Data distribution and loop parallelization for shared-memory multiprocessors
Tools and techniques for automatic data layout: A case study.
A static performance estimator to guide data partitioning decisions.
The alignment-distribution graph
Array distribution in data-parallel programs
Towards compiler support for scalable parallelism using multipartitioning.
Automatic data decomposition for message-passing machines
Automatic Data Distribution for Massively Parallel Processors.
Dynamic data distribution with control flow analysis.
A framework for automatic dynamic data mapping.
Automatic Data Partitioning on Distributed Memory Multicomputers.
Programming for parallelism.
Automatic data layout for High Performance Fortran.
The High Performance Fortran Handbook.
Automatic Data Layout for Distributed Memory Machines.
Optimal and near-optimal solutions for hard compilation problems
Fortran red - a retargetable environment for automatic data layout
Efficient algorithms for data distribution on distributed memory parallel computers.
Index domain alignment: Minimizing cost of cross-referencing between distributed arrays
Compiling communication-efficient programs for massively parallel machines
John Wiley
Automatic selection of dynamic partitioning schemes for distributed-memory multicomputers
An Optimizing Fortran D Compiler for Distributed-Memory Machines
--TR
--CTR
Minyi Guo , Yi Pan , Zhen Liu, Symbolic Communication Set Generation for Irregular Parallel Applications, The Journal of Supercomputing, v.25 n.3, p.199-214, July
Skewed Data Partition and Alignment Techniques for Compiling Programs on Distributed Memory Multicomputers, The Journal of Supercomputing, v.21 n.2, p.191-211, February 2002
Bjorn Franke , Michael F. P. O'Boyle, A Complete Compiler Approach to Auto-Parallelizing C Programs for Multi-DSP Systems, IEEE Transactions on Parallel and Distributed Systems, v.16 n.3, p.234-245, March 2005 | loop parallelization;linear 0-1 integer programming;distribution;distributed-memory multiprocessor;automatic data mapping;performance prediction;redistribution;alignment |
377264 | Local Encoding Transformations for Optimizing OBDD-Representations of Finite State Machines. | Ordered binary decision diagrams are the state-of-the-art representation of switching functions. In order to keep the sizes of OBDDs tractable, heuristics and dynamic reordering algorithms are applied to optimize the underlying variable order. When finite state machines are represented by OBDDs the state encoding can be used as an additional optimization parameter. In this paper, we analyze local encoding transformations which can be applied dynamically. First, we investigate the potential of re-encoding techniques. We then propose the use of an XOR-transformation and show why this transformation is most suitable among the set of all encoding transformations. The presented theoretical framework establishes a new optimization technique for OBDDs. | Introduction
Ordered binary decision diagrams (OBDDs) which have been introduced by Bryant [Bry86]
provide an efficient graph-based data structure for switching functions. The main optimization
parameter of OBDDs is the underlying variable order. In order to find a good order
two techniques were applied so far: the use of heuristics which try to exploit the structure of
a circuit representation (see e.g. [MWBS88]), and dynamic reordering techniques [Rud93].
Unfortunately, there are many applications, in particular in the field of sequential analysis,
where these two optimization techniques for OBDDs reach their limits. Hence, one essential
problem in logic synthesis and verification is to find new techniques to minimize OBDDs in
these applications.
When OBDDs are used to represent finite state machines the OBDD-size does not only
depend on the variable order but also on the state encoding. For a fixed state encoding there
are many finite state machines whose OBDD-representations are large w.r.t. all variable orders
[ATB94]. Therefore the relationship between the OBDD-size and the state encoding
becomes of increasing interest, see e.g. [QCC The importance of this relationship
is underlined by recent ideas to apply heuristic state re-encoding techniques to speed
up a verification process between similar-structured finite state machines [QCC
The underlying general problem of all these efforts is the following: Given the OBDDs
for the next-state and output functions of a finite state machine - if one is interested in
the input/output behavior of the machine, in how far can the internal state encoding be
exploited to minimize OBDD-sizes ? Our approach targets at applying local encoding trans-
formations, i.e. transformations which involve only a limited number of encoding bits. These
in: Proceedings FMCAD'96, LNCS
Supported by DFG-Graduiertenkolleg "Mathematische Optimierung".
transformations can be interpreted as a re-encoding of the symbolic states. The aim is to
minimize OBDD-sizes by the iterated application of local transformations. The advantage
of this approach is that the costs for applying these transformations are still manageable.
The paper is structured as follows: We begin with recalling some important definitions
and point out the principle potential of state re-encodings w.r.t. OBDD-sizes. Then, in
Section 4, we analyze the advantages of local encoding transformations. In Section 5 we
propose the application of the XOR-transformation and show why this transformation is
most promising among the set of all encoding transformations. At the end of the paper we
describe an implementation of this transformation and give some first experimental results
which illustrate the positive impact of the presented ideas.
Preliminaries
2.1 Finite state machines
a finite state machine, where Q is the set of states, I the input
alphabet, O the output alphabet,
the output function and Q 0 the set of initial states. As usual, all components of the state
machine are assumed to be binary encoded. Let p be the number of input bits, n be the
number of state bits and m be the number of output bits. In particular, with
is a function IB n \Theta IB p ! IB n , is a function IB n \Theta IB p ! IB m , and Q 0 is a subset of IB n .
2.2 Binary decision diagrams
Ordered binary decision diagrams (OBDDs) [Bry86] are rooted directed acyclic graphs representing
switching functions. Each OBDD has two sink nodes which are labeled 1 and 0.
Each internal node is labeled by an input variable x i and has two outgoing
edges, labeled 1 and 0 (in the diagrams the 1-edge is indicated by a solid line and the 0-edge
by a dotted line). A linear variable order is placed on the input variables. The variable
occurrences on each OBDD-path have to be consistent with this order. An OBDD computes
a switching function f : IB n ! IB in a natural manner: each assignment to the input
variables x i defines a unique path through the graph from the root to a sink. The label of
this sink defines the value of the function on that input.
The OBDD is called reduced if it does not contain any vertex v such that the 0-edge and
the 1-edge of v leads to the same node, and it does not contain any distinct vertices v and
v 0 such that the subgraphs rooted in v and v 0 are isomorphic. It is well-known that reduced
OBDDs are a unique representation of switching functions f with respect to
a given variable order [Bry86]. The size of an OBDD is the number of its nodes. Several
functions can be represented by a multi-rooted graph called shared OBDD. In the following,
all next-state and output functions are represented by a shared OBDD.
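As a concrete illustration of the two reduction rules (no redundant tests, no isomorphic subgraphs), the following minimal Python sketch builds a reduced OBDD with a unique table. It is illustrative only and is unrelated to the OBDD package used in the experiments later in the paper.

```python
# A minimal OBDD sketch: nodes are hashed in a unique table so that the two
# reduction rules hold by construction.
class OBDD:
    def __init__(self, nvars):
        self.nvars = nvars
        self.table = {}                  # (var, low, high) -> node id
        self.nodes = {0: None, 1: None}  # 0/1 are the two sinks

    def mk(self, var, low, high):
        if low == high:                  # redundant test -> skip the node
            return low
        key = (var, low, high)
        if key not in self.table:        # reuse isomorphic subgraphs
            nid = len(self.nodes)
            self.nodes[nid] = key
            self.table[key] = nid
        return self.table[key]

    def build(self, f, var=1, env=()):
        """Shannon-expand f (a callable on bit tuples) along x_var."""
        if var > self.nvars:
            return 1 if f(env) else 0
        low = self.build(f, var + 1, env + (0,))
        high = self.build(f, var + 1, env + (1,))
        return self.mk(var, low, high)

bdd = OBDD(3)
root = bdd.build(lambda x: x[0] ^ (x[1] & x[2]))
print("root:", root, "internal nodes:", len(bdd.nodes) - 2)
```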
2.3 The transition relation
For a finite state machine M, the characteristic function of its transition relation is defined
by
    T(x, e, y) = \prod_{1 \le i \le n} ( y_i \equiv \delta_i(x, e) ).
Hence, the function T computes the value 1 for a triple (x, e, y) if and only if the state
machine in state x and input e enters the state y. The variables x are called
current-state variables and the variables y are called next-state variables.
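The following small sketch evaluates such a characteristic function pointwise for a hypothetical 2-bit next-state function; it only illustrates the definition, not a symbolic (OBDD-based) construction.

```python
# Sketch: T(x, e, y) as a conjunction of the equivalences y_i <-> delta_i(x, e).
# The 2-bit next-state function below is hypothetical.
def delta(x, e):
    """Hypothetical next-state function: a 2-bit counter with enable bit e."""
    value = (x[0] + 2 * x[1] + e[0]) % 4
    return (value % 2, value // 2)

def T(x, e, y):
    d = delta(x, e)
    return all(y[i] == d[i] for i in range(len(y)))

assert T((1, 0), (1,), (0, 1))      # state 1, enabled -> state 2
assert not T((1, 0), (1,), (1, 0))
```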
3 Motivation: The Potential of Re-encoding
In order to demonstrate how much the size of an OBDD-representation depends on the
choice of the state encoding, let us consider an autonomous counter, a finite state machine
with a very simple structure:
Example 1. An autonomous counter (see for example [GDN92]) with 2^n states q_0, ..., q_{2^n - 1}
is an autonomous (i.e. input-independent) finite state machine with \delta(q_i) = q_{(i+1) mod 2^n}.
Figure 1: Autonomous counter (the states q_0, ..., q_{2^n - 1} form a single cycle; each transition is labeled with its input/output).
Transitions: input/output
The following theorem shows that almost all encodings for the autonomous counter lead
to exponential-size OBDDs, even for their optimal variable order.
Theorem 1. Let e(n) denote the number of n-bit encodings for the autonomous counter
with 2^n states which lead to a (shared) OBDD of size at most 2^n / n w.r.t. their optimal
variable order. Let a(n) denote the number of all possible counter encodings. Then
the ratio e(n)=a(n) converges to zero as n tends to infinity.
The proof of the theorem can be found in the appendix. It is based on ideas of [LL92]
and classical counting results of Shannon [Sha49]. An analogous result can be established
for the characteristic function of the transition relation and its OBDD-size.
Definition 2. An encoding transformation, shortly called a re-encoding, is a bijective
mapping \rho : IB^n \to IB^n that transforms the given state encoding to a new encoding. (For an
example see Figure 2.) If a state s is encoded by a bit-string c \in IB^n, then its new encoding
is ae(c).
Figure 2: Example of an encoding transformation \rho: the original state encodings and the resulting new encodings of the machine's states.
This modification of the internal state encoding does not modify the input/output
behavior of the state machine. The machine with the new encoding is denoted by M'.
Its encoded next-state function, output function and set of initial
states are computed as follows:
    \delta'(x, e) = \rho(\delta(\rho^{-1}(x), e)),    \lambda'(x, e) = \lambda(\rho^{-1}(x), e),    Q'_0 = \rho(Q_0).    (1)
The transition relation of the re-encoded machine M 0 can be obtained from the transition
relation of M as follows:
Lemma 3. Let T(x, e, y) be the characteristic function of the transition relation of M.
Then the characteristic function T'(x, e, y) of the transition relation of M' is
    T'(x, e, y) = \prod_{1 \le i \le n} ( y_i \equiv \rho_i(\delta(\rho^{-1}(x), e)) ).
Therefore T'(x, e, y) can be obtained from T(x, e, y) by the substitutions y_i \mapsto \rho_i^{-1}(y) and x_i \mapsto \rho_i^{-1}(x).
Proof. The lemma is a consequence of the following equivalences:
    T'(x, e, y) = 1 \iff y = \rho(\delta(\rho^{-1}(x), e)) \iff \rho^{-1}(y) = \delta(\rho^{-1}(x), e) \iff T(\rho^{-1}(x), e, \rho^{-1}(y)) = 1.
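Lemma 3 can also be checked exhaustively on a toy machine; in the sketch below the 2-bit machine and the re-encoding are hypothetical examples chosen only for illustration.

```python
# Small exhaustive check of Lemma 3 on a hypothetical 2-bit machine.
from itertools import product

def rho(c):                      # example re-encoding: c1 -> c1 XOR c2
    return (c[0] ^ c[1], c[1])

rho_inv = rho                    # this particular rho is its own inverse

def delta(x, e):                 # hypothetical next-state function
    return (x[0] ^ e[0], x[1] ^ (x[0] & e[0]))

def delta_prime(x, e):           # Equation (1): delta' = rho o delta o rho^-1
    return rho(delta(rho_inv(x), e))

def T(x, e, y, nxt):             # characteristic function for a given delta
    return nxt(x, e) == y

bits = (0, 1)
for x, y in product(product(bits, bits), repeat=2):
    for e in product(bits):
        # Lemma 3: T'(x,e,y) = T(rho^-1(x), e, rho^-1(y))
        assert T(x, e, y, delta_prime) == T(rho_inv(x), e, rho_inv(y), delta)
print("Lemma 3 verified exhaustively for the example machine")
```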
Example 1 (ctd. The large potential of re-encoding techniques can now be demonstrated
at the example of the autonomous counter: There exists an encoding such that the transition
relation of the autonomous counter with 2 n states and n encoding bits has at most 5n \Gamma 1
nodes [TM96] even if the variable order is fixed to x Hence, for each given
encoding of a finite state machine, there exists a re-encoding which leads to OBDDs of linear
size. As according to Theorem 1 most encodings lead to OBDDs of exponential size, the
gain between the original OBDD and the OBDD after a suitable re-encoding is exponential
in most cases. The aim now is to find the suited re-encoding that leads to small OBDD-sizes.
In the previous section we have shown that re-encodings may have a large impact on the
OBDD-size. It is possible that the OBDD becomes much smaller, but in the case of a badly
chosen re-encoding the OBDD could even become much larger. This situation is comparable
to the problem of finding a good variable order for an OBDD. When changing the variable
order of an OBDD, the graph may become much smaller in the best case or much larger in
the worst case. This sensitivity is the main reason why it is hard to find a good re-encoding
or a good variable order.
For the effective construction of good variable orders it has turned out that the most
efficient strategies are based on local exchanges of variables. The presently best strategies
for finding good variable orders dynamically are based on the sifting algorithm of Rudell
[Rud93, PS95]. The main principle of this algorithm is based on a subroutine which finds
the optimum position for one variable, if all other variables remain fixed. This subroutine is
repeated for each variable. There are two main reasons why this strategy works so efficiently:
Bounded size alteration: If one variable x i is moved to another position in the OBDD,
the size of the OBDD cannot change arbitrarily much, in particular it cannot explode.
[BLW95] have shown the following theorem:
Theorem 4. Let P be an OBDD. If a variable x i is moved to a later position in the
variable order, then the size of the resulting OBDD P 0 satisfies
If a variable x i is moved to an earlier position in the variable order, then the size of
the resulting OBDD P 0 even satisfies the relation
Practical studies have shown that in most cases the resulting sizes are even far below
the worst-case estimations. Hence, each application of the above mentioned subroutine
keeps the size of the OBDD manageable. However, this bounded size alteration for
the subroutine does not mean that the optimization potential is limited. The iteration
of this subroutine allows to minimize OBDDs very effectively.
Continuity: The procedure for moving a variable x i to a different position in the order
works continuously: During this process only the variables between the original and
the new position of x i are involved, and all nodes labeled by the remaining variables
remain untouched. In particular, the time complexity of this operation is very small
if x i is moved to an adjacent position, and it increases with the number of variables
between the original and the new position of x i in the variable order.
In the case of re-encoding the situation is analogous. It seems to be very hard to find
the right global re-encoding, whereas it is very promising to combine and iterate operations
with restricted local effect. Our approach to construct local re-encodings ae : IB n ! IB n is
to keep most of the bits fixed (i.e. \rho_i(q_1, ..., q_n) = q_i for most i) and to
In particular, if we vary only 2 bits, we will speak of two-bit re-encodings. In this case it
follows from the worst-case bounds for the synthesis and the substitution of OBDDs that
the OBDDs remain polynomial.
Example 2. The exchange variables re-encoding \rho_{i,j} exchanges the bits q_i and q_j, i.e.
\rho_{i,j}(q_1, ..., q_n) = (q_1, ..., q_{i-1}, q_j, q_{i+1}, ..., q_{j-1}, q_i, q_{j+1}, ..., q_n) (shown for the case i < j; the case i > j is defined analogously).
Obviously, the exchange variables re-encoding has the same effect on the next-state functions
as exchanging the state variables x i and x j in the variable order. From all the (2 n )!
possible encoding transformations n! can be generated by the iterated application of this
transformation type. The inverse mapping ae \Gamma1 from Equation 1 does not affect the size of
the resulting OBDDs as this mapping only causes the renaming of the two functions
Note, that the transformation which exchanges the encodings of two fixed states may
not be seen as a local operation, although the transformation seems to be very simple.
5 The XOR-Transformation
We will now propose XOR-transformations. This transformation is a local re-encoding which
operates on two bits.
Definition 5. An XOR-transformation \rho_{i,j}, 1 \le i, j \le n, i \ne j, replaces the bit q_i by q_i \oplus q_j and leaves all other bits unchanged.
Short: q_i \mapsto q_i \oplus q_j. For an example see Figure 3.
Figure 3: The XOR-transformation q_1 \mapsto q_1 \oplus q_2 (original encodings q_1 q_2 versus the resulting new encodings).
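The effect of Figure 3 on the state codes can be reproduced directly; the following sketch applies q_i \mapsto q_i \oplus q_j to 2-bit codes and also checks that the transformation is self-inverse.

```python
# Sketch reproducing the XOR-transformation q1 -> q1 XOR q2 on 2-bit codes.
def xor_transform(code, i, j):
    """Replace bit i of the tuple `code` by code[i] XOR code[j]."""
    new = list(code)
    new[i] ^= code[j]
    return tuple(new)

codes = [(0, 0), (0, 1), (1, 0), (1, 1)]
for q in codes:
    print(q, "->", xor_transform(q, 0, 1))

# Applying the transformation twice gives the original code back,
# i.e. the XOR-transformation is its own inverse.
assert all(xor_transform(xor_transform(q, 0, 1), 0, 1) == q for q in codes)
```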
Indeed, XOR-transformations provide a solid basis for the design of effective re-encodings
due to the following facts:
1. The number of possible re-encodings generated by the iterated application of XOR-
transformations is much larger than the number of possible variable orders. Thus,
XOR-transformations considerably enlarge the optimization space. On the other hand,
the number of these re-encodings is much smaller than the number of all re-encodings
which makes it possible to keep the search space manageable.
2. The size influence of this transformation is bounded in a reasonable way like in the
case of local changes in the variable order.
3. A precise analysis even shows that an XOR-transformation contains the same asymmetry
as the movement of one variable in the variable order. Namely, the bounds for
the effect of a transformation x i 7! x i \Phi x j depends on the position of x i and x j in the
variable order.
4. The XOR-transformation is in fact the only new possible re-encoding on two variables.
5. The XOR-transformation can be implemented efficiently like an exchange of two variables
in the order.
In the following subsection we will prove these statements.
5.1 Enumeration results
The following combinatorial statements characterize the size of the optimization space provided
by the use of XOR-transformations.
Lemma 6. (1) Let t(n) be the number of possible encoding transformations that can be
generated by the iterated application of XOR-transformations. It holds t(n) = \prod_{i=0}^{n-1} (2^n - 2^i).
(2) The quotient of t(n) and the number of all possible encoding transformations converges
to zero as n tends to infinity.
(3) Let v(n) := (2n)! denote the number of possible variable orders for the transition relation
of an autonomous finite state machine with n state bits. The fraction v(n)=t(n) converges
to zero as n tends to infinity.
Statement 3 says that in the case of autonomous state machines, there are much more
encoding transformations generated by XOR-transformations than variable orders for the
transition relation. This relation also holds when the number of input bits is fixed and the
number of state bits becomes large.
Proof. (1) Obviously, each XOR-transformation is a regular linear variable transformation
over the field ZZ 2 . Moreover, the XOR-transformations provide a generating system for all
regular linear variable transformations. Therefore the state encodings which can be obtained
by iterated XOR-transformations are in 1-1-correspondence with the regular n \Theta n-matrices
over ZZ 2 .
The number of these matrices can be computed as follows: The first row vector b_1 can be
chosen arbitrarily from ZZ_2^n \setminus \{0\}; these are 2^n - 1 possibilities.
The i-th row vector b_i, 2 \le i \le n, can be chosen arbitrarily
from ZZ_2^n \setminus span(b_1, ..., b_{i-1}).
These are 2^n - 2^{i-1} possibilities for the vector b_i. This proves the claimed number.
(2) This statement follows from the relation t(n) \le 2^{n^2} = o((2^n)!), where (2^n)! is the number of all encoding transformations.
(3) It holds v(n) = (2n)! \le (2n)^{2n} = 2^{2n \log_2(2n)}, whereas t(n) \ge 2^{n(n-1)}, so v(n)/t(n) converges to zero.
In particular, the number of possible encoding transformations which can be generated
by the iterated application of XOR-transformations is smaller than 2^{n^2}, which is exactly the
number of all n \times n-matrices over ZZ_2.
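The counts used in the lemma are easy to tabulate: the sketch below computes t(n) as the number of regular n x n matrices over ZZ_2 and compares it with (2^n)! and (2n)! for small n.

```python
# t(n) = number of regular n x n matrices over Z_2, compared with the number
# of all encoding transformations (2^n)! and the number of variable orders (2n)!.
from math import factorial

def t(n):
    result = 1
    for i in range(n):
        result *= (2**n - 2**i)
    return result

for n in range(1, 6):
    all_reencodings = factorial(2**n)   # all bijections of IB^n
    orders = factorial(2 * n)           # variable orders of the transition relation
    print(n, t(n), t(n) / all_reencodings, orders / t(n))
```

Both printed ratios shrink rapidly, in line with statements (2) and (3).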
It follows from the previous proof that all exchanges of two state variables can be simulated
by the iterated application of XOR-transformations.
5.2 Bounded size alteration
Let \rho be the XOR-transformation q_i \mapsto q_i \oplus q_j. Then the inverse transformation is defined
by q_i \mapsto q_i \oplus q_j as well,
i.e. we have \rho^{-1} = \rho. The effect of \rho(\delta(\rho^{-1}(\cdot))) in Equation 1 can be split into two parts:
1. Substitute the current-state variable x_i by x_i \oplus x_j.
2. Replace the function \delta_i by \delta_i \oplus \delta_j.
It does not matter which of these two steps is executed first.
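On next-state functions given as ordinary callables, the two steps look as follows; the concrete 2-bit functions are hypothetical and the sketch ignores primary inputs for brevity.

```python
# Sketch of the two steps of Section 5.2 on functions given as callables:
#   1. substitute x_i by x_i XOR x_j in every function,
#   2. replace delta_i by delta_i XOR delta_j.
def substitute_xor(f, i, j):
    def g(x):
        y = list(x)
        y[i] ^= x[j]
        return f(tuple(y))
    return g

def reencode(deltas, i, j):
    step1 = [substitute_xor(d, i, j) for d in deltas]           # step 1
    step2 = list(step1)
    step2[i] = lambda x, a=step1[i], b=step1[j]: a(x) ^ b(x)    # step 2
    return step2

# hypothetical 2-bit next-state functions (state only, no inputs)
deltas = [lambda x: x[0] ^ x[1], lambda x: x[0] & x[1]]
new_deltas = reencode(deltas, 0, 1)
print([f((1, 0)) for f in new_deltas])
```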
Lemma 7. Let P be the (shared) OBDD for \delta_1, ..., \delta_n and let P' be the (shared) OBDD after
the application of the XOR-transformation q_i \mapsto q_i \oplus q_j. The following holds:
    size(P') = O(size(P)^2)   and   size(P') = \Omega(size(P)^{1/2}).
The upper bound immediately follows from the facts that the substitution of an OBDD P_2
into one variable of an OBDD P_1 leads to an OBDD of size at most O(size(P_1)^2 \cdot size(P_2)),
and that the operation P_1 \oplus P_2 leads to an OBDD of size at most O(size(P_1) \cdot size(P_2)). For
the lower bound it suffices to observe that the inverse transformation is itself an XOR-transformation,
so the same upper bound applies in the reverse direction.
In case of the transition relation both the current-state variables and the next-state
variables have to be substituted; this leads to an analogous polynomial relation between size(P) and size(P'),
where P and P' are the original and the re-encoded OBDD for the transition relation,
respectively.
5.3 Stronger bounds
For a more refined analysis of the XOR-transformation we use the following theorem from
[SW93]. In particular, we will refine the analysis for the substitution of a variable x_i by
x_i \oplus x_j in an OBDD.
Theorem 8. The reduced OBDD representing f with the variable order x_1, ..., x_n contains
as many x_i-nodes as there are different functions f_S, S \subseteq \{x_1, ..., x_{i-1}\}, depending essentially
on x_i (i.e. f_S|_{x_i=0} \ne f_S|_{x_i=1}). Here f_S = f|_{x_1=a_1, ..., x_{i-1}=a_{i-1}},
where a_k = 1 if x_k \in S and a_k = 0
otherwise. 2
Let s_k be the number of nodes labeled by x_k in the OBDD P and s'_k be the number of
nodes labeled by x_k in the OBDD P' which is the result of the transformation.
Theorem 9. The size of an OBDD w.r.t. the variable order x after the application
of the substitution x i 7! x i \Phi x j is bounded from above by
and by
The proof of this theorem can be found in the appendix. It applies ideas from [BLW95],
in which local changes in the variable order are analyzed.
Corollary 10. Let P be an OBDD and P' the resulting OBDD after the substitution x_i \mapsto x_i \oplus x_j.
The analogy between the behavior of the XOR-transformation and the local changes in
the variable order recommends to use XOR-transformations for the optimization of OBDD-
sizes.
The XOR-transformation x_i \mapsto x_i \oplus x_j for the case that x_j precedes x_i in the variable order can be visualized as
shown in Figure 4. Let A and B be the two sub-OBDDs whose roots are the children of an
x_i-node. Consider a path from x_j to x_i. If this path contains the 0-edge of x_j, the subgraph
rooted in x_i remains unchanged. If instead the path contains the 1-edge of x_j, the 0- and
the 1-successor of the x_i-node are exchanged. This modification can prevent subgraph-
isomorphisms in the new sub-OBDDs which are rooted in an x_k-node whose position in the order is between x_j and x_i.
Figure 4: Mutation x_i \mapsto x_i \oplus x_j for x_j preceding x_i in the order. (a) Along a path through the 0-edge of x_j the subgraph rooted in x_i remains unchanged. (b) Along a path through the 1-edge of x_j the successors of the x_i-node are exchanged.
5.4 General two-bit re-encodings
The effect of each two-bit re-encoding can be split into the two-parts "Substitute the two
variables x i and x j by some functions" and "replace the two functions
functions". The variable substitution has an impact on all functions which depend essentially
on x i or x j , whereas the function replacement only affects the functions ffi
The next table shows that all re-encodings which are induced by the bijective
can be obtained by a combination of maximal one XOR-
transformation, an exchange variable transformation and the identity. Hence, beside the
exchange variable transformation merely XOR-transformations are needed to produce all
two-bit re-encodings. We write a two-bit re-encoding which is induced by f as
ae f
22
The substitution x i 7! x i does not affect the size of the OBDD. As x i \Phi x
each of the above 24 transformations has the same effect w.r.t. the OBDD-size as
a combination of the exchange variables transformation, the XOR-transformation and the
identity operation. Moreover, for each of the 24 transformations, a combination of at most
two of the "basis" transformations suffices.
Implementation aspects
In this section we will describe how to implement the XOR-substitution x i 7! x i \Phi x j
efficiently. Our starting point is the consideration of local changes in the variable order. In
order to modify the variable order of OBDDs we iterate exchanging variables in adjacent
levels. Since an exchange of adjacent variables is a local operation consisting only of the
relinking of nodes in these two levels, this can be done efficiently as shown in Figure 5. In
order to move a variable x i behind an arbitrary variable x j in the order, the exchanges of
adjacent variables are iterated.
Figure
5: Exchanging two neighboring variables
Level exchange
In case of the XOR-operation and adjacent variables x i and x j , we can proceed analogous
to the level exchange. Figure 6 shows the case where x j is the direct successor of x i in the
order. The case where x i is the direct successor of x j in the order works analogously. If x i
and x j are not adjacent, it would of course be helpful if we could simulate the substitution
by a sequence of XOR-substitutions on adjacent variables. Unfortunately,
this straightforward idea does not work, as it would require operations in the intermediate
steps which influence more than two adjacent levels.
Figure 6: Performing x_i \mapsto x_i \oplus x_j for two neighboring variables x_i, x_j (the nodes of the two adjacent levels are relinked; r, h, g, s denote the sub-OBDDs below them).
A method that works and is only slightly more expensive than the exchange of two non-adjacent
variables is the following: First, shift the variable x i to a new position in the order
which is adjacent to x j . Then perform the XOR-operation, and then shift the variable x i
back to its old position. This technique retains the locality of the operation, as only nodes
with a label x k are influenced whose position in the order is between x i and x j .
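Put together, the procedure can be summarized as in the following high-level sketch; the manager interface (position, swap_adjacent, xor_adjacent) is an assumed abstraction for illustration, not the API of an existing OBDD package.

```python
# High-level sketch of the implementation strategy described above, written
# against a hypothetical OBDD-manager interface:
#   mgr.position(v)      -> current level of variable v (0 = topmost)
#   mgr.swap_adjacent(k) -> exchange the variables at levels k and k+1
#   mgr.xor_adjacent(i,j)-> local relinking of Figure 6 for adjacent x_i, x_j
def xor_transform_in_manager(mgr, i, j):
    """Perform the substitution x_i -> x_i XOR x_j on all functions in mgr."""
    original_pos = mgr.position(i)
    # 1. shift x_i through the order until it is adjacent to x_j
    while abs(mgr.position(i) - mgr.position(j)) > 1:
        if mgr.position(i) < mgr.position(j):
            mgr.swap_adjacent(mgr.position(i))      # move x_i one level down
        else:
            mgr.swap_adjacent(mgr.position(i) - 1)  # move x_i one level up
    # 2. apply the local relinking on the two adjacent levels
    mgr.xor_adjacent(i, j)
    # 3. shift x_i back to its original position
    while mgr.position(i) != original_pos:
        if mgr.position(i) < original_pos:
            mgr.swap_adjacent(mgr.position(i))
        else:
            mgr.swap_adjacent(mgr.position(i) - 1)
```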
6 Do All FSM-Descriptions Profit From XOR-Re-encodings?
In principle, the applicability of the XOR-transformation is not restricted to the use of
OBDDs as underlying data structure. It can also be applied to other data structures for
Boolean functions. However, the strong relationship between the XOR-transformation and
local changes in the variable order like in the case of OBDDs does not always transfer to
other representations. We will demonstrate this effect on OFDDs.
Ordered functional decision diagrams (OFDDs) [KSR92] are a modification of OBDDs
which seem to be more compact for arithmetic functions. Each node v with label x i in an
OBDD represents a Shannon decomposition f_v = \overline{x_i} g + x_i h,
whereas each node v with label x_i in an OFDD represents a Reed-Muller decomposition f_v = g \oplus x_i h.
In both decompositions the functions g and h are independent of x i and are the functions
which are represented by the subgraphs rooted in the two successor nodes of v.
It has been shown in [BLW95] that local changes in the variable order have the same
effect on OFDDs like on OBDDs. In particular, the exchange of two variables x i and x j in
the order only affects the nodes of an OBDD resp. OFDD which are labeled by a variable
whose position in the order is between x i and x j . These observations justify the notions
of local changes. From the proof of Theorem 9 it follows that the XOR-transformation for
OBDDs has also this pleasant local property. However, in spite of the fact that the Reed-Muller
decomposition seems to operate well with XOR-transformations, the substitution x_i \mapsto x_i \oplus x_j on OFDDs
does not have the local property like in the case of OBDDs.
Figure 7: OFDD for f = g \oplus x_2 r \oplus x_1 (h \oplus x_2 s).
To prove this statement, consider the OFDD in Figure 7 which represents the function
f = g \oplus x_2 r \oplus x_1 (h \oplus x_2 s) for some functions g, h, r, s independent of x_1 and x_2. The function f'
which results from the substitution x_1 \mapsto x_1 \oplus x_2 satisfies
    f' = g \oplus x_2 (r \oplus h \oplus s) \oplus x_1 (h \oplus x_2 s).
Hence, there must exist a node in the OFDD for f' representing the function h \oplus r \oplus s. As
h, r and s are arbitrary functions, the substitution operation x i 7! x i \Phi x j does not have
the local property. However, as the \Phi-operation is a polynomial operation on OFDDs, the
result of the substitution remains polynomial.
A tight relationship between our re-encoding techniques and specific OBDD-variants is
the following: In a more general setting, the concept of domain transformations has been
proposed for the manipulation of switching functions [BMS95, FKB95]. The corresponding
variants of OBDDs are called TBDDs. In the following we show that our encoding
transformations can also be interpreted as TBDD-transformations.
Definition. Let f : IB^n \to IB be a switching function, and let \tau : IB^n \to IB^n be
a bijective mapping. A \tau-TBDD-representation of f is an OBDD-representation of f \circ \tau,
where \circ denotes the composition of functions.
It turns out that every re-encoding function \rho defines a transformation within the TBDD-
concept. However, the OBDDs for the next-state functions of a re-encoded machine with
re-encoding function \rho are not isomorphic to the \rho-TBDDs, as in the OBDDs of the re-encoded
machine, the transformation \rho^{-1} is also involved (see Equation (1) in section 3).
7 Experimental Results
In this section we present some very first experimental results on the extended optimization
techniques for OBDDs. We built up some routines on top of the OBDD-package of D. Long
and used the ISCAS89 benchmark circuits s1423, s5378, s9234 which have a large number
of state bits and have also formed the set of examples in [RS95]. Each optimization run
consists of three phases: First, we applied Rudell's sifting algorithm [Rud93] for finding a
good variable order. Then some minimization based on XOR-transformations is performed.
Finally, sifting is applied once more to re-establish a suitable variable order. The table shows
the obtained shared OBDD-sizes of the next-state functions in comparison to the sizes that
were obtained without the minimization by XOR-transformations.
(Table: for each circuit, the number of state bits and the shared OBDD-size of the next-state functions without and with the XOR-based minimization.)
The minimization based on XOR-transformation works as follows: In a preprocessing
step we compute promising pairs (i; j) for an XOR-transformation. The heuristic criteria
for considering a pair (i; j) as promising are:
1. the next-state functions have a nearly equal support, or
2. the variables x i and x j appear in nearly the same functions.
Then, as long as improvements are possible, the best XOR-transformation among these
pairs are applied. In order to avoid the expensive computations
this step if the variable substitution x i 7! x i \Phi x j yields a good intermediate result.
It must be admitted that in our experiments, the running times are significantly higher
than the running times for pure sifting. This is due to a non-optimal implementation of
the XOR-transformation and due to the large number of performed XOR-transformations.
However, we think that intensive studies of different strategies (analogous to the intensive
and finally quite successful studies of the variable order) should be able to improve these
results by far. Hence, we think that these experimental results underline the optimization
potential of the XOR-transformation.
An efficient implementation of the XOR-transformation and the construction of effective
minimization strategies is in progress.
8 Conclusion and Future Work
We have proposed and analyzed new re-encoding techniques for minimizing OBDDs. In
particular, we have proposed the XOR-transformation and shown that this transformation
is in fact the only new transformation on two state variables. This transformation can in
certain cases significantly enrich the set of basic operations for the optimization of OBDDs.
In the future we propose to study heuristics for choosing the right transformation pairs
and efficient combinations of variable reordering techniques with the new proposed trans-
formations. Furthermore, the dynamic application of the new re-encoding technique in the
traversal of a finite state machine seems promising: It helps to reduce the OBDD-sizes for
the set of reached states and also to reduce the efforts of the image computations.
9
Acknowledgements
We wish to thank Stefan Krischer, Jan Romann, Anna Slobodov'a and Fabio Somenzi for
interesting discussions and many valuable comments.
--R
BDD variable ordering for interacting finite state machines.
Simulated annealing to improve variable orderings for OBDDs.
BDD minimization by truth table permutations.
Sequential Logic Testing and Verifi- cation
Multilevel logic synthesis based on functional decision diagrams.
On the OBDD-representation of general Boolean functions
Logic verification using binary decision diagrams in a logic synthesis environment.
Who are the variables in your neighborhood.
Incremental FSM re-encoding for simplifying verification by symbolic traversal
Dynamic variable ordering for ordered binary decision diagrams.
The synthesis of two-terminal switching circuits
State encodings and OBDD-sizes
--TR
Graph-based algorithms for Boolean function manipulation
Sequential logic testing and verification
On the OBDD-Representation of General Boolean Functions
BDD variable ordering for interacting finite state machines
Efficient OBDD-based boolean manipulation in CAD beyond current limits
Who are the variables in your neighborhood
Dynamic variable ordering for ordered binary decision diagrams
Linear sifting of decision diagrams
Algorithms and Data Structures in VLSI Design
On the Influence of the State Encoding on OBDD-Representations of Finite State Machines
--CTR
Carla Piazza , Alberto Policriti, Ackermann encoding, bisimulations and OBDDs, Theory and Practice of Logic Programming, v.4 n.5-6, p.695-718, September 2004 | local transformation;OBDD;state encoding;finite state machine;ordered binary decision diagram |
377806 | Data locality enhancement by memory reduction. | In this paper, we propose memory reduction as a new approach to data locality enhancement. Under this approach, we use the compiler to reduce the size of the data repeatedly referenced in a collection of nested loops. Between their reuses, the data will more likely remain in higher-speed memory devices, such as the cache. Specifically, we present an optimal algorithm to combine loop shifting, loop fusion and array contraction to reduce the temporary array storage required to execute a collection of loops. When applied to 20 benchmark programs, our technique reduces the memory requirement, counting both the data and the code, by 51% on average. The transformed programs gain a speedup of 1.40 on average, due to the reduced footprint and, consequently, the improved data locality. | INTRODUCTION
Compiler techniques, such as tiling [29, 30], improves temporal
data locality by interchanging the nesting order of time-
iterative loops and array-sweeping loops. Unfortunately,
data dependences in many programs often prevent such loop
interchange. Therefore, it is important to seek locality enhancement
techniques beyond tiling.
In this paper, we propose memory reduction as a new approach
to data locality enhancement. Under this approach,
we use the compiler to reduce the size of the data repeatedly
referenced in a collection of nested loops. Between
(Figure 1: Example 1 -- (a) the original loops L1 and L2; (b) L2 right-shifted by one iteration; (c) L1 and L2 fused; (d) array A contracted to the scalars a1 and a2.)
their reuses, the data will more likely remain in higher-speed
memory devices, such as the cache, even without loop inter-
change. Specifically, we present an optimal algorithm to
combine loop shifting, loop fusion and array contraction to
contract the number of dimensions of arrays. For example,
a two-dimensional array may be contracted to a single
dimension, or a whole array may be contracted to a scalar.
The opportunities for memory reduction exist often because
the most natural way to specify a computation task may
not be the most memory-efficient, and because the programs
written in array languages such as F90 and HPF are often
memory inefficient. Consider an extremely simple example
(Example 1 in Figure 1(a)), where array A is assumed dead
after loop L2. After right-shifting loop L2 by one iteration
Figure
1(b)), L1 and L2 can be fused (Figure 1(c)). Array
A can then be contracted to two scalars, a1 and a2, as Figure
1(d) shows. (As a positive side-effect, temporal locality
of array E is also improved.) The aggressive fusion proposed
here also improves temporal data locality between different
loop nests.
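The shift-fuse-contract pattern of Example 1 can be mimicked on an analogous computation; since the exact statements of Figure 1 are not reproduced above, the loop bodies below are only an assumed stand-in that exhibits the same reuse structure.

```python
# Sketch of the Example-1 transformation on an analogous computation:
# L1 fills a temporary array a, L2 consumes a[i-1] and a[i].  After shifting
# L2 by one iteration and fusing, the array contracts to two scalars a1, a2.
import random
n = 8
x = [random.random() for _ in range(n + 1)]

# original version: temporary array of size n+1
a = [0.0] * (n + 1)
for i in range(n + 1):                        # L1
    a[i] = 2.0 * x[i]
e_ref = [a[i] + a[i + 1] for i in range(n)]   # L2

# fused, contracted version: only two scalars survive
e = [0.0] * n
a1 = 2.0 * x[0]
for i in range(n):                            # fused L1/L2, L2 shifted by one
    a2 = 2.0 * x[i + 1]
    e[i] = a1 + a2
    a1 = a2

assert e == e_ref
```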
In [8], Fraboulet et al. present a network flow algorithm for
memory reduction based on a retiming theory [16]. Given a
perfect nest, the retiming technique shifts a number of iterations
either to the left or to the right for each statement in
the loop body. Different statements may have different shifting
factors. Three issues remain unresolved in their work:
Their algorithm assumes perfectly-nested loops and it
applies to one loop level only. Loops in reality, how-
ever, are mostly of multiple levels and they can be arbitrarily
nested. For perfectly-nested loops, although
one may apply their algorithm one level at a time, such
an approach does not provably minimize the memory
requirement. Program transformations such as loop
coalescing can coalesce multiple loop levels into a single
level. Unfortunately, however, their algorithm is
not applicable to the resulting loop, because the required
loop model is no longer satised.
Since their work does not target data locality, the relationship
between memory minimization and local-
ity/performance enhancement has not been studied.
In general, minimization of memory requirement does
not guarantee better locality or better performance.
Care must be taken to control the scope of loop fusion,
lest the increased loop body may increase register spills
and cache replacements, even though the footprint of
the whole loop nest may be reduced.
There are no experimental data to show that memory
requirement can actually be reduced using their
algorithm. Moreover, since their algorithm addresses
memory minimization only, there have been no experimental
data to verify our conjecture that reduced
memory requirement can result in better locality and
better performance, especially if the scope of loop fusion
is under careful control.
In this paper, we make the following main contributions.
We present a network flow algorithm which provably minimizes the memory requirement for multi-dimensional cases. We handle imperfectly nested loops, which contain a collection of perfect nests, by a combination of loop shifting, loop fusion and array contraction. We completely reformulate the network flow so that it exactly models the problem for the multi-dimensional general case. The work in [8] could also be applied to our program model if loop fusion is applied first, possibly enabled by loop shifting. However, our new algorithm is preferable because (1) for multi-dimensional cases, our algorithm is optimal and polynomial-time solvable with the same complexity as the heuristic in [8], and (2) our algorithm models imperfectly-nested loops directly.
For the general case, we propose a heuristic to control fusion such that the estimated numbers of register spills and cache misses are not greater than those in the original loop nests. Even though the benchmarking cases tested so far are too small to utilize such a heuristic, it can be important for bigger cases.
Many realistic programs may not immediately fit our program model, even though it is already more general than that in [8]. We use a number of compiler algorithms to transform programs in order to fit the model.
We have implemented our memory reduction technique in our research compiler and apply it to 20 benchmark programs in the experiments. The results not only show that the memory requirement can be reduced substantially, but also that such a reduction can indeed lead to better cache locality and faster execution in most of these test cases. On average, the memory requirement for those benchmarks is reduced by 51%, counting both the code and the data, using the arithmetic mean. The transformed programs have an average speedup of 1.40 (using the geometric mean) over the original programs.
In the rest of this paper, we present preliminaries in Section 2. We formulate the network flow problem and prove its complexity in Section 3. We present controlled fusion and discuss enabling techniques in Section 4. Section 5 provides the experimental results. Related work is discussed in Section 6 and the conclusion is given in Section 7.
2. PRELIMINARIES
In this section, we introduce some basic concepts and present the basic idea for our algorithm. We present our program model in Section 2.1 and introduce the concept of loop dependence graph in Section 2.2. We make three assumptions for our algorithm in Section 2.3. In Section 2.4, we illustrate the basic idea for our algorithm. In Section 2.5, we introduce the concept of reference window, on which our algorithm is based. Lastly, in Section 2.6, we show how the original loop dependence graph can be simplified while preserving the correctness of our algorithm.
2.1 Program Model
We consider a collection of loop nests, L1, L2, ..., Lm (m >= 1), in their lexical order, as shown in Figure 2(a). The label L_i denotes a perfect nest of loops with indices L_{i,1}, L_{i,2}, ..., L_{i,n}, starting from the outermost loop. (In Example 1, i.e. Figure 1(a), we have m = 2 and n = 1.) Loop L_{i,j} has the lower bound l_{i,j} and the upper bound u_{i,j}, respectively, where l_{i,j} and u_{i,j} are loop invariants. For simplicity of presentation, all the loop nests L_i, 1 <= i <= m, are assumed to have the same nesting level n. If they do not, we can apply our technique to different loop levels incrementally [27]. Other cases which do not strictly satisfy this requirement are transformed to fit the model using techniques such as code sinking [30].
The array regions referenced in the given collection of loops
are divided into three classes:
An input array region is upwardly exposed to the beginning
of L1 .
An output array region is live after Lm .
A local array region does not intersect with any input
or output array regions.
By utilizing the existing dependence analysis, region analysis and liveness analysis techniques [4, 11, 12, 19], we can compute input, output and local array regions efficiently. Note that input and output regions can overlap with each other. In Example 1 (Figure 1(a)), E[0:N] is both the input array region and the output array region, and A[1:N] is the local array region.
[Figure 2: The original and the transformed loop nests. (a) the original collection of loop nests; (b) after loop shifting; (c) after loop coalescing; (d) after loop fusion with coalescing; (e) after loop fusion without coalescing.]
Figure 3(a) shows a more complex example
(Example 2), which resembles one of the well-known Livermore loops. In Example 2, where each declared array is two-dimensional, ZZ, ZA[1,2:KN] and ZB[2:JN,KN+1] are input array regions; ZP, ZR, ZQ and ZZ are output array regions; and ZA[2:JN,2:KN] and ZB[2:JN,2:KN] are local array regions.
Figure 2(b) shows the code form after loop shifting but before loop fusion, where p_j(L_i) represents the shifting factor for loop L_{i,j}. In the rest of this paper, we assume that loops are coalesced into single-level loops [30, 27] after loop shifting but before loop fusion. Figure 2(c) shows the code form after loop coalescing but before loop fusion, where l_i and u_i are the lower and the upper loop bounds of the coalesced loop nest L_i. Figure 2(d) shows the code form after loop fusion. The loops are coalesced to ease code generation for general cases. However, in most common cases, loop coalescing is unnecessary [27]. Figure 2(e) shows the code form after loop fusion without loop coalescing applied. Array contraction will then be applied to the code shown in either Figure 2(d) or Figure 2(e).
2.2 Loop Dependence Graph
We extend the definitions of the traditional dependence distance vector and dependence graph [14] to a collection of loops as follows.
Definition 1. Given a collection of loop nests, L1, L2, ..., Lm, as in Figure 2(a), if a data dependence exists from iteration (i1, ..., in) of loop L1 to iteration (j1, ..., jn) of loop L2, we say the distance vector is (j1 - i1, ..., jn - in).
Definition 2. Given a collection of loop nests, L1, L2, ..., Lm, a loop dependence graph (LDG) is a directed graph G = (V, E) such that each node in V represents a loop nest L_i. Each directed edge, e = <L_i, L_j>, represents a data dependence (flow, anti- or output dependence) from L_i to L_j.
[Figure 3: Example 2 and its original and simplified loop dependence graphs. (a) the source loops L1 through L4; (b) the original LDG; (c) the simplified LDG.]
The edge e is annotated by a distance vector dv(e).
For each dependence edge e, if its distance vector is not constant, we replace it with a set of edges as follows. Let S be the set of dependence distances that e represents. Let d0 be the lexicographically minimum distance in S. Let S1 be the subset of S containing every vector d1 (d1 distinct from d0) that is lexically neither smaller than nor equal to any other vector in S. We replace the original edge e with (|S1| + 1) edges, annotated by d0 and by the vectors in S1, respectively. (Following [29, 30], a vector u is lexicographically greater than a vector v if the first nonzero component of u - v is positive.)
Figure 3(b) shows the loop dependence graph for the example in Figure 3(a), without showing the array regions. As an example, the flow dependence from L1 to L3 with distance vector (0, 1) is due to the array region ZA[2:JN, 2:KN]. In Figure 3(b), where multiple dependences of the same type (flow, anti- or output) exist from one node to another, we use one arc to represent them all in the figure. All associated distance vectors are then marked on this single arc.
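A loop dependence graph of this kind can be represented compactly as an adjacency list whose edges carry a dependence kind and a distance vector. The Python sketch below is purely illustrative: only the two L1 to L3 annotations (0,0) and (0,1) are taken from the text's discussion of Figure 3, and the dependence kinds and the remaining edge are assumptions.

```python
# A minimal LDG sketch: nodes are loop nests, edges carry a dependence
# kind and a distance vector.  Most of the data below is hypothetical.
from collections import defaultdict

class LDG:
    def __init__(self):
        self.edges = defaultdict(list)   # node -> [(dst, kind, dv), ...]

    def add_dep(self, src, dst, kind, dv):
        """kind is 'flow', 'anti' or 'output'; dv is a distance vector."""
        self.edges[src].append((dst, kind, tuple(dv)))

    def deps_between(self, src, dst):
        return [(k, dv) for d, k, dv in self.edges[src] if d == dst]

g = LDG()
g.add_dep('L1', 'L3', 'flow', (0, 1))    # annotation taken from the text
g.add_dep('L1', 'L3', 'flow', (0, 0))    # annotation taken from the text
g.add_dep('L2', 'L4', 'flow', (1, 0))    # made-up edge for illustration
print(g.deps_between('L1', 'L3'))
```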
2.3 Assumptions
We make the following three assumptions in order to simplify our formulation in Section 3.
Assumption 1. The loop trip counts for perfect nests L_i and L_j are equal at the corresponding loop level h, 1 <= h <= n. This can also be stated as u_{i,h} - l_{i,h} = u_{j,h} - l_{j,h} for all 1 <= i, j <= m.
Assumption 1 can be satisfied by applying techniques such as loop peeling and loop partitioning. If the difference in the iteration counts is small, loop peeling should be quite effective. Otherwise, one can partition the iteration spaces of certain loops into equal pieces.
Throughout this paper, the loop trip count of loop L_i at level h is treated as constant or symbolically constant w.r.t. the program segment under consideration, and the number of accumulated level-n loop iterations executed in one level-h loop iteration is derived from these trip counts. We also count, for each loop L_i, the number of static write references due to local array regions in L_i, and we arbitrarily assign each static write reference in L_i a number k in order to distinguish them. The loops in Figure 3(b) serve as a running example.
Assumption 2. At every loop level h, the sum of the absolute values of all dependence distances in the loop dependence graph G = (V, E) is less than one-fourth of the trip count of a loop at level h, for all edges e_k in E annotated with dependence distance vectors dv(e_k).
Assumption 2 is reasonable because, for most programs, the constant dependence distances are generally very small. If non-constant dependence distances exist, the techniques discussed in Section 4.2, such as loop interchange and circular loop skewing, may be utilized to reduce such dependence distances.
(In the rest of this paper, the term "a static write reference" means "a static write reference due to local array regions".)
[Figure 4: An illustration of memory minimization. (a) the iteration spaces of three loop nests, with flow dependences d1 and d2, after coalescing but before fusion; (b) the common iteration space after fusion.]
Assumption 3. For each static write reference r, each instance of r writes to a distinct memory location. There should be no IF-statement within the loops to guard the statement which contains the reference r. Different static references should write to different memory locations.
If a static write reference does not write to a distinct memory location in each loop iteration, we apply scalar or array expansion to this reference [30]. Later on, our technique should minimize the total size of the local array regions. In case of IF statements, we assume both branches will be taken. In [27], we discussed the case in which the regions written by two different static write references are the same or overlap with each other.
2.4 Basic Idea
Loop shifting is applied before loop fusion in order to honor all the dependences. We associate an integer shifting vector p(L_i) with each loop nest L_i in the loop dependence graph, where its component p_k(L_i) is the shifting factor of L_i at loop level k (Figure 2(b)). For each dependence edge from L_i to L_j with the distance vector dv, the new distance vector after shifting is p(L_j) + dv - p(L_i). Our memory minimization problem, therefore, reduces to the problem of determining the shifting factor p_j(L_i) for each loop L_{i,j}, such that the total temporary array storage required is minimized after all loops are coalesced and legally fused.
Let v = p(L_j) + dv - p(L_i) be the distance vector of dv after loop shifting. In this paper, v1 . v2 denotes the inner product of v1 and v2. After loop coalescing, the distance vector v is mapped to a scalar, its inner product with the vector of accumulated trip counts, which we call the coalesced dependence distance. In order to make loop fusion legal, every coalesced dependence distance must be nonnegative; the legality of our transformation stands on this condition.
The key to memory minimization is to count the number of simultaneously live local array elements after transformation (loop shifting, loop coalescing and loop fusion). As an example, Figure 4(a) shows part of the iteration space for three loop nests after loop coalescing but before loop fusion. The rectangle bounds the iteration space for each loop nest. Each point in the figure represents one iteration. The two arrows represent the only two flow dependences, d1 and d2, which are due to the same static write reference, say r1. That is, r1 is the source of d1 and d2.
After loop fusion, all the iteration spaces from different loop nests map to a common iteration space. Figure 4(b) shows how three separate iteration spaces map to a common one. Based on Assumption 3, the coalesced dependence distance of v also represents the number of simultaneously live variables due to v in this common iteration space. In Figure 4(b), the number of simultaneously live variables is 1 for d1 and 3 for d2.
However, there could be overlaps between the simultaneously live local array elements due to different dependences. In Figure 4(b), the simultaneously live array elements for dependences d1 and d2 overlap with each other. In this case, the number of simultaneously live local array elements due to the static write reference r1 is the greater of the two due to dependences d1 and d2, i.e., 3 in this case. Given a collection of loops, after fusion, the total number of simultaneously live local array elements equals the sum of the numbers of simultaneously live local array elements due to each static write reference.
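The bookkeeping just described can be sketched in a few lines of Python: given shifting vectors, distance vectors and per-level trip counts, compute the shifted distance, the coalesced scalar distance, check fusion legality, and sum per-write-reference maxima as the memory estimate. All concrete numbers below are illustrative assumptions for a two-level nest, not the paper's Example 2 values.

```python
# Sketch of the Section 2.4 arithmetic; all numbers are made up.
def shifted(dv, p_src, p_dst):
    """New distance vector after shifting: p(L_j) + dv - p(L_i)."""
    return tuple(pj + d - pi for pj, d, pi in zip(p_dst, dv, p_src))

def coalesced(dv, trips):
    """Scalar distance after coalescing a 2-level nest: the inner
    product of dv with (inner trip count, 1)."""
    weights = (trips[1], 1)
    return sum(w * d for w, d in zip(weights, dv))

trips = (100, 50)                    # assumed trip counts per loop level
p = {'L1': (0, 0), 'L2': (0, 1)}     # assumed shifting vectors
# flow dependences grouped by the static write reference they come from
deps = {'r1': [('L1', 'L2', (0, 1)), ('L1', 'L2', (0, 3))]}

memory = 0
for ref, edges in deps.items():
    live = []
    for src, dst, dv in edges:
        d = coalesced(shifted(dv, p[src], p[dst]), trips)
        assert d >= 0, "negative coalesced distance: fusion would be illegal"
        live.append(d)
    memory += max(live)              # overlaps: take the max per reference
print("simultaneously live local elements:", memory)
```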
2.5 Reference Windows
In [9], Gannon et al. use a reference window to quantify the minimum cache footprint required by a dependence with a loop-invariant distance. We use the same concept to quantify the minimum temporary storage needed to satisfy a flow dependence.
Definition 3. (from [9]) The reference window W(X, t) for a dependence S1 -> S2 on a variable X at time t is defined as the set of all elements of X that are referenced by S1 at or before t and will also be referenced (according to the dependence) by S2 after t.
In Figure 1(a), the reference window due to the flow dependence (from L1 to L2 due to array A) at the beginning of each loop L2 iteration is {A(I), A(I+1), ..., A(N)}. Its reference window size ranges from 1 to N. In Figure 1(c), the reference window due to the flow dependence (caused by array A) at the beginning of each loop iteration is {A(I-1)}. Its reference window size is 1.
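These window sizes can be checked by brute force: record which already-produced elements of A are still pending a read at the start of each iteration. The sketch below reuses the assumed Example-1-like bodies from the earlier sketch (L1 writes A(I); L2 reads A(I) and A(I+1)); with those assumed bodies the unfused window ranges from 2 to N rather than 1 to N, since the real Figure 1 bodies are not recoverable from the text.

```python
# Brute-force reference-window check in the spirit of Definition 3.
def window_at_iteration_starts(iterations):
    """iterations: list of per-iteration event lists [('W'|'R', set), ...].
    Returns, at the start of each iteration, how many already-written
    elements are still needed by some not-yet-executed read."""
    flat = [ev for it in iterations for ev in it]
    sizes, written, pos = [], set(), 0
    for it in iterations:
        pending = frozenset().union(*[e for k, e in flat[pos:] if k == 'R'])
        sizes.append(len(written & pending))
        for kind, elems in it:
            if kind == 'W':
                written |= elems
            pos += 1
    return sizes

N = 6
# Unfused: all L1 iterations first, then all L2 iterations.
unfused = [[('W', {I})] for I in range(1, N + 1)] + \
          [[('R', {I, I + 1})] for I in range(1, N)]
# Shifted and fused: iteration I runs L1's body for I, then L2's for I-1.
fused = [[('W', {1})]] + \
        [[('W', {I}), ('R', {I - 1, I})] for I in range(2, N + 1)]
print(window_at_iteration_starts(unfused)[N:])  # [6, 5, 4, 3, 2]
print(window_at_iteration_starts(fused))        # [0, 1, 1, 1, 1, 1]
```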
Next, we extend Definition 3 to a set of flow dependences as follows.
Definition 4. Given flow dependence edges e1, e2, ..., es, suppose their reference windows at time t are W1, W2, ..., Ws, respectively. We define the reference window of {e1, e2, ..., es} at time t as the union W1 ∪ W2 ∪ ... ∪ Ws.
Since the reference window characterizes the minimum memory required to carry a computation, the problem of minimizing the memory required for the given collection of loop nests is equivalent to the problem of choosing loop shifting factors such that the loops can be legally coalesced and fused and that, after fusion, the reference window size of all flow dependences due to local array regions is minimized. Given a collection of loop nests which can be legally fused, we need to predict the reference window after loop coalescing and fusion.
Definition 5. For any loop node L_i (in an LDG) which writes to local array regions R, suppose iteration (j1, ..., jn) becomes iteration j after loop coalescing and fusion. We define the predicted reference window of L_i in iteration (j1, ..., jn) as the reference window of all flow dependences due to R at the beginning of iteration j of the coalesced and fused loop. Suppose the predicted reference window of some iteration (j1, ..., jn) has the largest size among those due to R; we define its size as the predicted reference window size of the entire loop L_i due to R. We define the predicted reference window due to a static write reference r in L_i as the predicted reference window of L_i due to the array regions written by r. (For convenience, if L_i writes to nonlocal regions only, we define its predicted reference window to be empty.)
Based on Definition 5, we have the following lemma.
Lemma 1. The predicted reference window size for the kth static write reference r in L_i must be no smaller than the predicted reference window size for any flow dependence due to r.
Proof. The predicted reference window size for any flow dependence is no smaller than the minimum memory size required to carry the computation for that dependence. The predicted reference window size for the kth static write reference r in L_i should be no smaller than the memory size required to carry the computation for all flow dependences due to r.
Theorem 1. Minimizing the memory requirement is equivalent to minimizing the predicted reference window size of all flow dependences due to local array regions.
Proof. By Definition 5 and Lemma 1.
Given a dependence with a distance vector dv after shifting, its inner product with the vector of accumulated trip counts is the dependence distance after loop coalescing but before loop fusion, which we also call the coalesced dependence distance. Due to Assumption 3, the coalesced dependence distance also represents the predicted reference window size, both in the coalesced iteration space and in the original iteration space.
Lemma 2. Loop fusion is legal if and only if all coalesced dependence distances are nonnegative.
Proof. This is to preserve all the original dependences.
Take loop node L2 in Figure 3(c) as an example. The predicted reference window size of L2 due to the static write reference ZB(J, K) is the same as the predicted reference window size of L2, since ZB(J, K) is the only write reference in L2.
2.6 LDG Simplification
The loop dependence graph can be simplified by keeping only the dependence edges necessary for memory reduction. The simplification process is based on the following three claims.
Claim 1. Any dependence from L_i to itself is automatically preserved after loop shifting, loop coalescing and loop fusion. This is because we are not reordering the computation within any loop L_i.
Claim 2. Among all dependence edges from L_i to L_j, suppose that the edge e has the lexicographically minimum dependence distance vector. After loop shifting and coalescing, if the dependence distance associated with e is nonnegative, it is legal to fuse loops L_i and L_j. This is because, after loop shifting and coalescing, the dependence distances for all other dependence edges remain equal to or greater than that for the edge e and thus remain nonnegative. In other words, no fusion-preventing dependences exist. We will prove this claim in Section 3 through Lemma 3.
Claim 3. The amount of memory needed to carry a computation is determined by the lexicographically maximum flow-dependence distance vectors which are due to local array regions, according to Lemma 1.
During the simplification, we also classify all edges into two classes: L-edges and M-edges. The L-edges are used to determine the legality of loop fusion. The M-edges determine the minimum memory requirement. All M-edges are flow dependence edges, but an L-edge can be a flow, an anti- or an output dependence edge. It is possible that an edge is classified both as an L-edge and as an M-edge. The simplification process is as follows.
For each combination of a node L_i and a static write reference r in L_i, where L_i contains at least one static write reference, among all dependence edges from L_i to itself due to r, we keep only the one whose flow dependence distance vector is lexicographically maximum. This edge is an M-edge. For each node L_i which contains no static write reference, we remove all dependence edges from L_i to itself.
For each node L_i which contains at least one static write reference, among all dependence edges from L_i to L_j (j distinct from i), we keep only one dependence edge for legality such that its dependence distance vector is lexicographically minimum. This edge is an L-edge. For any static write reference r in L_i, among all dependence edges from L_i to L_j (j distinct from i) due to r, we keep only one flow dependence edge whose distance vector is lexicographically maximum. This edge is an M-edge.
For each node L_i which contains no static write reference, among all dependence edges from L_i to L_j (j distinct from i), we keep only the dependence edge whose dependence distance vector is lexicographically minimum. This edge is an L-edge.
The above process simplifies the problem formulation and makes graph traversal faster. Figure 3(c) shows the loop dependence graph after simplification of Figure 3(b). In Figure 3(c), we do not mark the classes of the dependence edges. As an example, the dependence edge from L1 to L3 marked with (0, 0) is an L-edge, and the one marked with (0, 1) is an M-edge. The latter edge is associated with the static write reference ZA(J, K).
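A schematic Python rendering of this selection step is given below: per node pair, keep the lexicographically minimum edge as the L-edge; per (node pair, static write reference), keep the lexicographically maximum flow edge as the M-edge. The edge tuples, including the 'anti' kind attached to the (0,0) edge, are hypothetical and serve only to reproduce the L1-to-L3 example from the text.

```python
# Schematic LDG simplification; the edge data is hypothetical.
# Each edge is (src, dst, kind, write_ref, distance_vector).
edges = [
    ('L1', 'L3', 'flow', 'ZA', (0, 1)),
    ('L1', 'L3', 'anti', None, (0, 0)),
    ('L1', 'L3', 'flow', 'ZA', (0, 0)),
]

l_edges, m_edges = {}, {}
for src, dst, kind, ref, dv in edges:
    if src != dst:                              # legality: keep lexicographic min
        key = (src, dst)
        if key not in l_edges or dv < l_edges[key][4]:
            l_edges[key] = (src, dst, kind, ref, dv)
    if kind == 'flow' and ref is not None:      # memory: keep lexicographic max
        mkey = (src, dst, ref)
        if mkey not in m_edges or dv > m_edges[mkey][4]:
            m_edges[mkey] = (src, dst, kind, ref, dv)

print(l_edges[('L1', 'L3')])        # the (0, 0) edge becomes the L-edge
print(m_edges[('L1', 'L3', 'ZA')])  # the (0, 1) flow edge becomes the M-edge
```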
3. OBJECTIVE FUNCTION
In this section, we first formulate a graph-based system to minimize the number of simultaneously live local array elements. We then reduce our problem to a network flow problem, which is solvable in polynomial time.
3.1 Formulating Problem 1
Given a loop dependence graph G, we formulate the objective function (2), which minimizes the number of simultaneously live local array elements over all loop nests, subject to the constraints (3) and (4), where e = <L_i, L_j> is an edge in G. We call the system defined in this way Problem 1. In (2), the quantity M_{i,k} represents the number of simultaneously live array elements due to the kth static write reference in L_i. Constraint (3) says that the coalesced dependence distance must be nonnegative for all L-edges after loop coalescing but before loop fusion. Constraint (4) says that the number of simultaneously live local array elements due to the kth static write reference in L_i, i.e. M_{i,k}, must be no smaller than the number of simultaneously live local array elements for every M-edge originated from L_i and due to the kth static write reference in L_i.
Combining constraint (3) and Assumption 2, the following lemma says that the coalesced dependence distance is also nonnegative for all M-edges.
Lemma 3. If constraint (3) holds, then the coalesced dependence distance is also nonnegative for every M-edge e = <L_i, L_j> in G.
Proof. For an M-edge e2 = <L_i, L_j>, there must exist an L-edge e1 = <L_i, L_j>, and by the definition of L-edges and M-edges, dv(e2) is lexicographically no smaller than dv(e1). If dv(e2) = dv(e1), the claim follows directly from constraint (3). Otherwise, let the hth component be the first nonzero component of dv(e2) - dv(e1); it is positive, and by Assumption 2 its contribution to the coalesced distance dominates the bounded contributions of the remaining components. Since constraint (3) guarantees that the coalesced distance of e1 is nonnegative, the coalesced distance of e2 is nonnegative as well.
From the proof of Lemma 3, we can also see that for any dependence which is eliminated during the simplification process in Section 2.6, its coalesced dependence distance is also nonnegative, given that constraint (3) holds. Hence, the coalesced dependence distances of all the original dependences (before the simplification in Section 2.6) are nonnegative after loop shifting and coalescing but before loop fusion. Loop fusion is legal according to Lemma 2.
From Section 2.6, we know that for any flow dependence edge e3 from L_i to L_j due to the static write reference r which is eliminated during the simplification process, there must exist an M-edge e4 from L_i to L_j due to r. By a similar argument, constraint (4) bounds the predicted reference window size of all flow dependences originated from L_i due to the kth static write reference in the unsimplified loop dependence graph (see Section 2.2). According to Lemma 1, constraint (4) therefore correctly computes the predicted reference window size M_{i,k}.
3.2 Transforming Problem 1
We define a new problem, Problem 2, by adding two further constraints, (5) and (6), to Problem 1 (e = <L_i, L_j> is an edge in G).
Lemma 4. Given any optimal solution for Problem 1, we can construct an optimal solution for Problem 2 with the same value of the objective function (2).
Proof. The search space of Problem 2 is a subset of that of Problem 1. Given an LDG G, the optimal objective function value (2) for Problem 2 must therefore be equal to or greater than that for Problem 1. Given any optimal solution for Problem 1, we find the shifting factors p and the values M_{i,k} for Problem 2 as follows.
1. Initially let the p and M_{i,k} values from Problem 1 be the solution for Problem 2. In the following steps, we adjust these values so that all the constraints for Problem 2 are satisfied and the value of the objective function (2) is not changed.
2. If all p values satisfy constraint (5), go to step 4. Otherwise, go to step 3.
3. This step finds p values which satisfy constraint (5). Following the topological order of nodes in G, find the first node L_i such that there exists an L-edge <L_i, L_j> for which constraint (5) is not satisfied. (Here we ignore self cycles since they must represent M-edges in G.) Suppose the sth component is the first nonzero component of the shifted distance vector for this edge. We increase p(L_j) by a vector whose only two nonzero components are the sth and the (s+1)th, chosen such that the new p values, including p(L_j), still satisfy constraints (3) and (4). The value of the objective function (2) is also not changed. If the shifted distance vector is still lexicographically negative, we repeat the above process. Such a process terminates within at most n repetitions, since otherwise constraint (3) would not hold for the optimal solution of Problem 1. Note that the node L_i is selected based on the topological order and the shifting factor p(L_j) is only increased compared to its original value. For any L-edge with destination node L_j, if constraint (5) holds before updating p(L_j), it still holds after the update. This property guarantees that our process terminates. Go to step 2.
4. This step finds M_{i,k} values which satisfy constraint (6). For each i and k, find the M_{i,k} value which satisfies constraint (6) such that constraint (6) becomes an equality for at least one edge. If the M_{i,k} achieved above satisfies constraint (4), we are done. Otherwise, we increase the nth component of the M_{i,k} value until constraint (4) holds and becomes an equality for at least one edge. Find all M_{i,k} values in this manner. The value of the objective function (2) is not changed.
With such p and M_{i,k} values, the value of the objective function (2) for Problem 2 is the same as that for Problem 1. Hence, we obtain an optimal solution for Problem 2 with the same value of the objective function (2).
Theorem 2. Any optimal solution for Problem 2 is also an optimal solution for Problem 1.
Proof. Given any optimal solution of Problem 2, take its p and M_{i,k} values as the solution for Problem 1. Such p and M_{i,k} values satisfy constraints (3)-(4), and the value of the objective function (2) for Problem 1 is the same as that for Problem 2. Such a solution must be optimal for Problem 1; otherwise, we could construct from Problem 1 another solution of Problem 2 with a lower value of the objective function (2), according to Lemma 4, which contradicts the optimality of the original solution for Problem 2.
By expanding the vectors in Problem 2, an integer programming (IP) problem results. General solutions for IP problems, however, do not take the graphical characteristics of the LDG into account. Instead of solving the IP problem, we transform it into a network flow problem, as discussed in the next subsection.
[Figure 5: The transformed graph G1 for Figure 3(c).]
3.3 Transforming Problem 2
Given a loop dependence graph G, we generate another graph G1 = (V1, E1) as follows; each node of G1 receives a weight w(.) as part of the construction.
For any node L_i in G, create a corresponding node ~L_i in G1.
For any node L_i which has an outgoing M-edge, assign ~L_i its weight w(~L_i), and for each static write reference r_k in L_i, create another node ~L_{i,k} in G1, called the sink of ~L_i due to r_k, with its weight w(~L_{i,k}). For any node L_i in G which does not have an outgoing M-edge, the weight of ~L_i is assigned accordingly.
For any M-edge <L_i, L_j> in G due to the static write reference r_k, suppose its distance vector is dv; add a corresponding edge to G1, annotated with dv, between the node for L_j and the sink ~L_{i,k}.
For any L-edge <L_i, L_j>, suppose its distance vector is dv; add an edge <~L_i, ~L_j> to G1 with the distance vector dv.
For the original graph in Figure 3(c), Figure 5 shows the transformed graph.
We assign a vector q to each node in G1 as follows: a vector q_i for each node ~L_i and a vector q_{i,k} for each sink node ~L_{i,k}.
The new system, which we call Problem 3, is defined by the objective function (7) subject to the constraints (8) and (9), where e is an edge in G1 annotated by d_k. Constraint (8) requires, for every edge e, that the difference of the q vectors at the endpoints of e, offset by d_k, be lexicographically nonnegative.
Theorem 3. Problem 3 is equivalent to Problem 2.
Proof. The objective function (2) is equivalent to (7). For each edge e = <~L_i, ~L_j> in G1, inequality (8) is equivalent to an inequality (10) in terms of the corresponding L-edge e1 in G from L_i to L_j; inequality (10) is equivalent to (5), hence for such edges inequality (8) is equivalent to (5). For each edge of G1 that involves a sink node ~L_{i,k}, inequality (8) is equivalent to an inequality (11) in terms of the corresponding M-edge e1 in G from L_i to L_j due to the kth static write reference in L_i; inequality (11) is equivalent to (6), hence for such edges inequality (8) is equivalent to (6). Similarly, it is easy to show that the constraints (3) and (4) are equivalent to constraint (9).
Note that one edge in G could be both an L-edge and an M-edge, which corresponds to two edges in G1. From Assumption 2, the analogous inequality (12) holds for the transformed graph G1, where each e_k in E1 is annotated with the dependence distance vector d_k.
As with Problem 2, Problem 3 can be solved by linearizing the vector representation so that the original problem becomes an integer programming problem, which in its general form is NP-complete. In the next subsections, however, we show that we can achieve an optimal solution for Problem 3 in polynomial time by utilizing the network flow property.
3.4 Optimality Conditions
We develop optimality conditions to solve Problem 3 by utilizing the network flow property. A network flow consists of a set of vectors such that each vector f(e_i) corresponds to an edge e_i in E1 and, for each node v_i in V1, the sum of the flow values from all the in-edges equals w(v_i) plus the sum of the flow values from all the out-edges (13), where e_k = <., v_i> represents an in-edge of v_i and an edge <v_i, .> represents an out-edge of v_i.
Lemma 5. Given G1 = (V1, E1), there exists at least one legal network flow.
Proof. Find a spanning tree T of G1. Assign the flow value zero to all the edges not in T. Hence, if we can find a legal network flow for T, the same flow assignment is also legal for G1. We assign flow values to the edges in T in reverse topological order. Since the total weight of the nodes in T equals zero, a legal network flow exists for T.
Based on equation (13), given a legal network flow, the objective function can be rewritten in terms of the flow values. For our network flow algorithm, we abstract out the common trip-count factor from each w(v), so that w(v) is represented by an integer constant only; each flow value f(e_k) then takes the form of an integer constant c_k times that factor.
Suppose f(e_k) is nonnegative for the edge e_k in E1, which is equivalent to c_k >= 0. With constraint (9), each term of the objective function is bounded from below, and therefore, with equation (14), if every f(e_k) is nonnegative, the objective function (7) is bounded from below by the flow-weighted sum of the distance vectors d_k.
Collectively, we have the optimality conditions stated as the following theorem: if they hold, the inequality becomes an equality and optimality is achieved for Problem 3.
Theorem 4. If the following three conditions hold:
1. constraints (8) and (9) are satisfied;
2. a legal network flow f(e_k) exists such that c_k >= 0; and
3. the bound holds with equality, i.e., the objective function (7) equals the flow-weighted sum of the distance vectors;
then Problem 3 achieves an optimal solution.
Proof. Obvious from the above discussion.
3.5 Solving Problem 3
Here, let us consider each vector w(v_i) and d_k as a single computation unit. Based on duality theory [24, 2], Problem 3, excluding the constraint (9), is equivalent to a dual system with objective (18) subject to the constraints (19) and (20). The constraint (9) is mandatory for the equivalence between Problem 3 and its dual problem, following the development of the optimality conditions in Section 3.4 [1]. The constraint (19) in the dual system precisely defines a flow property, where each edge e_i is associated with a flow vector f(e_i).
We define Problem 4 as the system given by (7)-(8) and (18)-(20). Similar to w(v_i), the vector f(e_k) is represented by an integer constant c_k times the common factor. Although the search space of Problem 4 apparently encloses that of Problem 3, Problem 4 has correct solutions only within the search space defined by Problem 3.
Based on the property of duality, Problem 4 achieves an optimal solution if and only if the constraints (8), (19) and (20) hold and the objective function values of (7) and (18) are equal. If we can prove that the constraint (9) holds for the optimal solution of Problem 4, such a solution must also be optimal for Problem 3, according to Theorem 4.
There exist plenty of algorithms to solve Problem 4 [1, 2]. Although those algorithms are targeted at the scalar system (vector length equal to 1), some of them can be directly adapted to our system by using vector summation, subtraction and comparison operations. A network simplex algorithm [2] can be directly utilized to solve our system; its complexity, however, is exponential in the worst case in terms of the number of nodes and edges in G1. Several graph-based algorithms [1], on the other hand, have polynomial-time complexity. Examples include the successive shortest path algorithm, a scaling algorithm with complexity O(|V1||E1|log|V1|), and so on. From [1], the currently fastest polynomial-time algorithm for solving the network flow problem is the enhanced capacity scaling algorithm. For these algorithms, we have the following lemma.
Lemma 6. For any optimal solution of the q_i in Problem 4, there exists a spanning tree T in G1 such that constraint (8) holds with equality for each edge in T.
Proof. This is true due to the foundation of the simplex method [2].
Let T be the spanning tree in Lemma 6. If we fix any one q to be zero, all the other q vectors can be determined uniquely along T. With such uniquely-determined q vectors and inequality (21), we obtain a bound (22) on the difference of the q vectors at the endpoints of any edge in terms of the distance vectors along the tree path; based on inequality (12), this yields inequality (23).
Lemma 7. Constraint (9) holds for every edge annotated with d_k, subject to the constraints (8) and (23).
Proof. If the relevant vector is zero, the claim holds. Otherwise, assume the first non-zero component is the hth one. With the constraint (23), the hth component dominates the remaining components, and the claim follows.
Hence, inequality (12) guarantees that the constraint (9) holds when the optimality of Problem 4 is achieved. The optimal solution for Problem 4 is also an optimal solution for Problem 3.
3.6 Successive Shortest Path Algorithm
We now briefly present one of the network flow algorithms, the successive shortest path algorithm [1], which can be used to solve Problem 4. The algorithm is depicted in Figure 6; the flow values f(e_k) are represented by scalar coefficients c_k.
[Figure 6: The successive shortest path algorithm. Initialize the node sets E and D. While they are nonempty: select a node v_k in E and a node v_l in D; determine the shortest path distances from v_k to all other nodes in G1 with respect to the residue costs, where each edge <v_i, v_j> is annotated with d_ij in G1; let P denote a shortest path from v_k to v_l; update q and the flow values in the residue network flow graph; augment flow along the path P; and update E, D and the residue graph.]
After the first while iteration, the algorithm always maintains feasible shifting factors and nonnegativity of the flow values by satisfying the constraints (8) and (20). It adjusts the flow values such that the constraint (19) holds for all edges in G1 when the algorithm ends. For a complete description of the algorithm, including the concepts of reduced cost and residue network flow graph, the semantics of the sets E and D, etc., please refer to [1].
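For concreteness, the following is a compact, textbook-style successive-shortest-path min-cost-flow routine in Python. It is a generic sketch of the algorithmic idea behind Figure 6 (node supplies routed through a super source and super sink, with Bellman-Ford for the shortest residual path), not the paper's Panorama implementation; the small instance at the end is made up.

```python
# Generic successive-shortest-path min-cost flow; illustrative only.
def min_cost_flow(n, edges, supply):
    """n: node count; edges: (u, v, capacity, cost); supply[v] > 0 means
    v has excess to send, supply[v] < 0 means v demands flow."""
    INF = float('inf')
    g = [[] for _ in range(n + 2)]             # adjacency: [to, cap, cost, rev]
    def add(u, v, cap, cost):
        g[u].append([v, cap, cost, len(g[v])])
        g[v].append([u, 0, -cost, len(g[u]) - 1])
    for u, v, cap, cost in edges:
        add(u, v, cap, cost)
    S, T = n, n + 1                             # super source / super sink
    excess = 0
    for v, s in enumerate(supply):
        if s > 0:
            add(S, v, s, 0); excess += s
        elif s < 0:
            add(v, T, -s, 0)
    total_cost = 0
    while excess > 0:
        # shortest residual path from S (Bellman-Ford handles negative costs)
        dist = [INF] * (n + 2); prev = [None] * (n + 2); dist[S] = 0
        for _ in range(n + 1):
            for u in range(n + 2):
                if dist[u] == INF:
                    continue
                for i, (v, cap, cost, _) in enumerate(g[u]):
                    if cap > 0 and dist[u] + cost < dist[v]:
                        dist[v] = dist[u] + cost
                        prev[v] = (u, i)
        if dist[T] == INF:
            raise ValueError("no feasible flow")
        # bottleneck along the shortest path, then augment
        delta, v = excess, T
        while v != S:
            u, i = prev[v]; delta = min(delta, g[u][i][1]); v = u
        v = T
        while v != S:
            u, i = prev[v]
            g[u][i][1] -= delta
            g[v][g[u][i][3]][1] += delta
            v = u
        excess -= delta
        total_cost += delta * dist[T]
    return total_cost

# Tiny made-up instance: node 0 must ship 2 units to node 2; optimal cost is 4.
print(min_cost_flow(3, [(0, 1, 2, 1), (1, 2, 2, 1), (0, 2, 1, 5)], [2, 0, -2]))
```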
For Example 2 (in Figure 3), after applying the successive shortest path algorithm, we obtain the shifting factors p(L_i) for the loop nests L1 through L4. Figure 7 shows the transformed code for Example 2 after memory reduction.
4. REFINEMENTS
4.1 Controlled Fusion
Although array contraction after loop fusion decreases the overall memory requirement, loop fusion at too many loop levels can potentially increase the working set size of the loop body, and hence it can potentially increase register spilling and cache misses. This is particularly true if a large number of loops are under consideration. To control the number of fused loops, after computing the shifting factors to minimize the memory requirement, we use a simple greedy heuristic, Pick and Reject (see Figure 8), to incrementally select loop nests to be actually fused. If a new addition would cause the estimated cache misses and register spills to be worse than before fusion, then the loop nest under consideration is not fused. The heuristic then continues to select fusion candidates from the remaining loop nests. The loop nests are examined in an order such that the loops whose fusion saves the most memory are considered first. We estimate register spilling by using the approach in [22] and estimate cache misses by using the approach in [7].
It may also be important to avoid performing loop fusion at too many loop levels if the corresponding loops are shifted. This is because, after loop shifting, loop fusion at too many loop levels can potentially increase the number of operations, either due to the IF-statements added to the loop body or due to the effect of loop peeling. Coalescing, if applied, may also introduce more subscript-computation overhead. Although all such costs tend to be less significant than the
costs of cache misses and register spills, we still carefully control the fusion of innermost loops. If the rate of increased operations after fusion exceeds a certain threshold, we only fuse the outer loops.
[Figure 7: The transformed code for Figure 3(a) after memory reduction.]
4.2 Enabling Loop Transformations
We use several well-known loop transformations to enable effective fusion. Long backward data-dependence distances make loop fusion ineffective for memory reduction. Such long distances are sometimes due to incompatible loops [26], which can be corrected by loop interchange. Long backward distances may also be due to circular data dependences, which can be corrected by circular loop skewing [26]. Furthermore, our technique applies loop distribution to a node L_i when the dependence distance vectors originating from L_i differ from each other. In this case, distributing the loop may allow different shifting factors for the distributed loops, potentially yielding a more favorable result.
5. EXPERIMENTAL RESULTS
We have implemented our memory reduction technique in a research compiler, Panorama [12]. We implemented a network flow algorithm, the successive shortest path algorithm [1]. The loop dependence graphs in our experiments are relatively simple; the successive shortest path algorithm takes less than 0.06 seconds for each of the benchmarks. To measure its effectiveness, we tested our memory reduction technique on 20 benchmarks on a SUN Ultra II uniprocessor workstation and on a MIPS R10K processor within an SGI
Procedure Pick and Reject
Input: (1) a collection of m loop nests; (2) the computed shifting factors p and the values M_{i,k}; (3) the estimated number of register spills np and the estimated number of cache misses nm, both in the original loop nests.
Output: a set of loop nests to be fused, FS.
Procedure:
1. Initialize FS to be empty. Let OS initially contain all the m loop nests.
2. If OS is empty, return FS. Otherwise, select a loop nest L_i from OS such that the local array regions R written in L_i can be reduced the most, i.e., the difference between the size of R and the number of simultaneously live array elements due to the static write references in L_i is the largest among the loop nests in OS. Let TR be the set of loop nests in OS which contain references to R. Estimate a, the number of register spills, and b, the number of cache misses, after fusing the loops in both FS and TR and after performing array contraction for the fused loop. If a <= np and b <= nm, then FS = FS ∪ TR and OS = OS - TR; otherwise, OS = OS - {L_i}. Go to step 2.
Figure 8: Procedure Pick and Reject
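The heuristic is easy to render schematically in Python. In the sketch below, the estimator callables (memory_saving, nests_referencing, estimate_spills, estimate_misses) are placeholders standing in for the compiler's internal cost models ([22] and [7]); the demo data at the end is made up.

```python
# Schematic rendering of the Pick-and-Reject heuristic of Figure 8.
def pick_and_reject(nests, np_orig, nm_orig, memory_saving,
                    nests_referencing, estimate_spills, estimate_misses):
    """Greedily choose which loop nests to fuse."""
    fs, os_ = set(), set(nests)
    while os_:
        li = max(os_, key=memory_saving)        # largest memory saving first
        tr = {n for n in os_ if n in nests_referencing(li)} | {li}
        trial = fs | tr
        if estimate_spills(trial) <= np_orig and estimate_misses(trial) <= nm_orig:
            fs |= tr                            # accept: fuse this group
            os_ -= tr
        else:
            os_.discard(li)                     # reject this candidate
    return fs

# Toy demo with made-up estimators.
print(pick_and_reject(
    ['L1', 'L2', 'L3'], np_orig=10, nm_orig=100,
    memory_saving=lambda n: {'L1': 5, 'L2': 3, 'L3': 0}[n],
    nests_referencing=lambda n: {'L1': {'L1', 'L2'}, 'L2': {'L2'}, 'L3': {'L3'}}[n],
    estimate_spills=lambda s: 2 * len(s),
    estimate_misses=lambda s: 10 * len(s)))
```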
Origin 2000 multiprocessor. We present the experimental
results on the R10K. The results on the Ultra II are
similar [27]. The R10K processor has a 32KB 2-way set-associative
data cache with a 32-byte cache line, and it
has a 4MB 2-way set-associative unied L2 cache with a
128-byte cache line. The cache miss penalty is 9 machine
cycles for the L1 data cache and 68 machine cycles for the
L2 cache.
5.1 Benchmarks and Memory Reduction
Table 1 lists the benchmarks used in our experiments, their descriptions and their input parameters. These benchmarks are chosen because they either readily fit our program model or they can be transformed by our enabling algorithms to fit it. With additional enabling algorithms developed in the future, we hope to collect more test programs. In this table, "m/n" represents the number of loops in the loop sequence (m) and the maximum loop nesting level (n). Note that the array sizes and the iteration counts are chosen arbitrarily for LL14, LL18 and Jacobi. To differentiate the benchmark swim from SPEC95 and SPEC2000, we denote the SPEC95 version as swim95 and the SPEC2000 version as swim00. Program swim00 is almost identical to swim95 except for its larger data size. For combustion, we change the array size (N1 and N2) from 1 to 10, so the execution time lasts for several seconds. Programs climate, laplace-jb, laplace-gs and all the Purdue-set problems are from an HPF benchmark suite at Rice University [20, 21]. Except for lucas, all the other benchmarks are written in F77. We manually apply our technique to lucas, which is written in F90. Among the 20 benchmark programs, our algorithm finds that the Purdue-set programs, lucas, LL14 and combustion do not need loop shifting. For each of the benchmarks in Table 1, all m loops are fused together. For swim95, swim00 and hydro2d, only the outer loops are fused. For all other benchmarks, all n loop levels are fused.
For each of the benchmarks, we examine three versions of the code, i.e. the original one, the one after loop fusion but before array contraction, and the one after array contraction.
Among these programs, only combustion, purdue-07 and purdue-08 fit the program model in [8]. In those cases, the algorithm in [8] would derive the same result as ours, so there is no need to list those results.
Table 1: Test programs (Benchmark | Description | Input Parameters | m/n)
LL14 | Livermore Loop No. 14
Jacobi | Jacobi kernel w/o convergence test
tomcatv | a mesh generation program from SPEC95fp | reference input | 5/1
swim95 | a weather prediction program from SPEC95fp | reference input | 2/2
swim00 | a weather prediction program from SPEC2000fp | reference input | 2/2
hydro2d | an astrophysical program from SPEC95fp | reference input | 10/2
lucas | a primality test from SPEC2000fp | reference input | 3/1
mg | a multigrid solver from the NPB2.3-serial benchmark | Class 'W' | 2/1
combustion | a thermochemical program from the UMD Chaos group
purdue-02 | Purdue set problem 02 | reference input | 2/1
purdue-03 | Purdue set problem 03 | reference input | 3/2
purdue-04 | Purdue set problem 04 | reference input | 3/2
purdue-07 | Purdue set problem 07 | reference input | 1/2
purdue-08 | Purdue set problem 08 | reference input | 1/2
purdue-12 | Purdue set problem 12 | reference input | 4/2
purdue-13 | Purdue set problem 13 | reference input | 2/1
climate | a two-layer shallow water climate model from Rice | reference input | 2/4
laplace-jb | Jacobi method of Laplace from Rice
laplace-gs | Gauss-Seidel method of Laplace from Rice
[Figure 9: Memory sizes before and after transformation. For each benchmark, the bars show, from left to right, the original and the transformed code; each bar is split into code and data, normalized to the original. Data sizes of the original programs (unit: KB) include: swim00 191000, hydro2d 11405, lucas 142000, mg 8300, combustion 89, purdue-12 4194, purdue-13 4194, climate 169, laplace-jb 6292, laplace-gs 1864.]
For all versions of the benchmarks, we use the native Fortran compilers to produce the machine code. We simply use the optimization flag "-O3" with the following adjustments. We switch off prefetching for laplace-jb, software pipelining for laplace-gs and loop unrolling for purdue-03. For swim95 and swim00, the native compiler fails to insert prefetch instructions into the innermost loop body after memory reduction. We manually insert prefetch instructions into the three key innermost loop bodies, following exactly the same prefetching patterns used by the native compiler for the original codes.
Figure 9 compares the code sizes and the data sizes of the original and the transformed codes. We compute the data size based on the global data in common blocks and the local data defined in the main program. The data size shown for each original program is normalized to 100. The actual data size varies greatly across the benchmarks, as listed in the table associated with the figure. For mg and climate, the memory requirement differs little before and after the program transformation. This is due to the small size of the contractable local arrays. For all other benchmarks, our technique reduces the memory requirement considerably. The arithmetic mean of the reduction rate, counting both the data and the code, is 51% over all benchmarks. For several small Purdue benchmarks, the reduction rate is almost 100%.
5.2 Performance
Figure 10 compares the normalized execution time, where "Mid" represents the execution time of the codes after loop fusion but before array contraction, and "Final" represents the execution time of the codes after array contraction. The geometric mean of the speedup after memory reduction is 1.40 over all benchmarks.
The best speedup, 5.67, is achieved for program purdue-03.
[Figure 10: Performance before and after transformation. Normalized execution time of the Original, Mid and Final versions for each benchmark.]
[Figure 11: Cache statistics before and after transformation. Normalized cache reference/miss counts (DL1-Hit, DL1-Miss, L2-Miss); for each benchmark, the Original, Mid and Final versions appear from left to right.]
This program contains two local arrays, A(1024, 1024) and P(1024), which carry values between three adjacent loop nests. Our technique is able to reduce both arrays to scalars and to fuse the three loops into one.
5.3 Memory Reference Statistics
To further understand the effect of memory reduction on performance, we examined the cache behavior of different versions of the tested benchmarks. We measured the reference count (dynamic load/store instructions), the miss count of the L1 data cache, and the miss count of the L2 unified cache. We use the perfex package to get the cache statistics. Figures 11 and 12 compare these statistics, where the total reference counts in the original codes are normalized to 100.
When arrays are contracted to scalars, register reuse is often increased. Figures 11 and 12 show that the number of total references decreases in most of the cases. The total number of reference counts, over all benchmarks, is reduced by 21.1%. However, in a few cases, the total reference counts increase instead. We examined the assembly codes and found a number of reasons:
[Figure 12: Cache statistics before and after transformation (cont.). Normalized cache reference/miss counts (DL1-Hit, DL1-Miss, L2-Miss); for each benchmark, the Original, Mid and Final versions appear from left to right.]
1. The fused loop body contains more scalar references in a single iteration than before fusion. This increases the register pressure and sometimes causes more register spilling.
2. The native compilers can perform scalar replacement [3] for references to noncontracted arrays. The fused loop body may prevent such scalar replacement for two reasons. First, if register pressure is high in a certain loop, the native compiler may choose not to perform scalar replacement. Second, after loop fusion, the array data flow may become more complex, which then may defeat the native compiler in its attempt to perform scalar replacement.
3. Loop peeling may decrease the effectiveness of scalar replacement, since fewer loop iterations benefit from it.
Despite the possibility of increased memory reference counts in a few cases due to the above reasons, Figures 11 and 12 show that cache misses are generally reduced by memory reduction. The total number of cache misses, over all benchmarks, is reduced by 63.8% after memory reduction. The total number of L1 data cache misses is reduced by 57.3% after memory reduction. The improved cache performance often seems to have a bigger impact on execution time than the total reference counts.
5.4 Other Experiments
In [27], we reported how our memory reduction technique affects prefetching, software pipelining, register allocation and unroll-and-jam. We conclude that our technique does not seem to create difficulties for these optimizations.
6. RELATED WORK
The work by Fraboulet et al. is the closest to our memory reduction technique [8]. Given a perfectly-nested loop, they use retiming [16] to adjust the iteration space for individual statements such that the total buffer size can be minimized. We have compared their algorithm with ours in the introduction and in Section 5.1.
Callahan et al. present unroll-and-jam and scalar replacement techniques to replace array references with scalar variables to improve register allocation [3]. However, they only consider the innermost loop in a perfect loop nest. They do not consider loop fusion, nor do they consider partial array contraction. Gao and Sarkar present collective loop fusion [10]. They perform loop fusion to replace arrays by scalars, but they do not consider partial array contraction. They do not perform loop shifting; therefore they cannot fuse loops with fusion-preventing dependences. Sarkar and Gao perform loop permutation and loop reversal to enable collective loop fusion [23]. These enabling techniques can also be used in our framework.
Lam et al. reduce memory usage for highly-specialized multi-dimensional integral problems where array subscripts are loop index variables [15]. Their program model does not allow fusion-preventing dependences. Lewis et al. propose to apply loop fusion and array contraction directly to array statements in array languages such as F90 [17]. The same result can be achieved if the array statements are transformed into loops and loop fusion and array contraction are then applied. They do not consider loop shifting in their formulation. Strout et al. consider the minimum working set which permits tiling for loops with a regular stencil of dependences [28]. Their method applies to perfectly-nested loops only. In [6], Ding indicates the potential of combining loop fusion and array contraction through an example. However, he does not apply loop shifting and does not provide formal algorithms and evaluations.
Loop fusion has been studied extensively. To name a few publications, Kennedy and McKinley prove that maximizing data locality by loop fusion is NP-hard [13]. They provide two polynomial-time heuristics. Singhai and McKinley present parameterized loop fusion to improve parallelism and cache locality [25]. They do not perform memory reduction or loop shifting. Recently, Darte analyzed the complexity of loop fusion [5] and showed that the problem of maximum fusion of parallel loops with constant dependence distances is NP-complete when combined with loop shifting. His goal is to find the minimum number of partitions such that the loops within each partition can be fused, possibly enabled by loop shifting, and the fused loop remains parallel. Mainly because of the different objective functions, his problem and ours yield completely different complexities. Manjikian and Abdelrahman present shift-and-peel [18]. They shift the loops in order to enable fusion. None of the works listed above addresses the issue of minimizing the memory requirement for a collection of loops, and their techniques are very different from ours.
7. CONCLUSION
In this paper, we propose to enhance data locality via a memory reduction technique, which is a combination of loop shifting, loop fusion and array contraction. We reduce the memory reduction problem to a network flow problem, which is solved optimally in O(|V|^3) time. Experimental results so far show that our technique can reduce the memory requirement significantly. At the same time, it speeds up program execution by a factor of 1.40 on average. Furthermore, the memory reduction does not seem to create difficulties for a number of other back-end compiler optimizations. We also believe that memory reduction by itself is vitally important to computers which are severely memory-constrained and to applications which are extremely memory-demanding.
8. ACKNOWLEDGEMENTS
This work is sponsored in part by the National Science Foundation through grants CCR-9975309, ACI/ITR-0082834 and MIP-9610379, by the Indiana 21st Century Fund, by the Purdue Research Foundation, and by a donation from Sun Microsystems, Inc.
9.
--R
Network Flows: Theory
Linear Programming and Network Flows.
Improving register allocation for subscripted variables.
Interprocedural array region analyses.
On the complexity of loop fusion.
Improving
On estimating and enhancing cache effectiveness.
Loop alignment for memory accesses optimization.
Strategies for cache and local memory management by global program transformation.
Collective loop fusion for array contraction.
Structured data flow analysis for arrays and its use in an optimizing compiler.
Experience with efficient array data flow analysis for array privatization
Maximizing loop parallelism and improving data locality via loop fusion and distribution.
The Structure of Computers and Computations
Optimization of memory usage and communication requirements for a class of loops implementing multi-dimensional integrals
Retiming synchronous circuitry.
The implementation and evaluation of fusion and contraction in array languages.
Fusion of loops for parallelism and locality.
Array data-flow analysis and its use in array privatization
Applications benchmark set for Fortran-D and High Performance Fortran
Problems to test parallel and vector languages.
Optimized unrolling of nested loops.
Optimization of array accesses by collective loop transformations.
Theory of Linear and Integer Programming.
A parameterized loop fusion algorithm for improving parallelism and cache locality.
New tiling techniques to improve cache temporal locality.
Performance enhancement by memory reduction.
Improving Locality and Parallelism in Nested Loops.
High Performance Compilers for Parallel Computing.
--TR
Theory of linear and integer programming
Strategies for cache and local memory management by global program transformation
Linear programming and network flows (2nd ed.)
Structured dataflow analysis for arrays and its use in an optimizing compiler
Improving register allocation for subscripted variables
Optimization of array accesses by collective loop transformations
Network flows
Array-data flow analysis and its use in array privatization
Improving locality and parallelism in nested loops
Interprocedural array region analyses
Fusion of Loops for Parallelism and Locality
Experience with efficient array data flow analysis for array privatization
The implementation and evaluation of fusion and contraction in array languages
Schedule-independent storage mapping for loops
New tiling techniques to improve cache temporal locality
Optimized unrolling of nested loops
High Performance Compilers for Parallel Computing
Structure of Computers and Computations
Optimization of Memory Usage Requirement for a Class of Loops Implementing Multi-dimensional Integrals
On Estimating and Enhancing Cache Effectiveness
Collective Loop Fusion for Array Contraction
Maximizing Loop Parallelism and Improving Data Locality via Loop Fusion and Distribution
On the Complexity of Loop Fusion
Loop Alignment for Memory Accesses Optimization
Improving effective bandwidth through compiler enhancement of global and dynamic cache reuse
--CTR
G. Chen , M. Kandemir , M. J. Irwin , G. Memik, Compiler-directed selective data protection against soft errors, Proceedings of the 2005 conference on Asia South Pacific design automation, January 18-21, 2005, Shanghai, China
Yonghong Song , Cheng Wang , Zhiyuan Li, A polynomial-time algorithm for memory space reduction, International Journal of Parallel Programming, v.33 n.1, p.1-33, February 2005
David Wonnacott, Achieving Scalable Locality with Time Skewing, International Journal of Parallel Programming, v.30 n.3, p.181-221, June 2002
G. Chen , M. Kandemir , M. Karakoy, A Constraint Network Based Approach to Memory Layout Optimization, Proceedings of the conference on Design, Automation and Test in Europe, p.1156-1161, March 07-11, 2005
Apan Qasem , Ken Kennedy, Profitable loop fusion and tiling using model-driven empirical search, Proceedings of the 20th annual international conference on Supercomputing, June 28-July 01, 2006, Cairns, Queensland, Australia
Alain Darte , Guillaume Huard, New Complexity Results on Array Contraction and Related Problems, Journal of VLSI Signal Processing Systems, v.40 n.1, p.35-55, May 2005
Benny Thrnberg , Qubo Hu , Martin Palkovic , Mattias O'Nils , Per Gunnar Kjeldsberg, Polyhedral space generation and memory estimation from interface and memory models of real-time video systems, Journal of Systems and Software, v.79 n.2, p.231-245, February 2006
Daniel Cociorva , Gerald Baumgartner , Chi-Chung Lam , P. Sadayappan , J. Ramanujam , Marcel Nooijen , David E. Bernholdt , Robert Harrison, Space-time trade-off optimization for a class of electronic structure calculations, ACM SIGPLAN Notices, v.37 n.5, May 2002
Geoff Pike , Paul N. Hilfinger, Better tiling and array contraction for compiling scientific programs, Proceedings of the 2002 ACM/IEEE conference on Supercomputing, p.1-12, November 16, 2002, Baltimore, Maryland
Yonghong Song , Rong Xu , Cheng Wang , Zhiyuan Li, Improving Data Locality by Array Contraction, IEEE Transactions on Computers, v.53 n.9, p.1073-1084, September 2004
Zhiyuan Li , Yonghong Song, Automatic tiling of iterative stencil loops, ACM Transactions on Programming Languages and Systems (TOPLAS), v.26 n.6, p.975-1028, November 2004
Chen Ding , Ken Kennedy, Improving effective bandwidth through compiler enhancement of global cache reuse, Journal of Parallel and Distributed Computing, v.64 n.1, p.108-134, January 2004
Mahmut Taylan Kandemir, Improving whole-program locality using intra-procedural and inter-procedural transformations, Journal of Parallel and Distributed Computing, v.65 n.5, p.564-582, May 2005 | loop fusion;data locality;loop shifting;array contraction |
377895 | Optimizing threaded MPI execution on SMP clusters. | Our previous work has shown that using threads to execute MPI programs can yield great performance gain on multiprogrammed shared-memory machines. This paper investigates the design and implementation of a thread-based MPI system on SMP clusters. Our study indicates that with a proper design for threaded MPI execution, both point-to-point and collective communication performance can be improved substantially, compared to a process-based MPI implementation in a cluster environment. Our contribution includes a hierarchy-aware and adaptive communication scheme for threaded MPI execution and a thread-safe network device abstraction that uses event-driven synchronization and provides separated collective and point-to-point communication channels. This paper describes the implementation of our design and illustrates its performance advantage on a Linux SMP cluster. | INTRODUCTION
With the commercial success of SMP architectures, SMP
clusters with commodity components have been widely deployed
for high performance computing due to the great
economic advantage of clustering [1, 2]. MPI is a message-passing
standard [14] widely used for high-performance parallel
applications and has been implemented on a large array
of computer systems. This paper studies fast execution of
MPI programs on dedicated SMP clusters.
In the MPI paradigm, all the MPI nodes execute the same
piece of program under separate address spaces. Each MPI
node has a unique rank, which allows an MPI node to identify
itself and communicate with its peers. As a result, global
variables declared in an MPI program are private to each
MPI node. It is natural to map an MPI node to a pro-
cess. However, communication between processes has to
go through operating system kernels, which could be very
costly. Our previous studies [16, 18] show that process-based
implementations can suffer large performance loss on multi-programmed
shared-memory machines (SMMs).
Mapping each MPI node to a thread opens the possibility
of fast synchronization through address space sharing. This
approach requires a compiler to transform an MPI program
into a thread-safe form. As demonstrated in our previous
TMPI work [16, 18], the above approach can deliver significant
performance gain for a large class of MPI C programs
on multiprogrammed SMMs.
Extending a threaded MPI implementation for a single SMM
to support an SMP cluster is not straightforward. In an
SMP cluster environment, processes (threads) within the
same machine can communicate through shared memory
while the communication between processes (threads) on
different machines has to go through the network, which
is normally several orders of magnitude slower than shared-memory
access. Thus, in addition to mapping MPI nodes
to threads within a single machine, it is important for an
e#cient MPI implementation to take advantage of such a
two-level communication channel hierarchy.
The common intuition is that in a cluster environment, inter-node
messaging delay dominates the performance of communication,
and thus the advantage of executing MPI nodes as
threads diminishes. As will be shown later, our experience
counters this intuition and multi-threading can yield great
performance gain for MPI communication in an SMP cluster
environment. This is because using threads not only speeds
up the synchronization between threads on a single SMP
node, it also greatly reduces the buffering and orchestration
overhead for the communication among threads on different
nodes.
In this paper, we present the design and implementation of
a thread-based MPI system on a Linux SMP cluster, and
examine the benefits of multi-threading on such a platform.
The key optimizations we propose are a hierarchy-aware and
adaptive communication scheme for threaded MPI execution
and a thread-safe network device abstraction that uses
event-driven synchronization and provides separated collective
and point-to-point communication channels. Event-driven
synchronization among MPI nodes takes advantage
of lightweight threads and eliminates the spinning overhead
caused by busy polling. Channel separation allows more
flexible and efficient design of collective communication primitives.
The experiments we conduct are on a dedicated SMP
cluster. We expect that greater performance gain can be
demonstrated on a non-dedicated cluster and that is our
future work.
The rest of the paper is organized as follows: Section 2 introduces
the background and an overview of our thread-based
MPI system (TMPI) on a cluster. Section 3 discusses the
design of our TMPI system. Section 4 reports our performance
studies. Section 5 concludes the paper.
2. BACKGROUND AND OVERVIEW
Our design is based on MPICH, which is known as a portable
MPI implementation and achieves good performance across
a wide range of architectures [8]. Our scheme is targeted at
MPI programs that can be executed using threads. Thus
we first give the type of MPI programs we address in this
paper and briefly give an overview of the MPICH system.
Then, we give a high-level overview of our thread-based MPI
system (TMPI) on SMP clusters.
2.1 Using Threads to Execute MPI Programs
Using threads to execute MPI programs can improve performance
portability of an MPI program and minimize the
impact of multiprogramming due to fast context switch and
efficient synchronization between threads. As shown in [16, 18],
our experiments on an SGI Origin 2000 indicate that
threaded MPI execution outperforms SGI's native MPI implementation
by an order of magnitude on multiprogrammed
SMMs.
As has been mentioned before, the need for compile-time
transformation of an MPI program emerges from the process
model used in the MPI paradigm. The major task of
this procedure is called variable privatization. Basically we
provide a per-thread copy of each global variable and insert
statements in the original program to fetch each thread's
private copy of that variable inside each function where
global variables are referenced. Our algorithm is based on a
general mechanism available in most thread libraries called
thread-specific data or TSD. We extend TSD to make it
feasible to run multi-threaded MPI programs. Note that
our TSD-based transformation algorithm is able to support
multithreading within a single MPI node. However,
our current TMPI runtime system does not allow threads
within a single MPI node call MPI functions simultaneously
(MPI_THREAD_SERIALIZED). For detailed information, please
refer to [18].
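As a rough, hedged illustration of this transformation (the function and key names below are invented for this sketch and are not TMPI's actual code), a global counter in an MPI C program could be privatized with POSIX thread-specific data roughly as follows, so that each MPI node executed as a thread sees its own copy:

#include <pthread.h>
#include <stdlib.h>

/* Original program had a process-wide global:  int counter = 0; */

static pthread_key_t counter_key;               /* one key for the privatized global */
static pthread_once_t counter_once = PTHREAD_ONCE_INIT;

static void counter_key_create(void) {
    pthread_key_create(&counter_key, free);     /* free the per-thread copy at thread exit */
}

/* The transformation inserts calls like this wherever the global is referenced. */
static int *counter_ptr(void) {
    pthread_once(&counter_once, counter_key_create);
    int *p = (int *) pthread_getspecific(counter_key);
    if (p == NULL) {                            /* first reference by this MPI-node thread */
        p = (int *) calloc(1, sizeof(int));
        pthread_setspecific(counter_key, p);
    }
    return p;
}

void do_work(void) {
    (*counter_ptr())++;                         /* original statement was:  counter++; */
}

TMPI's actual transformation and its extensions to TSD are described in [18]; the sketch only conveys the general flavor of variable privatization.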
Not every MPI program can be transformed to map MPI
nodes to threads. One major restriction affecting the applicability
of threaded execution is that an MPI program
cannot call low-level library functions which are not thread-safe
(e.g. signals). Since most of the scientific programs do
not involve such types of functions (particularly, the MPI specification
discourages the use of signals), our techniques are applicable
to a large class of scientific applications. There are
two other minor factors that need to be mentioned, which
should not much affect the applicability of our techniques.
1. The total amount of memory used by all the MPI
nodes running on one SMP node can fit in a single
virtual address space. This should not be a problem
considering that 64-bit OSes are becoming more and more popular.
2. There is a fixed per-process file descriptor table size
for most UNIX systems. Since UNIX's network library
uses file descriptors to represent stream connec-
tions, applications might fail if the total number of
files opened by all the MPI nodes on a single SMP
is relatively large. Applications with a regular read-
compute-write pattern can be modified by separating
the file I/O from the program to circumvent this problem.
2.2 MPICH for SMP Clusters
MPICH follows a sophisticated layered design which is well-documented
and discussed in the literature. Our design borrows
some ideas from MPICH and we briefly summarize the
architectural design of MPICH in this section.
The goal of MPICH's layered design is to make it easy and
fast to port the system to new architectures, yet allow room
for further tune-up by replacing a relatively small part of the
software. As shown in Figure 1, MPICH's communication
implementation consists of four layers. From bottom to top,
they are:
1. Device layer. This includes various operating system
facilities and software drivers for all kinds of communication
devices.
2. ADI layer. This layer encapsulates the differences of
various communication devices and provides a uniform
interface to the upper layer. The ADI layer exports a
point-to-point communication interface [7].
3. MPI point-to-point primitives. This is built directly
upon the ADI layer. It also manages high-level MPI
communication semantics such as contexts and communicators
4. MPI collective primitives. This is built upon the point-
to-point primitive layer. Messages share the same channel
for both point-to-point communication and collective
communication. MPICH uses special tagging to
distinguish messages belonging to a user point-to-point
communication from internal messages for a collective
operation.
Figure 1: MPICH Communication Architecture.
To port MPICH to a different platform, it is only necessary
to wrap the communication devices on the target system to
provide the ADI interface. This design was mainly targeted
to large parallel systems or networked clusters. It maps
MPI nodes to individual processes. It is not easy to modify
the current MPICH system to map each MPI node to
a lightweight thread because the low-level layers of MPICH
are not thread-safe. (Even though the latest MPICH release
supports the MPI-2 standard, its MT level is actually
MPI_THREAD_SINGLE.)
Figure 2: MPICH Using a Combination of TCP and Shared Memory.
As shown in Figure 1, the current support for SMP clusters
in MPICH is basically a combination of two devices: a
shared-memory device and a network device such as TCP/IP.
Figure 2 shows a sample setup of such a configuration. In
this example, there are 8 MPI nodes evenly scattered over 4
cluster nodes. There are also 4 MPI daemon processes,
one on each cluster node, that are fully connected with each
other. The daemon processes are necessary to drain the messages
from TCP connections and to deliver messages across
cluster-node boundaries. MPI nodes communicate with daemon
processes through standard inter-process communication
mechanisms such as domain sockets. MPI nodes on
the same cluster node can also communicate through shared
memory.
Figure 3: MPICH Using TCP Only.
One can also configure MPICH at compile time to make
it completely oblivious to the shared-memory architecture
inside each cluster node and use loopback devices to communicate
among MPI nodes running on the same cluster
node. In this case, the setup will look like Figure 3. In
the example, we show the same number of MPI nodes with
the same node distribution as in the previous configuration.
What is different from the previous one is that there are now
8 daemon processes, one for each MPI node. Sending a message
between MPI nodes on the same cluster node will go
through the same path as sending a message between MPI
nodes on different cluster nodes, but possibly faster.
2.3 Threaded MPI Execution on SMP Cluster
Figure 4: Communication Structure for Threaded MPI Execution.
In this section, we provide an overview of threaded MPI
execution on SMP clusters and describe the potential advantages
of TMPI. To facilitate the understanding, we take
the same sample program used in Figure 2 and illustrate the
setup of communication channels for TMPI (or any thread-based
MPI system) in Figure 4. As we can see, we map MPI
nodes on the same cluster node into threads inside the same
process and we add an additional daemon thread in each
process. Despite the apparent similarities to Figure 2, there
are a number of differences between our design and MPICH.
1. In TMPI, the communication between MPI nodes on
the same cluster node uses direct memory access instead
of the shared-memory facility provided by operating
systems.
2. In TMPI, the communication between the daemons
and the MPI nodes also uses direct memory access
instead of domain sockets.
3. Unlike a process-based design in which all the remote
message send or receive has to be delegated to the
daemon processes, in a thread-based MPI implemen-
tation, every MPI node can send to or receive from
any remote MPI node directly.
As will be shown in later sections, these differences have
an impact on the software design and provide potential performance
gains for TMPI or any thread-based MPI system.
Additionally, TMPI gives us the following immediate advantages
over process-based MPI systems. Comparing with
an MPICH system that uses a mix of TCP and shared
memory (depicted in Figure 2):
1. For MPICH, usually shared memory is a limited system
resource. There is an OS-imposed limit on the
maximum size for a single piece of shared memory 1 ;
there is also a system-wide bound of the total size of
all the shared memory. In fact, one test benchmark
coming with MPICH failed because of limited shared-memory
resource. A thread-based system doesn't suffer
from this problem because threads in the same process
can access the whole address space.
2. There is no easy way for MPICH to aggregate different
types of devices. As a result, each MPI node has
to use non-blocking polling to check if there are pending
messages on either device, which could waste CPU
cycles when the sender is not ready yet. Synchronizations
among threads are more flexible and lightweight.
Combined with our event-driven style daemons, all
MPI nodes can freely choose busy spinning or blocking
when waiting for a message.
3. Shared memory used in MPICH is a persistent system-wide
resource. There is no automatic way to perform
resource cleaning if the program doesn't clean it during
its execution. Operating systems could run out of
shared-memory descriptors when buggy user programs
exit without calling proper resource cleanup functions
and leave garbage shared memory behind in the OS. Thus
the reliability of user programs running on an MPICH-
based cluster is more sensitive to this type of errors.
For a thread-based system such as TMPI, there is no
such problem because shared address space access is
a completely user-level operation.
Compared with a thread-based MPI system, a purely TCP/IP-based
MPICH implementation as depicted in Figure 3 has
the following disadvantages:
1. There will be two more data copies to send a message
between two MPI nodes on the same cluster node in
MPICH. Synchronization between two MPI nodes also
becomes more complicated than a thread-based system
such as TMPI.
2. There will be a proliferation of daemon processes and
network connections. This situation will get even worse
for "fatter" cluster nodes (nodes with more processors)
on which we run more MPI nodes.
2.4 Related Works
MPI on network clusters has also been studied in a number
of other projects. LAM/MPI [13, 4] is an MPI system
based upon a multicomputer management environment
called Trollius [3]. It is different from MPICH in the sense
that its design is specific for network clusters and that the
lower level communication is provided by a stand-alone service
through its unique Request Progression Interface. It
does not address the issue of how to optimize MPI performance
on a cluster of SMPs. Sun's MPI implementation [17]
discusses how to optimize MPI's collective communication
primitives on large scale SMP clusters, the focus of the work
1 On the version of RedHat Linux 6.0 we installed (kernel
version 2.2.15), this number is 4MB.
is on how to optimize collective operations on a single fat
SMP node. MPI-StarT [9] made a couple of optimizations
for SMP clusters by modifying MPICH's ADI layer. They
propose a two-level broadcast scheme that takes advantage
of the hierarchical communication channels. Our collective
communication design extends their idea and is highly optimized
for threaded MPI execution. MagPIe [10] optimizes
MPI's collective communication primitives for clusters connected
through a wide-area network. In MPI-FM [11] and
MPI-AM [19, 12], they attempt to optimize the performance
of lower level communication devices for MPI. Their techniques
can be applied to our TMPI system. MPI-Lite [15],
LPVM [20], and TPVM [6] study the problem of running
message passing programs using threads on a single shared-memory
machine.
To our knowledge, there is no research effort towards running
MPI nodes using threads on SMP clusters. Our research
complements the above work by focusing on taking
advantage of executing each MPI node using a thread.
3. SYSTEM DESIGN AND IMPLEMENTATION
In this section, we detail the design and implementation of
our thread-based MPI system - TMPI.
3.1 System Architecture
Figure 5: TMPI Communication Architecture.
The system architecture for MPI communication primitives
is shown in Figure 5. The four layers from bottom to top
are:
1. Operating system facilities, such as TCP socket interface
and pthread.
2. Network and thread synchronization abstraction layer.
Potentially this layer allows for the portability of TMPI
to other systems or performance tune-up by providing
different implementations of network communication
and thread synchronization. The thread synchronization
abstraction is almost a direct mapping to pthread
APIs, except that threads are launched in a batch
style: create all the thread entities, start them through
a single function call, and by the time that function call
returns, all the threads will finish their execution. The
network device abstraction is tailored to our threaded
MPI execution model and is different from either the
traditional socket interface or MPICH's ADI interface.
We will talk about them in detail in Section 3.2 and
Section 3.3.
2 There are some other synchronization primitives not shown in Figure 5, such as compare-and-swap.
3. Low level abstraction of communication management
for threads on the same (INTRA) and different (INTER)
cluster node(s). The intra-cluster-node communication
manages the communication among threads
on a single cluster node. The inter-cluster-node communication
layer wraps the network abstraction interface
and manages the logical communication on the
multi-thread level. To be more specific, each thread
has a local rank unique among threads on the same
cluster node and a global rank unique among all the
threads on all the cluster nodes. Given a global rank,
we need to find on which cluster node the thread resides
and what is its local rank on that cluster node as
well; reverse lookup is also needed. Another functionality
of the inter-cluster-node communication module
is resource discovery and allocation. The API allows
for flexible placement of MPI nodes, e.g. how many
cluster nodes should be used and what are the ranks
of the MPI nodes placed on each cluster node. These
decisions can be made anywhere from completely automatic
to fully controlled by user-supplied parameters.
4. MPI communication primitive implementation, including
the notions of communicators, message tags and
contexts. It is implemented upon three building blocks:
intra- and inter-cluster-node communication, and thread
interface. Particularly, all collective primitives are implemented
with the awareness of the two-level communication
hierarchy.
Despite some similarities to the MPICH design as shown in
Figure 1, there are a couple of notable differences between
the two systems:
. The top layer in TMPI is aware of different mechanisms
to communicate between threads on the same
or different cluster node(s) so that the implementation
can organize the communication to take advantage of
the two-level hierarchical communication channel.
. As will be further discussed in Section 3.3 and Section
3.4, in TMPI, collective communication primitives
are not built upon point-to-point communication primitives.
Instead, they are implemented independently
in the top layer and the network device abstraction
provides both point-to-point and collective communication
among processes on different cluster nodes.
3.2 TMPI Network Device Abstraction
The network device abstraction in TMPI (called NETD) abstracts
a network application as a group of relevant processes
on di#erent machines, each with a unique rank, and provides
basic communication functionalities to the application. It is
a thin layer that contains 28 core functions grouped into
three categories:
Connection Management This includes the creation of
processes on a number of cluster nodes and the setup/teardown
of communication channels. All the relevant
processes are fully connected with each other and can
query about their own ID and the total number of the
processes.
Collective Communication It provides collective communication
among relevant processes created by NETD.
The current implementation uses an adaptive algorithm
to choose among three different spanning trees
based on the number of the processes involved. When
the size is small, a simple scatter tree is used; a hypercube
is used when the size is large; and a balanced binary
tree is chosen when the size falls in the middle.
Point-to-Point Communication NETD has its unique
point-to-point communication API. Each message contains
a message header part and an optional payload
part. The content of a message header is not interpreted
by NETD. To receive a message, a caller must
provide NETD a callback function (called a message
handle). NETD buffers the message header and invokes
the handle with the sender ID of the message
and the message header. The message handle will
be responsible for examining the header and performing
necessary actions such as receiving the payload
part if there is one. Such an interface is necessary
to efficiently support MPI communication primitives
in TMPI. An MPI node can receive a message from
an unknown source (MPI_ANY_SOURCE). Such a situation
is complicated in TMPI because normal network
devices do not provide atomic receive operation. Thus
if multiple threads wait for messages from the same
port, a single logical message could be fragmented and
received by different threads. To get around this problem,
TMPI uses a daemon thread to receive all the incoming
messages and invoke a message handle in the
context of the daemon thread. That message handle
will be responsible for decoding the actual source and
destination, bu#ering the incoming data, and notifying
the destination thread.
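The 28-function NETD interface is not listed in the paper; the self-contained fragment below is only our guess at the flavor of the message-handle mechanism (all netd_* names are hypothetical), with a stand-in for the daemon thread's dispatch loop:

#include <stdio.h>
#include <stddef.h>

typedef struct { size_t payload_len; int tag; } netd_header_t;   /* header is opaque to NETD itself */
typedef void (*netd_msg_handle_t)(int sender_rank, const netd_header_t *hdr);

static netd_msg_handle_t g_handle;                                /* the registered message handle */
static void netd_set_msg_handle(netd_msg_handle_t h) { g_handle = h; }

/* Stand-in for the receive daemon: it buffers the header and invokes the handle,
   which decides whether and where to receive the optional payload. */
static void daemon_dispatch(int sender_rank, const netd_header_t *hdr) {
    if (g_handle) g_handle(sender_rank, hdr);
}

static void mpi_layer_handle(int sender_rank, const netd_header_t *hdr) {
    printf("header from rank %d: tag=%d, %zu payload bytes\n",
           sender_rank, hdr->tag, hdr->payload_len);
    /* here the MPI layer would decode source/destination, match the receive and
       unexpected-message queues, and notify the destination thread */
}

int main(void) {
    netd_set_msg_handle(mpi_layer_handle);
    netd_header_t h = { 1024, 7 };
    daemon_dispatch(3, &h);            /* simulate one incoming message header */
    return 0;
}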
3.3 Separation of Point-to-Point and Collective
Communication Channels in NETD
As we have mentioned before, in Figure 4 every thread in
a process can access all the communication ports originating
from that process. This feature inspired the idea
of separating collective and point-to-point communication
channels, which allows TMPI to take full advantage of the
synchronous nature of collective communication and eliminate
intermediate daemon overhead. For this reason, in Figure
4, each thick line actually represents two TCP connec-
tions, one dedicated for collective operations and the other
for point-to-point communication. The daemon threads are
only responsible for serializing incoming messages for the
point-to-point communication.
The separation of point-to-point communication channels
and collective communication channels is based on careful
observations of MPI semantics. In MPI, receive operations
are much more complicated than collective operations. Besides
the wildcard receive operation we discussed before,
it can be "out-of-order" with the notion of message tags,
and "asynchronous" with the notion of non-block receives.
Thus a daemon thread is required not only for serialization,
but also for bu#ering purposes. Due to limited bu#ering in
most network devices, if there is no daemon threads actively
drains incoming messages, deadlock situations that are not
permitted in MPI standard could happen. Figure 6 shows
such an example where node 0 and node 1 would be blocked
at the MPI_Bsend statement without the presence of daemon
threads even if there is enough buffer space available.
For collective communication, MPI operations are never "out-
of-order", and always "synchronous" in the sense that a collective
operation will not be completed until all the MPI
nodes involved reach the same rendezvous point 3 . Furthermore,
the structure of the spanning tree is determined
at runtime before each operation.
MPICH does not separate collective and point-to-point communication
channels, and all high-level MPI collective communication
primitives are implemented on top of the point-to-point
communication layer. As a result, all collective operations
go through point-to-point communication daemons,
which causes unnecessary overhead. Separation of point-to-point
and collective communication channels could benefit a
process-based MPI implementation as well; however, it may
not be as effective as in TMPI because two processes on the
same cluster node cannot directly share the same network
communication ports (e.g. sockets).
3.4 Hierarchy-Aware Collective Communication
Design
The collective communication in TMPI is implemented as
a series of intra- and inter-cluster-node collective communications.
For example, MPI_Bcast will be an inter-bcast followed
by an intra-bcast, and MPI_Allreduce will be an intra-reduce
followed by an inter-reduce, then an inter-bcast, and
finished with an intra-bcast. The intra-cluster-node collective
communication takes advantage of address space sharing
and is implemented based on an efficient lock-free FIFO
queue algorithm. In this way, collective communication can
take full advantage of the two-level communication hierarchy.
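As a minimal sketch of the two-phase idea just described, the fragment below expresses it with standard MPI calls rather than TMPI internals; it assumes the broadcast root is world rank 0 and uses the MPI-3 routine MPI_Comm_split_type (which post-dates this paper) purely to obtain a per-cluster-node communicator. TMPI instead performs the intra-node phase with threads and its lock-free queue.

#include <mpi.h>

/* Two-phase broadcast from world rank 0: first among the local roots
   (one per cluster node), then inside each SMP node. */
static void two_phase_bcast0(void *buf, int count, MPI_Datatype type, MPI_Comm comm)
{
    MPI_Comm node_comm, roots_comm;
    int world_rank, node_rank;

    MPI_Comm_rank(comm, &world_rank);
    MPI_Comm_split_type(comm, MPI_COMM_TYPE_SHARED, world_rank,
                        MPI_INFO_NULL, &node_comm);
    MPI_Comm_rank(node_comm, &node_rank);

    /* Local roots (node_rank == 0) form the inter-node spanning tree:
       exactly n-1 network edges for n cluster nodes. */
    MPI_Comm_split(comm, node_rank == 0 ? 0 : MPI_UNDEFINED, world_rank, &roots_comm);
    if (node_rank == 0)
        MPI_Bcast(buf, count, type, 0, roots_comm);   /* crosses the network */
    MPI_Bcast(buf, count, type, 0, node_comm);        /* stays inside the SMP node */

    if (roots_comm != MPI_COMM_NULL)
        MPI_Comm_free(&roots_comm);
    MPI_Comm_free(&node_comm);
}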
Conceptually, having the above two-phase collective communication
is the same as building a spanning tree in two steps.
This idea was first mentioned in the MPI-StarT project [9].
Essentially there is a designated root MPI node on each cluster
node that forms a spanning tree connecting all the cluster
nodes. All the other MPI nodes connect to the local root
on the same cluster node to form the whole spanning tree.
Such a spanning tree will have exactly n-1 edges that cross
cluster-node boundaries (called "network edges") where n is
the number of cluster nodes involved in the communication.
Note that MPICH, which uses the shared-memory setting,
actually does not take the two-step approach and builds a
spanning tree directly from the given number of MPI nodes
without knowing the distribution of MPI nodes on cluster
nodes. As a result, a spanning tree for collective communication
in MPICH may have more network edges. Figure 7
compares the spanning trees for a sample MPI program of
size 9 running on 3 cluster nodes. As we can see, TMPI (the
left part) results in 2 network edges and MPICH (the right
part) has 5 network edges.
3 It is possible to employ techniques such as pipelining and
asynchronous operations to optimize collective operations.
However, such optimizations are not required in the MPI
standard and their effectiveness is not very evident in real
applications, based on our experience.
Figure 7: Collective Communication Spanning Trees in TMPI and MPICH. Spanning trees for an MPI program with 9 nodes on three cluster nodes. The three cluster nodes contain MPI nodes 0-2, 3-5 and 6-8 respectively. Thick edges are network edges.
3.5 Adaptive Buffer Management in Point-to-Point Communication
The point-to-point communication in TMPI bears a lot of
similarities to the MPICH design. Conceptually, each MPI node
has a receive queue and an unexpected-message queue. When
a sender comes before the corresponding receiver, a send request
handle will be deposited to the receiver's unexpected-
message queue; similarly, if a receiver comes before the corresponding
sender, a receive request handle will be stored in
the receive queue. When a pair of sender and receiver are on
different cluster nodes, the daemon thread on the receiver
side will act on behalf of the remote sender.
One difficult problem facing this design is the temporary
buffering of a message when the corresponding receiver is
not ready yet. For intra-cluster-node point-to-point communication,
we can always block the sender till the receiver
comes if there is no internal buffer space available. However,
when a sender sends a message to a remote receiver, it
does not know whether there will be sufficient buffer space
to hold the message. Because the message size in MPI could
be arbitrarily large, a traditional conservative solution is the
three-phase asynchronous message-passing protocol [5].
In TMPI, address space sharing and fast synchronization
among threads lead us to an efficient adaptive buffering
solution, which is a combination of an optimistic "eager-
pushing" protocol and a three-phase protocol. Basically,
the sender needs to make a guess based on the message size
whether to transfer the actual message data with the send
request metadata (the eager-pushing protocol) or send only
the metadata first and send out data later when the receiver
is ready (the three-phase protocol). The remote daemon on
the receiver side will acknowledge whether the sender has
made a correct guess or not.
Figure 8 shows the three simplified cases in TMPI's inter-cluster-node
point-to-point communication protocol and we
provide a detailed description in the following. Note that the
figures do not accommodate the cases for "synchronous" or
"ready" send operations. In Figure 8 (a), the sender sends
the request with the actual data. On the receiver side, either
the receiver has already arrived or there is still internal
buffer space available, so the daemon accepts the data, stores
the send request information and the actual data into the
proper queue, notifies the sender that the data are accepted,
and wakes up the receiver if necessary.
Node 0: MPI_Bsend(buf, BIG_SIZE, type, 1, ...); MPI_Recv(buf, BIG_SIZE, type, 1, ...);
Node 1: MPI_Bsend(buf, BIG_SIZE, type, 0, ...); MPI_Recv(buf, BIG_SIZE, type, 0, ...);
Figure 6: Possible deadlock without daemon threads.
Figure 8: Point-to-Point Communication Between Nodes on Different Cluster Nodes. (a) Successful Eager-Push; (b) Failed Eager-Push Degrades to Three-Phase Protocol; (c) Three-Phase Protocol.
The sender-side daemon receives that confirmation and frees the data on behalf
of the sender. In Figure 8 (b), the sender still sends out the
data with the request, but this time the receiver-side daemon
cannot accept the incoming data. The daemon receives
and discards the data, and stores only the request metadata in
the unexpected-message queue. Later on, when the receiver
arrives, it discovers the partially completed send request and
asks for the associated data from the sender. The sender-side
daemon then sends the data to the receiver side. When
the receiver-side daemon receives the data, it saves the data
to the receiver-supplied user buffer and tells the sender that
the data are received. Subsequently, the sender-side daemon
will deallocate the buffer upon receiving the acknowledgment.
Note that the actual data are transferred twice in this case.
In Figure 8 (c), the sender decides to send the request part
and the data part separately. The whole flow is essentially the
same as in Figure 8 (b) except that the actual data are only transferred
once. This design allows for optimal performance when
the sender makes the correct guess and functions correctly
with degraded performance when it guesses wrong.
The remaining issue is to decide how the sender should
switch between the eager-push and three-phase protocols.
The decision is complicated by the fact that the internal
buffer space in TMPI is shared by all the MPI nodes in the
same cluster node and aggregated among a series of requests.
So, against the common intuition to choose "eager-pushing"
protocol as often as possible, it could still be preferable not
to buffer a request even though there is buffer space available.
This is because the same amount of buffer space might
be used to hold data from multiple subsequent requests.
Like a traditional non-preemptive scheduling problem, we
should favor short requests and any algorithm could be sub-optimal
due to the lack of future knowledge. In the current
implementation of TMPI, the sender will send the data with
a request if and only if the message size is under a statically
defined threshold. On the receiver side, the receiver daemon
greedily buffers incoming data whenever possible. The current
threshold is set to 100KBytes based on some empirical
study.
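The sender-side decision can be summarized in a few lines of C; only the 100KB cutoff and the greedy receiver-side buffering come from the text, while the names and structure below are ours:

#include <stddef.h>
#include <stdio.h>

#define EAGER_THRESHOLD (100 * 1024)        /* the empirically chosen 100KB cutoff */

enum send_protocol { EAGER_PUSH, THREE_PHASE };

/* Sender: guess the protocol from the message size alone. */
static enum send_protocol choose_protocol(size_t msg_size) {
    return (msg_size < EAGER_THRESHOLD) ? EAGER_PUSH : THREE_PHASE;
}

/* Receiver-side daemon: greedily keep eager data whenever the shared buffer
   pool on this cluster node still has room; otherwise discard the data and
   let the exchange degrade to the three-phase path. */
static int accept_eager_data(size_t msg_size, size_t buffer_space_left) {
    return msg_size <= buffer_space_left;
}

int main(void) {
    printf("4KB -> %s\n",   choose_protocol(4 * 1024)   == EAGER_PUSH ? "eager" : "three-phase");
    printf("512KB -> %s\n", choose_protocol(512 * 1024) == EAGER_PUSH ? "eager" : "three-phase");
    printf("accept 64KB with 32KB free: %d\n", accept_eager_data(64 * 1024, 32 * 1024));
    return 0;
}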
Finally, allowing a receive daemon to send messages could
result in a deadlock condition if not designed carefully. In
TMPI, the NETD layer supports both blocked and non-blocked
send. A non-blocked send merely puts the message
into a queue and there is a send daemon responsible
for sending the message out (and deallocating the memory if
necessary). Our receive daemon always uses non-blocked
send.
3.6 Further Discussions
Benefit of address space sharing in TMPI.
One benefit of a thread-based MPI implementation is the potential
saving of data copying through address-space sharing.
For intra-cluster-node point-to-point communication,
TMPI only needs at most one intermediate data copy,
while for MPICH it takes two intermediate data copies
with shared memory or three intermediate data copies
without shared memory. For inter-cluster-node point-to-point
communication, since the daemons at both the sender
and receiver side can access the sender and receiver buffers
in TMPI, it takes zero to two intermediate data copies.
For MPICH, it always needs three intermediate data copies.
Additionally, data only need to move across process
boundaries once in TMPI, while data need to be transferred
across process boundaries three times in MPICH. Since the
two processes involved must be synchronized to transfer
data from one to the other, MPICH performance is more
sensitive to OS scheduling in a multiprogrammed environment.
TMPI scalability.
The presence of a single receive daemon thread to handle
all the incoming messages is a potential bottleneck in terms
of scalability. In TMPI, it is possible to configure multiple
daemons and all the incoming connections are partitioned
among multiple daemons. However, currently it is just a
compile-time parameter and we plan to study how to adaptively
choose the number of daemons at runtime in the future.
We can also create more instances of the non-blocked send
daemon or make it responsible for message send operations
initiated by MPI nodes. This could be beneficial when
there is a sudden surge of outgoing data and we do not want
to block the sender threads because of that. In the current
small scale settings, none of these configurations is neces-
sary. Our point here is that the TMPI design is scalable to
accommodate large clusters with fat SMP nodes.
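One simple way to express the partitioning of incoming connections among several receive daemons (our illustration; TMPI's actual assignment policy is not spelled out in the text) is a static mapping such as:

#define NUM_RECV_DAEMONS 2     /* compile-time parameter, as in the current TMPI */

/* Map the connection from a remote cluster node to one of the receive daemons. */
static int daemon_for_connection(int remote_cluster_node_id) {
    return remote_cluster_node_id % NUM_RECV_DAEMONS;
}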
4. EXPERIMENTS
In this section, we evaluate the effectiveness of the proposed
optimization techniques for threaded MPI implementation
versus a process-based implementation. We choose MPICH
as a reference process-based implementation due to its wide
acceptance. We should emphasize that the experiments are
not meant to illustrate whether there are any flaws in the
MPICH design. Instead, as discussed in the previous sec-
tions, we want to show the potential advantages of threaded
MPI execution in an SMP cluster environment, and in fact
most of the optimization techniques in TMPI are only possible
with a multi-threaded MPI implementation.
We have implemented a prototype system for Linux SMP
clusters. It includes 45 MPI functions (MPI 1.1 standard) as
listed in Appendix A. It does not yet support heterogeneous
architectures 4 , nor does it support user-defined data types
or layouts. The MPICH version we use contains all functions
in the MPI 1.1 standard and provides partial support for the MPI-2
standard. However, adding those functions is a relatively
independent task and should not affect our experimental
results.
All the experiments are conducted on a cluster of six quad-
Xeon 500MHz SMPs, each with 1GB main memory. Each
cluster node has two fast Ethernet cards connected with
Lucent Canjun switch. The operating system is RedHat
Linux 6.0 running Linux kernel version 2.2.15 with channel-
bonding enabled 5 .
4.1 Micro-Benchmark Performance for Point-to-Point Communication
In this section, we use micro-benchmarks to make a fine-grain
analysis of the point-to-point communication sub-systems in both
TMPI and MPICH.
Ping-pong test. First, we use the ping-pong benchmark
to assess the performance of point-to-point communication.
We vary the data size from 4 bytes to 1 megabyte. As
a common practice, we choose different metrics for small
and large messages. For messages with size smaller than 1
kilobyte, we report the performance in terms of round-trip
delay; and for messages larger than 1 kilobyte, we
report the transfer rate, defined as the total amount of bytes
sent/received divided by the round-trip time.
4 Note that MPICH detects whether the underlying system
is homogeneous at the start-up time and no data conversion
overhead is incurred if it is.
5 Channel-bonding allows a single TCP connection to utilize
multiple network cards, which could improve network
bandwidth but does not help to reduce network delay. In
our case, the achievable raw bandwidth will be 200Mbps
between each pair of cluster nodes.
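The ping-pong kernel is a standard benchmark; a minimal reconstruction of what it typically looks like (not the authors' exact code; the message size and iteration count are placeholders) is:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, iters = 1000, n = 1024;          /* message size in bytes */
    static char buf[1 << 20];
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();
    for (int i = 0; i < iters; i++) {
        if (rank == 0) {
            MPI_Send(buf, n, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, n, MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(buf, n, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Send(buf, n, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    double rtt = (MPI_Wtime() - t0) / iters;   /* average round-trip time */
    if (rank == 0)
        printf("%d bytes: %.1f us round trip, %.2f MB/s\n",
               n, rtt * 1e6, 2.0 * n / rtt / 1e6);
    MPI_Finalize();
    return 0;
}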
Figure 9: Inter-Cluster-Node Ping-Pong Performance. (a) Ping-Pong Short Message (round-trip time, µs); (b) Ping-Pong Long Message (transfer rate).
Figure 9 shows the ping-pong performance for two MPI
nodes on different cluster nodes. As we can see, when the
message size is small, TMPI performs slightly better than
MPICH, except when messages are very small, in which
case TMPI's saving from inter-process data transfer overhead
(such as system calls and context switches) becomes
evident. When the message size becomes larger, TMPI has
about a constant 2MB/s bandwidth advantage over MPICH
due to the saving from data copying.
Figure 10: Intra-Cluster-Node Ping-Pong Performance. (a) Ping-Pong Short Message; (b) Ping-Pong Long Message. Two versions of MPICH are used: MPICH with shared memory (MPICH1) and MPICH without shared memory (MPICH2).
To assess the impact of shared memory for both thread- and
process-based MPI systems, Figure 10 shows the ping-pong
performance for two MPI nodes on the same cluster node.
We compare the performance for three MPI systems: TMPI,
MPICH with shared memory (MPICH1) and MPICH without
using shared memory (MPICH2). It is evident that
ignoring the underlying shared-memory architecture yields
much worse performance for MPICH2 compared with the
other two systems. TMPI's advantage over MPICH1 mainly
comes from the saving of intermediate memory copying (for
long messages) and fast synchronization among threads (for
short messages). As a result, TMPI performance nearly
doubled that of MPICH with shared memory. Note that
both TMPI and MPICH1's performance drops after reaching
a peak transfer rate, which is likely caused by underlying
memory contention.
One-way message pipelining. The second benchmark is
a simple one-way send-recv test, in which a sender keeps
sending messages to the same receiver. Through this bench-
mark, we examine the impact of synchronization among MPI
nodes. We compare the average time it takes for the sender
to complete each send operation for short messages and still
use the transfer rate metrics for large messages.
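A corresponding reconstruction of the one-way test (again ours, not the authors' code) simply removes the reply, so many sends can be outstanding at once:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, iters = 1000, n = 256;           /* short-message case */
    static char buf[1 << 20];
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();
    if (rank == 0)
        for (int i = 0; i < iters; i++)
            MPI_Send(buf, n, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
    else if (rank == 1)
        for (int i = 0; i < iters; i++)
            MPI_Recv(buf, n, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    if (rank == 0)
        printf("average time per send: %.1f us\n",
               (MPI_Wtime() - t0) / iters * 1e6);
    MPI_Finalize();
    return 0;
}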
Figure 11: One-Way Send-Recv Performance. (a) One-Way Send-Recv Short Message; (b) One-Way Send-Recv Long Message.
Figure 11 shows the one-way send-recv performance for two
MPI nodes on different cluster nodes. Theoretically speaking,
both thread- and process-based MPI implementations
should yield similar performance for short messages because
the average time for each send operation should be equal
to the single trip time divided by the number of outstanding
messages on the fly, which is basically limited by the
network bandwidth. However, as mentioned before, in a
process-based MPI implementation, inter-cluster-node message
passing needs to travel through a local daemon and a
remote daemon. Each time, data need to be copied to these
daemons' local buffers. Unless the sender, the receiver and
the daemons are precisely synchronized, either buffer could
be full or empty and cause the pipeline to stall even
though there might still be bandwidth available. As can be
seen from Figure 11, MPICH performance is very unstable
when the message size is small due to the difficulty of synchronization
among the processes. When message size becomes
large, there are fewer outstanding messages and the performance
is less sensitive to synchronization. Additionally, we
can also see that TMPI has a 30 µs advantage over MPICH
for short messages and a 2MB/s bandwidth advantage for large
messages due to the saving from extra data copying needed
for a process-based MPI system.
As a comparison, we repeat the one-way send-recv test for
two MPI nodes on the same cluster node. The result is
shown in Figure 12. We again compare among three MPI
systems with the same notation as in Figure 10. As we
expected, MPICH1 performs almost identically to TMPI for
small messages and only slightly poorer than TMPI for large
messages, because there are no intermediate daemon processes
along the data path in either system and the cost of an
extra data copy in MPICH1 is amortized among multiple
outstanding requests. On the other hand, MPICH2 shows
some irregular behavior for a certain range of message sizes.
Note that in Figure 12 (a), we have to use a different scale to
accommodate the performance curve of MPICH2. When the
message size is below 800 bytes, a single send operation takes
3-4ms to complete, but when the message size goes beyond
that, it falls to a normal range between 50 µs and 150 µs.
Figure 12: Intra-Cluster-Node One-Way Send-Recv Performance. (a) One-Way Send-Recv Short Message; (b) One-Way Send-Recv Long Message. Two versions of MPICH are used: MPICH with shared memory (MPICH1) and MPICH without shared memory (MPICH2).
We are not able to identify the exact source of the problem,
but we think it might have something to do with resource
contentions at the OS level. Regardless of this glitch, the
performance of MPICH2 is much worse than the other two
MPI systems, again due to its ignorance of the underlying
shared memory.
4.2 Micro-Benchmark Performance for Collective
Communication
To compare the performance of the collective communication
primitives, we run three micro-benchmarks, each of
which calls MPI_Reduce, MPI_Bcast, or MPI_Allreduce a
number of times respectively and we compare the average
time for each operation. The data volume involved in these
benchmarks is very small, so the cost mainly comes from
synchronization. We run these benchmarks with three different
settings: 4 MPI nodes on the same cluster node (4-1),
4 MPI nodes scattered on 4 different cluster nodes (1-4),
and 16 MPI nodes scattered over 4 cluster nodes (4-4). For
MPI_Bcast and MPI_Reduce, we further use three different
variations with regard to the root nodes in different iterations
in each test: always stay the same (same); rotate
among all the MPI nodes (rotate); or repeat using the same
root for a couple of times and then do a root shift (combo).
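Our reconstruction of the "rotate" variant of these micro-benchmarks is sketched below; the "same" and "combo" variants only change the expression used for the root.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size, iters = 1000, val = 0;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();
    for (int i = 0; i < iters; i++)
        MPI_Bcast(&val, 1, MPI_INT, i % size, MPI_COMM_WORLD);   /* rotating root */
    if (rank == 0)
        printf("MPI_Bcast (rotate): %.1f us per operation\n",
               (MPI_Wtime() - t0) / iters * 1e6);
    MPI_Finalize();
    return 0;
}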
A number of conclusions can be drawn from the experiment
results shown in Figure 13:
1. In most cases, MPICH2 performs the worst among the
three MPI systems (except for the 1-4 case, in which
MPICH1 and MPICH2 performance are almost the
same), which signifies the importance of taking advantage
of shared memory for an MPI system in an SMP
cluster environment.
2. TMPI is up to 71 times faster for MPI_Bcast and 77
times faster for MPI_Reduce than MPICH1 which exploits
shared memory within each SMP. Among other
factors including address space sharing and hierarchy-
aware algorithms, the performance gain mainly comes
from the separation of collective and point-to-point
communication channels. Theoretically speaking, an
MPI_Bcast or MPI_Reduce operation should always be
faster than an MPI_Allreduce operation. However, our
Figure 13: Collective Communication Performance. The numbers shown in the table are the average time (µs) for each operation. We run each benchmark on three MPI systems: TMPI, MPICH with shared memory (MPICH1) and MPICH without shared memory (MPICH2). For MPI_Reduce and MPI_Bcast, we test cases when the root is always the same (same), when the root rotates among all the MPI nodes (rotate), or when we fix the root for a number of times then do a rotate (combo). The notation a-b used in the node distribution means we use b cluster nodes, each of which has a MPI nodes.
experiments show that there are cases for MPICH in which
an MPI_Bcast or MPI_Reduce operation performs worse
than an MPI_Allreduce operation. Such an anomaly
is caused by MPICH's design of collective communication
on top of point-to-point communication. Messages
for collective communication still have to go through
daemons, get stored in message queues, and be matched
by traversing the queues. When there are many outstanding
requests, the cost of queue operations becomes
expensive due to contentions, and the daemons could
become a bottleneck. The MPI_Allreduce test does not
suffer from this problem because all the MPI nodes
are synchronized and there is at most one outstanding
request. On the other hand, by separating the communication
channels of point-to-point and collective com-
munication, TMPI does not show such an anomaly.
3. For TMPI, the "same root" tests perform much better
than "rotating root" tests for the 1-4 and 4-4 cases.
This means that TMPI can take better advantage of
message pipelining due to address space sharing and
its elimination of intermediate daemons.
4. Figure 13 also evidences the advantage of the hierarchy-aware
communication design. For example, in the MPI_Bcast
test, the 4-4 performance is roughly equal to the summation
of the 4-1 case and the 1-4 case, since in
TMPI, a broadcast in the 4-4 case is a 1-4 broadcast
followed by a 4-1 broadcast on each individual cluster
node. On the other hand, without using the two-level
spanning tree, MPICH1's 4-4 performance is about
10%-60% worse than the summation of the 4-1 case
and the 1-4 case. Similar conclusions also hold for
MPI_Reduce and MPI_Allreduce.
4.3 Macro-Benchmark Performance
In this section, we use two application kernel benchmarks to
further assess the effectiveness of TMPI optimizations. The
kernel benchmarks we use are Matrix-Multiplication (MM)
and Gaussian-Elimination (GE), which perform computation-intensive
linear algebra computation. MM consists mostly of
MPI_Bsend and MPI_Recv calls, and GE mostly of MPI_Bcast calls. We run MM
with 1 to 16 MPI nodes and GE with 1 to 24 MPI nodes.
The detailed node distribution is shown in Figure 14.
Figure 14: Distribution of MPI Nodes. a-b means we use b cluster nodes, each of which has a MPI nodes. We ensure that the number of cluster nodes and the number of MPI nodes on each cluster node will not decrease with the increase of the total number of MPI nodes.
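The paper does not list the GE kernel; the generic row-cyclic skeleton below (entirely our own sketch, with no pivoting for brevity) shows why MPI_Bcast dominates its communication: every elimination step broadcasts one pivot row from its owner to all other MPI nodes.

#include <mpi.h>

/* Augmented N x (N+1) system; row i is owned by rank (i % size) and stored
   locally as rows[i / size]. */
static void gaussian_eliminate(double **rows, int my_nrows, int N, int rank, int size)
{
    double pivot_row[N + 1];                       /* C99 variable-length array */
    for (int k = 0; k < N; k++) {
        int owner = k % size;
        if (rank == owner)
            for (int j = 0; j <= N; j++)
                pivot_row[j] = rows[k / size][j];
        /* One broadcast per elimination step -- the dominant communication. */
        MPI_Bcast(pivot_row, N + 1, MPI_DOUBLE, owner, MPI_COMM_WORLD);
        for (int r = 0; r < my_nrows; r++) {
            int gi = r * size + rank;              /* global index of my r-th row */
            if (gi > k) {
                double m = rows[r][k] / pivot_row[k];
                for (int j = k; j <= N; j++)
                    rows[r][j] -= m * pivot_row[j];
            }
        }
    }
}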
We compare TMPI with MPICH, which uses shared memory
on each SMP. The performance results are depicted in
Figure 15. As we can see, when the number of MPI nodes is
small (≤4), TMPI and MPICH perform similarly. However,
when the number of MPI nodes becomes large and when
there are more cluster nodes involved, TMPI shows better
scalability than MPICH. For MM, TMPI has a constant
150MFLOP advantage over MPICH, which mainly comes
from the saving of intermediate copying. For GE, neither
system can keep linear speed-up and MPICH performance
even degrades after reaching the peak at 12 MPI nodes.
TMPI outperforms MPICH by more than 100% when there are
4 to 6 cluster nodes involved. This indeed verifies the advantages
of TMPI in an SMP cluster environment.
The reason that TMPI gains much more advantage in the
GE case compared to the MM case is that GE calls
the MPI broadcasting function while MM only uses point-
to-point communication. As we have demonstrated in the
previous section, TMPI outperforms MPICH more substantially
in collective communication.
5. CONCLUDING REMARKS
In this paper, we have presented the design and implementation
of a thread-based MPI system for SMP clusters. Our
contributions include a novel network device abstraction interface
tailored for threaded MPI execution on SMP clusters
and hierarchy-aware communication. We proposed a number
of optimization techniques including the separation of
point-to-point and collective communication channels, adaptive
buffering, and event-driven synchronization by taking
advantage of multi-threading. Through micro- and macro-benchmark
testing, our experiments show that TMPI can
outperform MPICH substantially.
Figure 15: Macro-Benchmark Performance. (a) Matrix Multiplication; (b) Gaussian Elimination.
It should be noted that TMPI optimization is targeted at
a class of C programs while MPICH is designed for general
MPI programs. The experiments confirm that even
in a cluster environment for which inter-node network latency
is relatively high, exploiting thread-based MPI execution
on each SMP can deliver substantial performance gains
for global communication through fast and light-weight syn-
chronization. Our experiments focus on a dedicated cluster
and our future work is to study performance in a multiprogrammed
environment for which thread-based synchronization
may achieve more performance gain.
6.
ACKNOWLEDGEMENTS
This work was supported in part by NSF ACIR-0082666,
ACIR-0086061 and CCR-9702640. We would like to thank
Lingkun Chu for his help on cluster administration. We
would also like to thank Kai Shen and the anonymous referees
for their valuable comments.
7.
--R
A case for networks of workstations: NOW.
BEOWULF: a parallel workstation for scientific computation.
LAM: An Open Cluster Environment for MPI.
Parallel Computer Architecture A Hardware/Software Approach.
TPVM: distributed concurrent computing with lightweight processes.
An Abstract Device Definition to Support the Implementation of a High-Level Point-to-Point Message Passing Interface
Delivering Network Performance to Numerical Applications.
MPI's collective communication operations for clustered wide area systems.
MPI-FM: Higher Performance MPI on Workstation Clusters
"http://www.lsc.nd.edu/lam/"
Forum, 1999. http://www.
MPI-SIM: using parallel simulation to evaluate MPI programs.
Adaptive two-level thread Management for fast MPI execution on shared memory machines
Optimization of MPI Collectives on Clusters of Large-Scale SMP's . In Proceedings of ACM/IEEE SuperComputing '99
Program Transformation and Runtime Support for Threaded MPI Execution on Shared Memory Machines.
"http://now.cs.berkeley.edu/Fastcomm/MPI/"
LPVM: a step towards multithread PVM.
--TR
A high-performance, portable implementation of the MPI message passing interface standard
MPI-FM
MPI-SIM
MagPIe
Optimization of MPI collectives on clusters of large-scale SMP''s
Adaptive two-level thread management for fast MPI execution on shared memory machines
Program transformation and runtime support for threaded MPI execution on shared-memory machines
MPI-StarT
Multi-protocol active messages on a cluster of SMP''s
Parallel Computer Architecture
--CTR
Jian Ke , Martin Burtscher , Evan Speight, Runtime Compression of MPI Messanes to Improve the Performance and Scalability of Parallel Applications, Proceedings of the 2004 ACM/IEEE conference on Supercomputing, p.59, November 06-12, 2004
Rohit Fernandes , Keshav Pingali , Paul Stodghill, Mobile MPI programs in computational grids, Proceedings of the eleventh ACM SIGPLAN symposium on Principles and practice of parallel programming, March 29-31, 2006, New York, New York, USA
Lingkun Chu , Hong Tang , Tao Yang , Kai Shen, Optimizing data aggregation for cluster-based internet services, ACM SIGPLAN Notices, v.38 n.10, October
Weirong Zhu , Yanwei Niu , Guang R. Gao, Performance portability on EARTH: a case study across several parallel architectures, Cluster Computing, v.10 n.2, p.115-126, June 2007 | SMP clusters;multi-threading;communication optimization;MPI |
377896 | Demonstrating the scalability of a molecular dynamics application on a Petaflop computer. | The IBM Blue Gene project has endeavored into the development of a cellular architecture computer with millions of concurrent threads of execution. One of the major challenges of this project is demonstrating that applications can successfully exploit this massive amount of parallelism. Starting from the sequential version of a well known molecular dynamics code, we developed a new application that exploits the multiple levels of parallelism in the Blue Gene cellular architecture. We perform both analytical and simulation studies of the behavior of this application when executed on a very large number of threads. As a result, we demonstrate that this class of applications can execute efficiently on a large cellular machine. | INTRODUCTION
Now that several Teraflop-scale machines have been deployed in
various industrial, governmental, and academic sites, the high performance
computing community is starting to look for the next
big step: Petaflop-scale machines. At least two very different approaches
have been advocated for the development of the first generation
of such machines. On one hand, projects like HTMT [10]
propose the use of thousands of very high speed processors (hun-
dreds of gigahertz). On the other hand, projects like IBM's Blue
Gene [2] advance the idea of using a very large number (millions)
of modest speed processors (hundreds of megahertz). These two
approaches can be seen as the extremes of a wide spectrum of
choices. We are particularly interested in analyzing the feasibility
of the latter, Blue Gene style, approach.
Massively parallel machines can be built today, in a relatively
straightforward way, by adopting a cellular architecture: a basic
building-block containing processors, memory, and interconnect
support (preferably implemented on a single silicon chip) is replicated
many times following a regular pattern. Combined logic-
memory microelectronics processes will soon deliver chips with
hundreds of millions of transistors. Several research groups have
advanced processor-in-memory designs that rely on that technol-
ogy, and can be used as building blocks for cellular machines.
Examples include the Illinois FlexRAM [5, 12] and Berkeley
projects. Nevertheless, because of sheer size, building
a cellular machine to deliver a Petaflop/s - or 10^15 floating-point
operations per second - is quite a challenge. This enterprise can
only be justified if it can be demonstrated that applications can execute
efficiently on such a machine.
In this paper, we report on the results of analytical- and simulation-based
studies on the behavior of a computational molecular dynamics
application. We analyze the behavior of this application on
a cellular architecture with millions of concurrent threads of ex-
ecution, which is a design for IBM's Blue Gene project. This is
one example of a class of applications that can indeed exploit the
multiple levels of massive parallelism offered by a Petaflop-scale
cellular machine, as discussed in [11]. This parallel application
was derived from the serial version of a molecular dynamics code
developed at the University of Pennsylvania [6]. The application
was rewritten with new partitioning techniques to take advantage
of multiple levels of parallelism.
We developed an analytical model for the performance of our application
and compared it with direct simulation measurements. The
quantitative results for our molecular dynamics code demonstrate
that this class of applications can successfully exploit a Petaflop-
scale cellular machine. In terms of absolute performance, simulations
indicate that we can achieve 0.14 Petaflop/s (0.14 x 10^15 floating-point operations per second) and 0.87 Petaop/s (0.87 x 10^15 operations per second) of sustained performance. In terms of speed of molecular dynamics computation, we integrate the equations of motion for a typical problem with 32,000 atoms at the rate of one time-step every 375 microseconds. The same problem, using the sequential
version of the code, can be solved on a Power 3 workstation
with peak performance of 800 Mflop/s in 140 seconds per
time step. Thus, on a machine with 1,250,000 times the total peak
floating-point performance of a uniprocessor, we achieve a speedup
of 368,000.
The rest of this paper is organized as follows: Section 2 gives a brief
overview of the cellular machine considered, at the level necessary
to understand how the molecular dynamics application can be parallelized
for efficient execution. Section 3 introduces the parallel
molecular dynamics algorithm we used. It also presents an analytical
performance model for the execution of that algorithm on our
cellular architecture. Section 4 describes the simulation and experimental
infrastructure. We use this infrastructure in deriving experimental
performance results which are presented, together with the
results from the analytical model, in Section 5. Section 6 discusses
some of the changes to the Blue Gene architecture that were motivated
by this work. Finally, Section 7 presents our conclusions.
2. A PETAFLOP CELLULAR MACHINE
We are interested in investigating how a machine like IBM's Blue
Gene would perform on a class of molecular dynamics applica-
tions. The fundamental premise in the architecture of Blue Gene
is that performance is obtained by exploiting massive amounts of
parallelism, rather than the very fast execution of any particular
thread of control. This premise has significant technological and
architectural impacts. First, individual processors are kept simple,
to facilitate design and large scale replication in a single silicon
chip. Instead of spending transistors and watts to enhance single-thread
performance, the real estate and power budgets are used to
add more processors. The architecture can also be characterized as
memory centric. Enough threads of execution are provided in order
to fully consume all the memory bandwidth while tolerating the latency
of individual load/store operations. Because a single silicon
chip consists of many replicated units, it is naturally fault toler-
ant. The replication approach also improves the yield of fabrica-
tion, even when large chips are used, thus reducing cost. Moreover
it allows the machine to gracefully degrade but continue to operate
when individual units fail.
The building block of our cellular architecture is a node. A node
is implemented in a single silicon chip and contains memory, processing
elements, and interconnection elements. A node can be
viewed as a single-chip shared-memory multiprocessor. In our de-
sign, a node contains 16 MB of shared memory and 256 instruction
units. Each instruction unit is associated with one thread of exe-
cution, giving 256 simultaneous threads of execution in one node,
and executes instructions strictly in order. Each group of 8 threads
shares one data cache and one floating-point unit. The data cache
configuration for Blue Gene is 16 KB, 8-way set associative. We
call this group of threads, floating-point unit, and cache, a proces-
sor. Such simplifications are necessary to keep instruction units
simple, resulting in large numbers of them on a die. The floating-point
units (32 in a node) are pipelined and can complete a multiply
and an add on every cycle. With a 500 MHz clock cycle, this translates
into 1 Gflop/s of peak performance per floating-point unit, or
32 Gflop/s of peak performance per node. Each node also has six
channels of communication, for direct interconnection with up to
six other nodes. With 16-bit wide channels operating at 500 MHz,
a communication bandwidth of 1 GB/s per channel on each direction
is achieved. Nodes communicate by streaming data through
these channels.
Larger systems are built by interconnecting multiple nodes in a regular
pattern. For the system design, we use the six communication
channels in each node to interconnect them in a three-dimensional
mesh configuration. A node is connected to two neighbors along
each axis, X , Y , and Z. Nodes on the faces or along the edges
of the mesh have fewer connections. We use a mesh topology because
of its regularity and because it can be built without any additional
hardware. We directly connect the communication channels
of a node to the communication channels of its neighbors. With a
32 x 32 x 32 three-dimensional mesh of nodes, we build a system of 32,768 nodes. Since each node attains a peak computation rate of 32 Gflop/s, the entire system delivers a peak computation rate of approximately 1 Petaflop/s (see Figure 1).
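As a quick sanity check, the short C program below recomputes these peak rates from the stated clock frequency, FPU count, and node count; the 32 x 32 x 32 arrangement of the 32,768 nodes is an assumption consistent with the mesh description, and the per-FPU rate follows from one multiply-add (2 flops) per 500 MHz cycle.

/* Arithmetic check of the peak-performance figures quoted above. */
#include <stdio.h>

int main(void)
{
    double fpu_peak  = 2.0 * 500e6;          /* 1 Gflop/s per FPU      */
    double node_peak = 32.0 * fpu_peak;      /* 32 FPUs -> 32 Gflop/s  */
    double sys_peak  = 32768.0 * node_peak;  /* 32x32x32 nodes         */
    printf("system peak = %.2f Petaflop/s\n", sys_peak / 1e15);
    return 0;
}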
An application running on this Petaflop machine must exploit both
inter- and intra-node parallelism. First, the application is decomposed
into multiple tasks and each task assigned to a particular
node. As discussed previously, the tasks can communicate only
through messages. Second, each task is decomposed into multiple
threads, each thread operating on a subset of the problem assigned
to the task. The threads in a task interact through shared-memory.
3. THE MOLECULAR DYNAMICS ALGORITHM
The goal of a molecular dynamics algorithm is to determine how
the state of a molecular system evolves with time. Given a molecular
system with a set A of n atoms, the state of each atom i at time
t can be described by its mass m i , its charge q i , its position ~x i (t)
and its velocity $\vec{v}_i(t)$. The evolution of this system is governed by the equation of motion
$$m_i \frac{d^2 \vec{x}_i(t)}{dt^2} = \vec{F}_i(\{\vec{x}_j(t)\}), \qquad (1)$$
subject to initial conditions $\vec{x}_i(0) = \vec{x}_i^{\,0}$ and $\vec{v}_i(0) = \vec{v}_i^{\,0}$ for each atom $i$, where $\vec{F}_i$ is the force acting on atom $i$ and $\vec{x}_i^{\,0}$, $\vec{v}_i^{\,0}$ are the initial position and velocity, respectively, of atom $i$. The notation $\{\vec{x}_j(t)\}$ represents the set of positions of all atoms at time $t$.
Equation 1 can be integrated numerically for a particular choice of time step $\Delta t$. The positions of the atoms $\{\vec{x}_j(t)\}$ at time $t$ are used to compute the forces $\vec{F}_i$ at time $t$. Those forces are then used to compute the accelerations at time $t$. Velocities and accelerations at time $t$ are finally used to compute the new positions and velocities at time $t + \Delta t$, respectively. In the particular molecular dynamics
approach we chose, the system to be simulated is shaped in the
form of a box, which is replicated indefinitely in all three dimen-
sions, giving rise to a periodic system. This is illustrated in two
dimensions in Figure 2.
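The following C sketch illustrates this kind of explicit update. It is not the integrator of the original code (the paper later names a standard Verlet solver and a RESPA-style multi-timestep scheme); the struct layout, the function names, and the velocity-Verlet form used here are assumptions made only for illustration.

/* Minimal sketch of one explicit integration step for Equation (1). */
typedef struct { double x[3], v[3], f[3], m, q; } Atom;

/* Placeholder for the bonded, Lennard-Jones and Ewald evaluation of
 * Section 3.1; a real implementation would fill a[i].f here. */
static void compute_forces(Atom *a, int n) { (void)a; (void)n; }

void integrate_step(Atom *a, int n, double dt)
{
    for (int i = 0; i < n; i++)                /* half kick + drift */
        for (int d = 0; d < 3; d++) {
            a[i].v[d] += 0.5 * dt * a[i].f[d] / a[i].m;
            a[i].x[d] += dt * a[i].v[d];
        }
    compute_forces(a, n);                      /* forces at t + dt  */
    for (int i = 0; i < n; i++)                /* second half kick  */
        for (int d = 0; d < 3; d++)
            a[i].v[d] += 0.5 * dt * a[i].f[d] / a[i].m;
}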
[Figure 2: The molecular system we simulate has a periodic structure (the simulation box and its atoms replicated in all directions). In principle, we need to consider the interactions among an infinite number of atoms.]
[Figure 1: Blue Gene's cellular architecture hierarchy (processor, chip, board, rack, system). The entire system consists of 32,768 nodes (1 million processors) connected in a three-dimensional mesh.]
3.1 A molecular dynamics algorithm
The force $\vec{F}_i(t)$ applied on an atom $i$ at time $t$ is the vector sum of the pairwise interactions of that atom $i$ with all the other atoms in the system. Those pairwise interactions can take several forms, as described in [1]. We divide them into two main groups: intra-molecular forces and inter-molecular forces.
Intra-molecular forces occur between adjacent or close-by atoms
in the same molecule. In the model we use, there are three different
types of intra-molecular forces: bonds, bends, and torsions. Intermolecular
forces occur between any pair of atoms, and take two
forms: Lennard-Jones (or van der Waals) forces, and electrostatic
(or Coulombic) forces. The Lennard-Jones forces decay rapidly
with distance. Therefore, for any given atom i, it is enough to consider
the Lennard-Jones interactions with the atoms inside a sphere
centered at atom i. The radius of this sphere is called the cutoff
radius. We note that the sphere for atom i may include atoms in a
neighboring (replicated) box.
When computing electrostatic forces, one has to take into account the periodic nature of the molecular system. One of the most commonly used approaches is the Ewald summation [4]. In this method, the computation of electrostatic forces is divided into two fast converging sums: a real space part and a reciprocal space, also called k-space, part:
$$\vec{F}_i^{\,ES}(t) = \vec{F}_i^{\,r}(t) + \vec{F}_i^{\,k}(t). \qquad (2)$$
For the real space part ($\vec{F}_i^{\,r}(t)$), interactions are computed between an atom $i$ and all the atoms inside a sphere centered at atom $i$, as for Lennard-Jones forces (although the cutoff radius could be different). The pairs of atoms whose distance is within these cutoffs are commonly stored in a linked list, also known as the Verlet list. The generation of this list is usually performed once every several time-steps because atoms move slowly. The frequency of generation is an optimization problem. Since its computational cost is amortized over several time-steps, we do not further discuss generation of Verlet lists in this paper.
For the computation in reciprocal space ($\vec{F}_i^{\,k}(t)$), first a set $K$ of reciprocal space vectors is computed. Then, for each $\vec{k} \in K$, we compute $\eta_{\vec{k}}$, the Fourier transform of the point charge distribution (structure factor) for that particular value of $\vec{k}$, as
$$\eta_{\vec{k}} = \sum_{i=1}^{n} q_i\, e^{\,i \vec{k} \cdot \vec{x}_i}, \qquad (3)$$
where $n$ is the number of atoms, and $q_i$ and $\vec{x}_i$ are the charge and position of atom $i$, respectively. The $\eta_{\vec{k}}$ terms are also called k-factors in this paper. The contribution of the reciprocal space part to the electrostatic force acting on each atom can then be computed with a summation over all values of $\vec{k}$:
$$\vec{F}_i^{\,k}(t) = \sum_{\vec{k} \in K} \vec{f}_k(\eta_{\vec{k}}, q_i, \vec{x}_i). \qquad (4)$$
The $\vec{f}_k$ terms are also called k-forces in this paper. The exact formula for the function $\vec{f}_k(\cdot)$ is not important for structuring the parallelization. (The precise expression is specified in [1].) We note that for each atom we need to perform a summation over all $\vec{k} \in K$, and that the value of any $\eta_{\vec{k}}$ depends on the positions of all atoms in the system.
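Since the k-factors are central to the parallelization discussed next, a direct C sketch of Equation (3) may help. The decomposition into cosine and sine sums is the standard way to accumulate the complex exponential; the array layout and function name are assumptions, not the code of the original application.

/* Sketch of the k-factor (structure factor) computation of Equation (3):
 * eta_k = sum_i q_i * exp(i k.x_i), accumulated as (cos, sin) pairs. */
#include <math.h>

void compute_k_factors(int nk, const double kvec[][3],
                       int natoms, const double x[][3], const double q[],
                       double eta_re[], double eta_im[])
{
    for (int k = 0; k < nk; k++) {
        eta_re[k] = 0.0;
        eta_im[k] = 0.0;
        for (int i = 0; i < natoms; i++) {
            double phase = kvec[k][0] * x[i][0]
                         + kvec[k][1] * x[i][1]
                         + kvec[k][2] * x[i][2];
            eta_re[k] += q[i] * cos(phase);
            eta_im[k] += q[i] * sin(phase);
        }
    }
}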
3.1.1 Inter-node parallelization
An efficient parallel molecular dynamics algorithm requires the
various forces to be distributed evenly among compute nodes. Our
molecular dynamics application uses an extension of the decomposition
of inter-molecular forces described in [8], and a new parallel
formulation for the k-space decomposition. Although our cellular
machine is a three-dimensional mesh of nodes, we view its $p^3$ nodes as a logical two-dimensional mesh of $c$ columns and $r$ rows. In particular, we use $c = 256$ and $r = 128$. The two- to three-dimensional mesh embedding we use is described later. For now, we consider the two-dimensional logical mesh.
We partition the set of atoms $A$ in two ways: a target partition of $A$ into $r$ sets $\{T_0, \ldots, T_{r-1}\}$, each of size $\lceil |A|/r \rceil$, and a source partition of $A$ into $c$ sets $\{S_0, \ldots, S_{c-1}\}$, each of size $\lceil |A|/c \rceil$. We replicate the target sets across rows of the two-dimensional mesh: every node in row $I$ contains a copy of the atoms in target set $T_I$. Similarly, we replicate the source sets across columns: every node in column $J$ contains a copy of the atoms in source set $S_J$. With this configuration, node $n_{I,J}$ (row $I$ and column $J$ of the logical mesh) is assigned source set $S_J$ and target set $T_I$, as shown in Figure 3.
[Figure 3: Force decomposition adopted for the molecular dynamics application. Each target set $T_I$ is replicated along all nodes in row $I$. Each source set $S_J$ and reciprocal space vector set $K_J$ is replicated along all nodes in column $J$.]
Using the decomposition described above, node $n_{I,J}$ can compute locally and with no communication the Lennard-Jones and real space electrostatic forces exerted by atoms in source set $S_J$ over atoms in target set $T_I$. As previously mentioned, only the interactions between atoms that are closer than a certain cutoff radius are actually computed. In other words, let $\vec{F}^{LJ}_{l,m}$ be the Lennard-Jones force between atoms $l$ and $m$, and let $\vec{F}^{r}_{l,m}$ be the real part of the electrostatic force between atoms $l$ and $m$. Then, node $n_{I,J}$ computes $\vec{F}^{LJ}_{l,m}$ and $\vec{F}^{r}_{l,m}$ for all $l \in T_I$ and $m \in S_J$ such that the distance between atoms $l$ and $m$ is less than the cutoff radius for Lennard-Jones and electrostatic forces, respectively. Each node $n_{I,J}$ also computes the partial vector sum of the Lennard-Jones and real part electrostatic forces that act upon the atoms in its target set:
$$\vec{F}^{LJ}_{l} = \sum_{\forall m \in S_J} \vec{F}^{LJ}_{l,m}, \qquad \vec{F}^{r}_{l} = \sum_{\forall m \in S_J} \vec{F}^{r}_{l,m}, \qquad \forall l \in T_I.$$
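A minimal C sketch of this communication-free local work is shown below. The scalar kernel pair_force() stands in for the actual Lennard-Jones or real-space Ewald expressions, and periodic (minimum-image) corrections are omitted for brevity; names and array layouts are assumptions.

/* Local pairwise interactions on node n_{I,J} between its target set
 * T_I and source set S_J, restricted to pairs within the cutoff. */
double pair_force(double r2);   /* assumed scalar kernel (force / r)  */

void local_pair_forces(int nt, const double xt[][3],   /* T_I positions */
                       int ns, const double xs[][3],   /* S_J positions */
                       double cutoff, double f[][3])   /* T_I force acc */
{
    double cut2 = cutoff * cutoff;
    for (int l = 0; l < nt; l++)
        for (int m = 0; m < ns; m++) {
            double dx = xt[l][0] - xs[m][0];
            double dy = xt[l][1] - xs[m][1];
            double dz = xt[l][2] - xs[m][2];
            double r2 = dx * dx + dy * dy + dz * dz;
            if (r2 > 0.0 && r2 < cut2) {
                double s = pair_force(r2);
                f[l][0] += s * dx;
                f[l][1] += s * dy;
                f[l][2] += s * dz;
            }
        }
}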
To compute the reciprocal space electrostatic forces, the set $K$ of reciprocal space vectors is partitioned into $c$ sets $\{K_0, \ldots, K_{c-1}\}$, each of size $\lceil |K|/c \rceil$. We replicate each set $K_J$ along all nodes in column $J$ of the logical mesh. This is also shown in Figure 3. The actual computation is performed in three phases. In the k-factors phase, every node $n_{I,J}$ computes the contributions of the atoms in $T_I$ to all $\eta_{\vec{k}}$ such that $\vec{k} \in K_J$. That is, node $n_{I,J}$ computes
$$\eta_{\vec{k}}^{(I)} = \sum_{\forall l \in T_I} q_l\, e^{\,i \vec{k} \cdot \vec{x}_l}, \qquad \forall \vec{k} \in K_J.$$
Computing the sum of these contributions along the columns of the two-dimensional mesh constitutes the second phase. The sum is computed as an all-reduce operation: every node obtains the value of the sum of contributions. After this reduction, every node $n_{I,J}$ will have the value of $\eta_{\vec{k}}$ for all $\vec{k} \in K_J$.
The number of items (k-factors) for which a reduction is computed along column $J$ is equal to the size of the reciprocal space vector set $K_J$. In the third and final phase, also called the k-forces phase, the $\eta_{\vec{k}}$ values are used in each node to compute the contribution to the $\vec{F}_i^{\,k}$ described in Equation (4), for each of the atoms in set $T_I$. That is, node $n_{I,J}$ computes
$$\vec{F}_l^{\,k} = \sum_{\forall \vec{k} \in K_J} \vec{f}_k(\eta_{\vec{k}}, q_l, \vec{x}_l), \qquad \forall l \in T_I.$$
Once all inter-molecular forces are computed, we perform an all-reduction of forces along the rows of the two-dimensional logical mesh. That is, all nodes $n_{I,J}$ in row $I$ obtain the value
$$\vec{F}_l = \sum_{J} \left( \vec{F}^{LJ}_{l} + \vec{F}^{r}_{l} + \vec{F}^{\,k}_{l} \right), \qquad \forall l \in T_I.$$
As a result, every node in row $I$ obtains the resulting forces over all its atoms in $T_I$. We note that the number of items (force vectors) for which a reduction is computed along row $I$ is equal to the size (number of atoms) of target set $T_I$.
The computation of intra-molecular forces (bonds, bends, torsions)
is replicated on every column of the logical mesh. Note that every
column has an entire set of atoms, formed by the union of its T sets.
We partition the atoms among the TI sets so that adjacent atoms in a
molecule are assigned to the same or adjacent target sets. The only
communication required to compute the intra-molecular forces is
to update the positions of the atoms in the same molecule that are
split between adjacent rows of the two-dimensional logical mesh.
Once we have computed all the forces over the atoms in TI we can
compute their new positions and velocities. We finally update the
positions of the atoms in SJ from the atoms in TI by performing
a broadcast operation along the columns of the two-dimensional
mesh. In this broadcast phase each node nI;J sends the positions of
the atoms in TI \SJ to all the nodes in column J . Correspondingly,
it receives the positions of the atoms in SJ TI \ SJ from other
nodes in that column.
Interactions between atoms that are close together need to be evaluated
more often than interactions between atoms that are far
apart. For this reason, we use a multi-timestep algorithm similar
to RESPA [13, 14] to reduce the frequency of inter-molecular
forces computation. We illustrate this concept in Figure 4, where
a computational time step is divided into a sequence of short, intermediate
and long steps. More precisely, in our configuration,
a computation time step consists of ten components: four short
steps, followed by an intermediate step, followed by four short
steps, followed by a long step. In a short step, only intra-molecular
bonds and bends forces are computed. In an intermediate step, all
intra-molecular forces (bonds, bends, and torsions) are computed,
together with Lennard-Jones and real space electrostatic forces
within a short cutoff radius. The computation of inter-molecular
forces requires a reduction along rows. Finally, in a long step, all
intra- and inter-molecular forces, including k-space electrostatic,
are computed. The long step requires a reduction along columns
(for the k-factors) and along rows (for the inter-molecular forces).
Within steps, computation threads in the same node wait in a barrier
(indicated by "B" in Figure 4) until the positions of all atoms
are received from other nodes and updated. The threads then compute
the forces required in the step in parallel and synchronize the
update of a shared vector of forces using critical regions. Threads
wait in a second barrier until all the forces in the step have been
computed before updating the positions and velocities of the atoms.
Figure
4 also presents summary performance numbers, which will
be discussed in Section 5.
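The loop structure of one computational time step, as described above, can be sketched directly in C. The three step functions are placeholders for the force groups listed in the text; only the ordering (four short steps, an intermediate step, four short steps, a long step) is taken from the paper.

/* Sketch of the multi-timestep (RESPA-like) structure of one
 * computational time step. */
static void short_step(void)        { /* bonds + bends, move atoms        */ }
static void intermediate_step(void) { /* + torsions, LJ/ES (short cutoff) */ }
static void long_step(void)         { /* + LJ/ES (long cutoff), k-space   */ }

void computational_time_step(void)
{
    for (int i = 0; i < 4; i++) short_step();
    intermediate_step();
    for (int i = 0; i < 4; i++) short_step();
    long_step();
}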
We now focus on the embedding of the two-dimensional logical
mesh on the three-dimensional mesh. We consider a two-dimensional
mesh of 128 rows and 256 columns. Each column
of the logical mesh (128 nodes) is mapped onto one physical plane
of the three-dimensional cellular machine, as shown in Figure 5.
Note that 8 logical columns are embedded in one physical plane.
The broadcast of positions and the reduction of k-factors in reciprocal
space in each column of the logical mesh do not interfere
with broadcasts and reductions in other logical columns and
use only the X and Y dimensions of the three-dimensional
mesh. The reduction of forces along each row of the
two-dimensional mesh is performed between adjacent planes of the
three-dimensional physical mesh and uses disjoint wires in the Z
dimension. The nodes of each logical row assigned to the same
plane also need to perform a reduction but only one wire is needed
for each row along the X dimension.
3.1.2 Intra-node parallelization
Intra-node parallelism for the molecular dynamics code is exploited
through multithreading. For simplicity, each set of forces (bonds,
bends, torsions, Lennard-Jones, and electrostatic) to be computed
in a node is statically partitioned among the threads in that node.
That is, if there are $N_{force}$ forces of a certain type to be computed, and $N_{thread}$ compute threads in a node, then each thread computes $\lceil N_{force}/N_{thread} \rceil$ forces. Movement of atoms is partitioned in the same way. If there are $N_{atom}$ atoms in the target set of a node, then each thread is responsible for moving $\lceil N_{atom}/N_{thread} \rceil$ atoms.
As each force that acts on an atom in the target set of a node is
computed, it is added to a force accumulator for that atom. Each
Lennard-Jones or real electrostatic force computed acts on only one
target atom. Each bond, bend, and torsion (intra-molecular forces)
computed however, acts on 2, 3, or 4 atoms of the target set, respec-
tively. Before an update on a force accumulator can be performed
by a thread, that thread needs to acquire a lock for the accumula-
tor. Therefore, for each Lennard-Jones or real electrostatic force
computed, the thread has to acquire one lock. For each bond, bend,
or torsion computed, the thread needs to acquire 2, 3, or 4 locks,
respectively.
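A small C sketch of this static partition and lock-protected accumulation is given below, using the pthread-compatible interface mentioned in Section 4. The variable names and data layout are assumptions; only the ceiling-based block partition and the one-lock-per-accumulator discipline come from the text.

/* Static work partition and lock-protected force accumulation. */
#include <pthread.h>

extern double          force_acc[][3];   /* one accumulator per target atom */
extern pthread_mutex_t force_lock[];     /* one lock per accumulator        */

static void thread_range(int nitems, int nthreads, int tid, int *lo, int *hi)
{
    int chunk = (nitems + nthreads - 1) / nthreads;      /* ceil(n/t) */
    *lo = tid * chunk;
    *hi = (*lo + chunk < nitems) ? *lo + chunk : nitems;
}

static void accumulate_force(int atom, const double f[3])
{
    pthread_mutex_lock(&force_lock[atom]);
    force_acc[atom][0] += f[0];
    force_acc[atom][1] += f[1];
    force_acc[atom][2] += f[2];
    pthread_mutex_unlock(&force_lock[atom]);
}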
Parallelization of the k-space computations is different, and performed in four steps inside a node. We start by first computing the contribution of each atom in the target set to all k-factors assigned to the node. This computation is statically scheduled: each thread performs $\lceil N_{atom}/N_{thread} \rceil$ of these computations. To evaluate the k-factors we have to sum the contribution of each atom. This is done with a binary reduction tree. There is a third step which completes the computation of the k-factors within a node, in preparation for the reduction across columns. Finally (after the column reduction is performed), the k-forces on each atom are computed. These computations are statically scheduled, and each thread computes $\lceil N_{atom}/N_{thread} \rceil$ forces of the form $\vec{f}_k(\eta_{\vec{k}}, q_l, \vec{x}_l)$.
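The binary reduction tree used in the second of these steps can be sketched as follows. The partial-sum layout and the name of the node-level barrier are assumptions; the tree structure itself (pairwise combination with a barrier per level) is what the text describes.

/* Binary reduction tree combining per-thread partial k-factor sums. */
void node_barrier(void);   /* assumed node-level thread barrier */

void reduce_k_factors(double *partial_re, double *partial_im,
                      int nk, int tid, int nthreads)
{
    for (int stride = 1; stride < nthreads; stride *= 2) {
        node_barrier();
        if (tid % (2 * stride) == 0 && tid + stride < nthreads)
            for (int k = 0; k < nk; k++) {
                partial_re[tid * nk + k] += partial_re[(tid + stride) * nk + k];
                partial_im[tid * nk + k] += partial_im[(tid + stride) * nk + k];
            }
    }
    /* after the loop, thread 0 holds the node-wide partial sums,
     * ready for the all-reduce across the column */
}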
3.2 A performance model for the molecular
dynamics computation
Based on the previous discussion of the molecular dynamics algo-
rithm, in this section we derive an analytical performance model
for the behavior of that algorithm on a cellular architecture like
IBM's Blue Gene. In Section 5, we compare results from this performance
model with direct measurements from simulations. We
will also use the model to evaluate the impact of certain architectural
features.
Let $T_s$ be the execution time of a short step, let $T_i$ be the execution time of an intermediate step, and let $T_l$ be the execution time of a long step. Then, the total execution time $T$ of one computational time step (as shown in Figure 4) can be computed as
$$T = s\,T_s + T_i + T_l,$$
where $s$ is the number of short steps in one computational time step. We can decompose the time for each step into two parts: a computation time $T^p_{[s,i,l]}$ and an exposed communication time $T^m_{[s,i,l]}$. The exposed communication time is the part of the communication time not overlapped with computation.
3.2.1 Computation time modeling
Let N bond and N bend be the total number of bonds and bends, re-
spectively, that need to be computed by a node during a short step.
Let N thread be the number of computation threads available in the
node. Let T bond and T bend be the time it takes to compute one bond
and one bend, respectively, and let Tmove be the time needed to
update an atom's position and velocity. Let Natom be the number
of target atoms assigned to the node. Finally, let T barrier (n) be the
time it takes to perform a barrier operation on n threads. The time
for a short step can be expressed as $T^p_s$, as shown in Equation 12 of Figure 6.
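A sketch of how such a per-step cost is evaluated is given below. The work terms (bonds, bends, and atom moves divided among the threads) follow the text, and charging two barriers per step follows the flowchart description of one barrier before and one after the force computation; the exact form of Equation (12) in Figure 6 is not reproduced here, so treat this as an approximation.

/* Approximate evaluation of the short-step computation cost. */
static long ceil_div(long n, long d) { return (n + d - 1) / d; }

long short_step_cost(long n_bond, long n_bend, long n_atom, long n_thread,
                     long t_bond, long t_bend, long t_move,
                     long t_barrier /* T_barrier(n_thread) */)
{
    return ceil_div(n_bond, n_thread) * t_bond
         + ceil_div(n_bend, n_thread) * t_bend
         + ceil_div(n_atom, n_thread) * t_move
         + 2 * t_barrier;
}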
Let N torsion be the number of torsions that need to be computed by
a node during an intermediate step. Let N LJshort and N ESshort be the
number of Lennard-Jones and (real part) electrostatic forces within
a short cutoff, respectively, that need to be computed in a node
during an intermediate step. Let T LJ and T ES be the time it takes to
compute one Lennard-Jones and one (real part) electrostatic force,
respectively. The time for an intermediate step can be expressed as $T^p_i$, as shown in Equation 13 of Figure 6.
Let $N_{LJlong}$ and $N_{ESlong}$ be the number of Lennard-Jones and (real part) electrostatic forces within a long cutoff, respectively, that need to be computed in a node during a long step. Let $N_{\text{k-factor}}$ be the number of $\eta_{\vec{k}}$ terms in the node, that is, the size of the $K$ set in the node. Let $T_{eta}(N_{\text{k-factor}})$ be the time it takes to compute one atom's contribution to the $\eta_{\vec{k}}$ terms in the node. Let $T_{redux}(N_{\text{k-factor}})$ be the time it takes to add two sets of contributions to the $\eta_{\vec{k}}$ terms. Let $T_{\text{k-factor}}(N_{\text{k-factor}})$ be the time it takes to finalize computing the $\eta_{\vec{k}}$ terms in a node. Finally, let $T_{\text{k-force}}$ be the time it takes to compute the k-force on one atom. The time for a long step can be expressed as $T^p_l$, as shown in Equation 14 of Figure 6.
[Figure 4: Flowchart for the molecular dynamics computation for a complete long time step and computational performance of the molecular dynamics application (250 compute threads). The chart shows the sequence of broadcasts of positions, short steps (bonds, bends, move atoms), the intermediate step (adding torsions, Lennard-Jones and real electrostatics with short cutoff, and a force reduction), and the long step (adding Lennard-Jones and real electrostatics with long cutoff, k-factors, the k-factor reduction, k-space forces, and a force reduction), annotated with instructions per thread and bytes transmitted.]
[Figure 5: Mapping columns of the 2D logical mesh (128 rows x 256 columns) onto the three-dimensional physical mesh (left), and the communication patterns for the vertical broadcast of positions and the horizontal reduction of forces (right).]
3.2.2 Communication time modeling
The dominant communication operations in our molecular dynamics
code are the force reductions along the rows and the k-factor
reductions and position broadcasts along the columns of the
two-dimensional logical mesh. Reductions and broadcasts are implemented
in our cellular machine by organizing the nodes within a
logical row or a logical column according to a spanning binary tree,
as shown in Figure 7. In each of those figures, the root of the tree
is marked by a black circle. Referring to Figures 5 and 7 (a, b) ,
we note that each 128-node column of the logical mesh consists of
four adjacent strands of 32-nodes in the physical mesh. Referring
to
Figures
5 and 7 (c, d), we note that each 256-node row of the
logical mesh consists of eight strands of 32-nodes in the physical
mesh, and those strands are 4 nodes apart. The communications of
atom positions to evaluate intra-molecular forces make only a small
contribution to the total communication and are always performed
between neighboring nodes along columns of the logical mesh.
[Figure 6: Analytical expressions for the computation times $T^p_s$, $T^p_i$, and $T^p_l$ of short, intermediate, and long steps (Equations 12-14), dividing the bond, bend, torsion, Lennard-Jones, electrostatic, atom-move, and k-space work among the $N_{thread}$ threads and adding the barrier cost $T_{barrier}(N_{thread})$.]
[Figure 7: Fan-in and fan-out trees for reduction and broadcast operations along columns (a: column fan-in tree, b: column fan-out tree) and for reduction operations along rows (c: row fan-in tree, d: row fan-out tree).]
We want to compute the time for four communication operations, which operate on vectors of elements. Let $T_{positions}$ be the time to broadcast positions along columns, $T_{\text{k-factors redux}}$ the time to reduce k-factors along columns, $T_{forces}$ the time to reduce forces along rows, and $T_{put}$ the time to put a new position in a neighboring row. The time for each of these operations can be decomposed into a latency time and a transfer time. The latency time is the time to complete the operation for the first element of a vector. The transfer time is the rate at which each element is processed times the number of elements in the vector. In equation form:
$$T_{positions} = T^{latency}_{positions} + N_{source}\, T_{triplet}, \qquad (15)$$
$$T_{\text{k-factors redux}} = T^{latency}_{\text{k-factors redux}} + N_{\text{k-factor}}\, T_{complex}, \qquad (16)$$
$$T_{forces} = T^{latency}_{forces} + N_{target}\, T_{triplet}, \qquad (17)$$
$$T_{put} = T^{latency}_{put} + N_{put}\, T_{triplet}, \qquad (18)$$
where $N_{source}$ and $N_{\text{k-factor}}$ are the number of source atoms and k-factors assigned to the column, respectively, and $N_{target}$ is the number of target atoms assigned to the row. $N_{put}$ is the number of puts to neighbors that a node has to perform. $T_{triplet}$ and $T_{complex}$ are the times to transfer one triplet (force or position) or one complex number (k-factor), respectively. Each position and force is 24 bytes long (three double-precision floating-point numbers) and each k-factor is 16 bytes long (two double-precision floating-point numbers).
Let $N^i_{hop_c}$ be the number of hops (nodes) between a node $i$ in a column and the root of the fan-in and fan-out trees for intra-column operations. Let $T_{hop}$ be the time to go through a hop (node) in the cellular machine interconnect. Then
$$T^{latency}_{positions} = 2(\max_i N^i_{hop_c})\, T_{hop}, \qquad (19)$$
where the factor of two accounts for the round trip. Let $N^i_{add_c}$ be the number of additions that need to be performed to reduce an item originating at a node $i$ in the column until it gets to the root of the fan-in tree. Let $T_{add}$ be the time to add one item. Then
$$T^{latency}_{\text{k-factors redux}} = 2(\max_i N^i_{hop_c})\, T_{hop} + (\max_i N^i_{add_c})\, T_{add}. \qquad (20)$$
The analysis for the force reduction along the rows is similar. Let $N^i_{hop_r}$ be the number of hops between a node $i$ in a row and the root of the fan-in and fan-out trees for intra-row operations. Let $N^i_{add_r}$ be the number of additions that need to be performed to reduce an item originating at a node $i$ in the row until it gets to the root of the fan-in tree. Then
$$T^{latency}_{forces} = 2(\max_i N^i_{hop_r})\, T_{hop} + (\max_i N^i_{add_r})\, T_{add}. \qquad (21)$$
We can derive worst-case estimates for the exposed communication times $T^m_{[s,i,l]}$ by assuming that there is no overlap between computation and communication, so that each step is charged the full cost of the broadcasts, reductions, and puts that it performs. The upper bound on the exposed communication time per computational time step is then
$$T^m = 8\,T^m_s + T^m_i + T^m_l, \qquad (25)$$
where the factor of 8 comes from the 8 short steps within a computational time step.
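The communication-cost formulas, as reconstructed above, are simple enough to encode directly; the C sketch below does so. The struct and function names are assumptions, and the numeric parameter values would come from Table 4.

/* Communication-cost model: tree latency plus per-element transfer. */
typedef struct {
    long t_hop, t_add;              /* per-hop and per-add cycles          */
    long t_triplet, t_complex;      /* per-element transfer cycles         */
    long max_hop_col, max_add_col;  /* worst case in a column fan-in tree  */
    long max_hop_row, max_add_row;  /* worst case in a row fan-in tree     */
} CommParams;

long t_positions(const CommParams *p, long n_source)       /* Eqs. (15),(19) */
{
    return 2 * p->max_hop_col * p->t_hop + n_source * p->t_triplet;
}

long t_kfactors_redux(const CommParams *p, long n_kfactor)  /* Eqs. (16),(20) */
{
    return 2 * p->max_hop_col * p->t_hop + p->max_add_col * p->t_add
         + n_kfactor * p->t_complex;
}

long t_forces(const CommParams *p, long n_target)           /* Eqs. (17),(21) */
{
    return 2 * p->max_hop_row * p->t_hop + p->max_add_row * p->t_add
         + n_target * p->t_triplet;
}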
4. SIMULATION ENVIRONMENT
To complement the analytical modeling described above, we executed
the molecular dynamics application on an instruction-level
simulator for Blue Gene. Our machine has a proprietary instruction
set, which is a typical RISC (load/store architecture with three-
register instructions) model. The application was coded in C and
compiled with the gcc (version 2.95.2) compiler, properly modified
to generate code for our instruction set. Thread creation and management
inside a node is performed at the application level using
calls to a pthread-compatible library. Inter-node communication
is accomplished through a proprietary communication library that
implements put (one-sided communication), broadcast, and reduce
operations.
Each node of the machine runs a resident system kernel, which
executes with supervisor privileges. The purpose of the kernel is
twofold: first, to protect machine resources (threads, memory, communication
channels) from accidental corruption by a misbehaving
application program, so that resources can be used reliably for error
detection, debugging, and performance monitoring; and second,
to isolate application libraries from details of the underlying hardware
and communications protocols, so as to minimize the impact
of evolutionary changes in those areas. The actual application runs
with user privileges and invokes kernel services for input/output
and inter-node communication.
The instruction-level simulator is architecturally accurate, executing
both kernel and application code. It models all the features of
a node and the communication between nodes. Each instance (pro-
cess) of the simulator models one node, and multiple instances are
used to simulate a system with multiple nodes. Internally, the multiple
simulator instances communicate through MPI. As a result of
executing an application, the instruction-level simulator produces
detailed traces of all instructions executed. It also produces histograms
of the instruction mix. One trace and one histogram are
produced for every node simulated.
The instruction-level simulator does not have timing information
and, therefore, it does not directly produce performance estimates.
Instead, we use the traces produced by the simulator to feed two
other performance tools. One of these tools is a cache simulator and
event visualizer that provides measurements of cache behavior. The
other tool is a trace analyzer that detects dependences and conflicts
in the instruction trace, producing an estimate of actual machine
cycles necessary to execute the code. The trace analyzer does not
execute the instructions, but it models the resource usage of the
instructions. In the trace analyzer, each instruction has a predefined
execution latency, and instructions compete for shared resources.
The threads in a node execute instructions in program order. Thus,
if resources for an instruction are not available when the thread
tries to issue that instruction, the issue is delayed, and the thread
stalls until all resources become available. For memory operations
the trace analyzer uses a probabilistic cache model, with a 90%
hit rate. This value was validated by running the trace through the
cache simulator.
The performance parameters for the simulated architecture are
shown in Table 1. These parameters are early estimates and may
change when the low-level logic design is completed. The table is
interpreted as follows: the delay for an instruction is decomposed
in two parts, execution cycles and latency cycles. The execution
unit is kept busy for the number of execution cycles and can not issue
another instruction. The result is available after the number of
execution+latency cycles, but the resources can be utilized by other
instructions during the latency period. For example, a floating point
add has execution of 1 cycle and latency 5 cycles. That means that
the FPU is busy for 1 cycle executing this instruction (other instructions
may be already in the pipeline), and the next cycle can
execute another instruction. However, the result of the addition is
available only after 6 (1+5) cycles. As another example, an integer
division takes 33 execution cycles and 0 latency cycles. That means
that it occupies exclusively the integer unit for 33 cycles and the result
is available immediately after the execution period completes.
Threads can issue an instruction every cycle, unless they stall on a dependence on the result of a previous instruction.
For memory operations, the latency of the operation depends on how deep in the memory hierarchy we have to go to fetch the result. The
memory is distributed within the chip, such that accessing the local
memory is faster than accessing the memory across the chip. The
memory is shared: threads within a chip can address each other's
local memories. The latency of the operation depends on the physical
location of the memory. Memory accesses are determined to
be local or global based on the effective address. However, because the Blue Gene processors use scrambling, a global access may be in the local memory. Again, we model the scrambling probabilistically, with a $1/P$ probability that the global access is in local memory, where $P$ is the number of processors in a chip ($P = 32$ in our configuration).
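The probabilistic memory model can be sketched in C as follows. The latency values follow the Table 1 estimates, but the way the two random draws are combined here is an assumption about the trace analyzer, not its actual implementation.

/* Probabilistic memory-latency model: 90% cache hit rate, and a 1/P
 * chance that a global (scrambled) access is actually local. */
#include <stdlib.h>

int memory_latency_cycles(int is_global_access, int nproc)
{
    if (rand() % 100 < 90)                          /* modeled cache hit */
        return 2;
    if (!is_global_access || rand() % nproc == 0)   /* 1/P chance local  */
        return 20;                                  /* shared local      */
    return 27;                                      /* shared remote     */
}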
Simulating a Petaflop machine is no easy task. Straightforward
simulation of the entire machine would require 32,768 simulation
processes, one for each simulated node. However, the molecular
dynamics application has a structure that allows us to simulate
completely only one node in the entire machine and extrapolate the
performance results for the machine. As presented previously, the
molecular dynamics code runs on a two-dimensional logical mesh
of nodes. Each node simulates atomic interactions between two
sets of atoms: a target set and a source set. Because of the particular
decomposition method used, communication between nodes
occurs intra-row and intra-column only. Thus, the simulation of
one row provides information on the behavior of all other rows in
the logical two-dimensional mesh. Similarly, the simulation of one
column provides information on the behavior of all other columns
in the logical two-dimensional mesh. This reduces the number of
nodes that we need to simulate to 383 (256 in one row J plus 128
in one column I , minus one instance of node (I; J)). This is illustrated
in Figure 8. We still need to provide correct values for
incoming messages at the boundary of the subsystem we simulate.
To do so, we modified our communication layer to "fake" all communication
except that between nodes in row J or column I . With
a balanced work distribution, node (I; J) performs a set of operations
similar to that performed by all other nodes in the system.
Therefore, the performance of node (I, J) can be extrapolated to obtain the performance of the entire system. We use the instruction-level simulator to simulate all nodes in column I and row J, but we collect and analyze trace information only for node (I, J).
Table 1: Estimates of instruction performance parameters for the Blue Gene ISA. Each type of instruction is characterized by the number of cycles it keeps the execution unit busy (column "execution") and the latency for it to complete (column "latency").
instruction type                                 execution   latency
Branches                                         -           -
Integer multiply and divide                      33          0
Floating point add, multiply and conversions     1           5
Floating point divide and square root            54          0
Floating point (other)                           -           -
Prefetching                                      -           -
Memory operation (cache hit)                     1           2
Memory operation (shared local)                  1           20
Memory operation (shared remote)                 1           27
All other operations                             1           0
[Figure 8: Strategy for performance estimation by simulating only one row (256 nodes) and one column (128 nodes) of the logical mesh; detailed information is collected only for node (I, J).]
5. EXPERIMENTAL RESULTS
As a test case for our code, we assembled a molecular system with
the human carbonic anhydrase (HCA) enzyme [9], which plays an
important role in the disease glaucoma. The HCA enzyme was
solvated in 9,000 water molecules, for a total of 32,000 atoms in
the system. We used our molecular dynamics code, running on
the simulator described above, to compute the evolution of this
system. The starting experimental coordinates are taken from the
NMR structure of carbonic anhydrase, as described in [9]. The
molecular system was prepared by taking the crystallographic configuration
of the HCA enzyme (Protein Data Bank identification
label 1AM6 - http://www.rcsb.org/pdb/) and solvating
it in a box of water of size 70 Å. The CHARMM22 [3] force
field was used to treat the interactions between the atoms in this
protein/water system. Newtonian dynamics was generated under
standard conditions, room temperature and 1 atm pressure, through
the use of the standard Verlet algorithm solver [15].
Table
2 lists the values for the various parameters used in the analytical
performance model in Section 3. The table shows the number
of forces computed, per computational time step, by a serial version
of the code. It also shows the number of forces computed by a
single node participating in a parallel execution with a 256 128
mesh. The "scaling factor" column shows the ratio of those two
values. Table 3 lists the number of instructions required for each
force computation. Some forces require the use of locks, as discussed
in Section 3.1.2. The number of instructions to acquire and
release the locks is shown in Table 3. The number of instructions
for a tree-based multi-threaded barrier, as a function of the number
of threads participating in the barrier is shown in Figure 9. The data
from
Tables
2 and 3, and Figure 9 provide the information needed
for estimating performance from the analytical performance model
in Section 3. Table 4(a) summarizes the values of the various primitive
communications parameters. They are obtained from intrinsic
characteristics of the architecture and application. Table 4(b) contains
the resulting values for the derived parameters, obtained from
the equations above.
30050150250Number of computation threads
Instruction
cycles
Barrier time
Figure
9: Cost of a barrier, as a function of the number of
threads participating.
Table
5 summarizes total instruction counts per node for various
thread configurations, as obtained from the architectural simulator.
These configurations differ in the number of threads that are allocated
to perform computations. The number of floating-point units that are allocated to the computation is $\lceil N_{thread}/8 \rceil$. Additional
threads, not included in those numbers, are allocated to perform
communication and other system services. The table lists the
total number of loads, stores, branches, integer instructions, logical
instructions, system calls, and floating-point instructions executed
by all threads on a node. In the case of floating-point instructions
we detail the number of multiply-add (FMAD) and multiply-subtract instructions. Each of those instructions performs two floating-point operations. The total number of floating-point operations is shown in row "Flops" and the total number of instructions in row "Total". We note that, as expected, the number of floating-point instructions does not change significantly with the number of threads. On the other hand, the number of loads and branches does increase significantly with the number of threads, as the threads spend more time waiting for barriers and locks.
Table 2: Number of forces computed by each node in one time step of the simulation of the HCA protein, with a 14 Å long real cutoff and a 7 Å short cutoff. We show results for a single node execution and for a 256 (columns) x 128 (rows) logical mesh.
                                                  single node   one node in 256x128 mesh
                                                  # of items    # of items   scaling factor
target atoms                                      31,747        249          128
electrostatic forces (real part, long cutoff)     36,157,720    1,297        27,878
Lennard-Jones forces (long cutoff)                36,241,834    1,313        27,602
electrostatic forces (real part, short cutoff)    4,430,701     127          34,887
Lennard-Jones forces (short cutoff)               4,440,206     127          34,962
k-factors                                         8,139         23           354
sines and cosines in k-space                      12,652,290    494          25,612
bonds                                             22,592        259          87
bends                                             16,753        456          37
torsions                                          11,835        748          -
Table 3: Measured parameters for the computation of various forces (# of instructions: force, locks, total).
parameter     force    locks   total   description
T_move        50       -       50      compute new position and velocity of an atom
T_bond        -        -       -       computation of a bond force (2 atoms)
T_bend        250      90      340     computation of a bend force (3 atoms)
T_torsion     -        -       -       computation of a torsion force (4 atoms)
T_LJ          -        -       -       computation of a Lennard-Jones force
T_ES          -        -       -       computation of a real-part electrostatic force
T_eta         2,000    -       2,000   computation of one atom contribution to k-factor
T_redux       500      -       500     computation of one reduction step for k-factor
T_k-factor    1,000    -       1,000   final stage of computing k-factors
T_k-force     3,000    -       3,000   computation of Fourier-space electrostatic force
Table
6 identifies the major sources of overhead. We note that most
of the additional instructions executed in multithreaded runs belong to
two operations, barrier and lock. The additional lock instructions
represent contention for locks, that increase with a larger number
of threads. The increase in number of instructions spent in barriers
has two sources. First, as the number of threads increases, the total
time through a barrier increases. This is illustrated in Figure 9. In
addition, a larger number of threads typically makes load balancing
more difficult, and the faster threads have to wait more time
in the barrier for the slower threads. The "total overhead" row in
Table
6 is equal to the difference between the number of instructions
for multithreaded and single-threaded execution. The effect
on overall performance achieved if the barrier overhead and/or the
lock overhead is avoided is discussed in Section 6.
Table
7 summarizes additional performance results for all the configurations
(different numbers of computational threads) we tested.
For each configuration we list the number of instructions/thread for
each short step, intermediate step, and long step (See Figure 4).
We show the number of instructions/thread for the k-factor and k-
force components of a long step. We also show the total number of
instructions/thread and computation cycles per time step, as determined
by the architectural simulator and trace analyzer. The CPI
(clocks per instruction) is computed as the ratio of those two last
numbers. The CPF (clocks per floating-point instruction) is a measure
of the average number of clocks per floating-point instruction,
from the perspective of the floating-point units. We compute it as
$$\mathrm{CPF} = \frac{N_{cycles}\,\lceil N_{thread}/8 \rceil}{N_{float}},$$
where $N_{float}$ is the total number of floating-point instructions, $N_{cycles}$ is the number of computation cycles, and $\lceil N_{thread}/8 \rceil$ is the number of floating-point units utilized (8 threads share one floating-point unit).
The number of machine cycles for inter-node communication in
Table
7 is obtained from Equation (25), and it is independent of
the number of threads. The total number of cycles per time step
is obtained by adding computation and communication cycles. We
compute the multithreaded speedup as the ratio of total cycles for
single- and multi-threaded execution. Finally, the efficiency is computed
as the ratio between speedup (relative to single-threaded exe-
cution) and number of threads. The low CPI/high CPF numbers for
one thread indicate good thread unit utilization, but low floating-point
unit utilization. The other configurations have eight active
threads per floating-point unit and perform roughly two times more
work, per floating-point unit, than a single thread does.
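The derived metrics of Table 7 can be computed directly from the per-node counts produced by the trace analyzer; a C sketch is given below. The struct layout is an assumption, while the formulas (CPI, CPF, speedup, efficiency) follow the text.

/* Derived performance metrics: CPI, CPF, speedup, efficiency. */
typedef struct {
    double instructions;   /* instructions per compute thread           */
    double comp_cycles;    /* computation cycles per time step          */
    double comm_cycles;    /* inter-node communication cycles (Eq. 25)  */
    double n_float;        /* floating-point instructions in the node   */
    int    n_thread;       /* compute threads in the node               */
} NodeCounts;

void derive_metrics(const NodeCounts *c, double single_thread_total_cycles,
                    double *cpi, double *cpf, double *speedup, double *eff)
{
    int    fpus  = (c->n_thread + 7) / 8;            /* ceil(threads/8) */
    double total = c->comp_cycles + c->comm_cycles;

    *cpi     = c->comp_cycles / c->instructions;
    *cpf     = c->comp_cycles * fpus / c->n_float;
    *speedup = single_thread_total_cycles / total;
    *eff     = *speedup / c->n_thread;
}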
Table 4: Summary of communication cost parameters.
(a) primitive parameters
parameter                value   description
N_source                 125     number of source atoms assigned to a column
N_target                 249     number of target atoms assigned to a row
N_k-factor               23      number of k-factors assigned to a column
N_put                    -       number of puts to nearest neighbor
T_triplet                -       machine cycles to transfer 24 bytes through the interconnect
T_complex                -       machine cycles to transfer 16 bytes through the interconnect
T_hop                    6       machine cycles to cross one node in the cellular interconnect
T_add                    -       machine cycles to complete one floating-point addition
max_i(N^i_hop_c)         20      maximum number of hops inside a column
max_i(N^i_add_c)         28      maximum number of adds inside a column
max_i(N^i_hop_r)         48      maximum number of hops inside a row
max_i(N^i_add_r)         38      maximum number of adds inside a row
(b) derived parameters
parameter                    machine cycles   description
T^latency_positions          240              latency to broadcast first position
T^latency_k-factors redux    464              latency to reduce first k-factor
T^latency_forces             880              latency to reduce first force
T^latency_put                -                latency to memory of nearest neighbor
T_positions                  1,740            total time to broadcast positions
T_k-factors redux            648              total time to reduce k-factors
T_forces                     3,838            total time to reduce forces
T_put                        186              total time to put positions in nearest neighbor
T^m_l                        6,256            communication time for long step
T^m_s                        186              communication time for short step
T^m                          -                upper bound on exposed communication time per time step
The curves in Figures 10 and 11 indicate the number of instruc-
tions/thread derived from the analytical performance model for the
execution of different components of the molecular dynamics ap-
plication. The markers in the figures indicate measurements made
on the simulator. The values plotted are accumulated, that is, the
value of each component is added to the previous components.
Therefore, the top curve of Figure 10 also represents the total number
of instruction cycles for a computational time step. The top
curve of Figure 11 represents the total number of instruction cycle
for k-space computation. Figure 10 shows the contribution of the
short, intermediate, reciprocal space (k-space) and finally the long
range (real part) electrostatic and Lennard-Jones forces to the total
computational time step. Figure 11 shows the contribution of
k-factor and k-force components to the total k-space computation.
There is a very good fit between analysis and simulation, indicating
that the analytical models do indeed capture the behavior of the
application.
The time charts at the bottom of Figure 4 summarize the computational
performance of the molecular dynamics application. The
first two time lines show the number of instructions/thread for a
250-thread execution. The entire computational time step takes approximately
39,000 instructions/thread. Each short step takes 1,700
instructions. The intermediate step takes 5,000 instructions. The
long step takes 19,000 instructions. The next time line summarizes
communication behavior, as obtained from the simulator, showing
the total number of bytes transmitted by the simulated node along
each of the axes of the physical three-dimensional mesh. We note
that much more data is transmitted along the X and Z axes than
along the Y axis. This is a result of our particular embedding of
the two-dimensional logical mesh into the three-dimensional physical
mesh. The bottom line shows the total number of cycles and
the estimated time for one iteration step.
Results from the cache simulator are shown in Figure 12. We plot
the average cache miss rate for different cache sizes and set associativity
values, for a configuration with 250 compute threads. The
results show that our probabilistic model (90% data cache hit rate)
is valid for 4- and 8-way set-associative caches of size 8 KB, and
for any associativity with a 16 KB or larger cache.
The results of this section indicate that a molecular dynamic simulation
for 32,000 atoms can be run on a full Petaflop cellular
machine with good performance. It is possible to exploit 32,768
nodes, each with 250 threads or more, and run a full computational
time step (Figure 4) in 375 μs (187,000 cycles at 2 ns/cycle).
This code runs at 0.87 Petaop/s, and 0.14 Petaflop/s. However, we
should state that the results presented here are only approximate, as
the simulator is not a detailed cycle level simulator. This was not
feasible, both because of the slow performance of such a simulator
and because of the lack of detailed logic design for our machine.
Rather, the trace analyzer uses estimated information on the depth
of various pipelines and uses a queuing model for congestion at key
shared resources.
Table 5: Instruction counts for the sample node (I, J).
instruction class      1 thread    50 threads   100 threads   150 threads   180 threads   200 threads   250 threads
Loads                  932,617     1,380,652    2,013,751     2,367,753     3,000,583     3,253,284     3,277,406
Stores                 314,545     350,290      386,740       423,190       445,060       459,640       495,778
Branches               392,998     810,678      1,412,827     1,735,879     2,350,139     2,590,460     2,584,124
Integer ops            779,227     815,525      852,525       889,452       911,562       926,302       962,535
Logical ops            910,489     1,006,189    1,080,147     1,164,083     1,208,168     1,247,712     1,334,017
System                 46,835      51,686       56,636        61,586        64,556        66,536        71,486
Floating-point ops     1,206,253   1,211,545    1,216,945     1,222,345     1,225,585     1,227,745     1,232,891
FMAD                   288,217     288,805      289,405       290,005       290,365       290,605       291,169
Flops                  1,537,291   1,543,171    1,549,171     1,555,171     1,558,771     1,561,171     1,566,881
Total                  4,582,964   5,626,565    7,019,571     7,864,288     9,205,653     9,771,679     9,958,237
Floating-point instructions (% of total)   26%   22%   17%   15%   13%   13%   12%
Table 6: Overhead instructions for sample node (I, J).
function          1 thread   50 threads   100 threads   150 threads   180 threads   200 threads   250 threads
barrier           -          -            -             -             -             -             -
lock              52,874     190,280      274,926       347,642       363,564       391,848       446,194
barrier+lock      52,874     715,610      1,755,842     2,251,036     3,391,990     3,800,344     3,657,628
total overhead    0          1,043,601    2,436,607     3,281,324     4,622,689     5,188,715     5,375,273
[Figure 10: Performance of components of the molecular dynamics code (instruction cycles per thread vs. number of computation threads) for the 32,000-atom problem on a 256x128 mesh: short, intermediate, k-space, and long (total) components. The solid lines represent values from the analytical model; the markers are measurements from simulation.]
We expect that changes in software, algorithms, and mathematical
methods will significantly further improve the performance of the
code we simulated. On the other hand, we can also expect many
surprises and challenges as we proceed from simulations to an actual system.
6. ARCHITECTURAL IMPACT
Since we completed the work described in this paper, the Blue Gene
architecture has evolved. Some changes were motivated by silicon
real estate constraints. For example, the number of threads
per floating-point unit was reduced from 8 to 4, for a total of 128
threads/node. The memory was also reduced from 16 MB to 8
MB/node. Other important changes, however, were motivated by the results from our performance evaluation work. We discuss some of those changes.
[Figure 11: Performance of components of the k-space computation (instruction cycles per thread vs. number of computation threads) for the 32,000-atom problem on a 256x128 mesh: k-factor, k-force, and k-space (total). The solid lines represent values from the analytical model; the markers are measurements from simulation.]
Table
3 and Figure 9 show the direct cost of locks and barriers,
respectively. To evaluate the impact of those in the final performance
of the molecular dynamics code, we measured what would
be the performance if the code were free of locks and barriers. Ob-
viously, such code would not execute correctly. The results are
shown in Figure 13. It shows that for a large number of threads
(250), the instruction count could be reduced by up to 10,000, or
25%, if there were no locks or barriers. As a result, the Blue Gene
architecture now includes fast hardware barriers and atomic memory updates. The hardware barriers allow synchronization of all threads in a node in less than 10 machine cycles.
Table 7: Instruction and cycle counts for one compute thread in sample node (I, J). CPI is the average number of machine cycles per instruction. CPF is the number of machine cycles per floating-point instruction.
                                  1 thread     50 threads   100 threads   150 threads   180 threads   200 threads   250 threads
short                             -            -            -             -             -             -             -
intermediate                      656,383      15,191       9,226         6,533         6,409         5,683         4,703
long                              2,460,789    56,954       34,867        26,061        25,221        23,713        18,791
k-factor                          332,654      11,704       9,368         9,002         8,625         8,608         7,826
k-force                           661,631      14,024       8,491         5,738         5,738         5,738         2,987
instructions/thread               4,582,740    111,975      69,585        51,790        50,502        48,228        39,162
computation cycles                10,847,196   -            -             -             -             -             -
CPI                               2.37         5.41         4.99          4.80          4.53          4.44          4.44
inter-node communication cycles   13,352       13,352       13,352        13,352        13,352        13,352        13,352
total cycles                      10,860,548   620,882      360,149       262,354       242,020       227,505       187,248
efficiency                        1.00         0.35         -             -             -             -             -
[Figure 12: Data cache simulation results (cache miss rate vs. cache size and associativity, 32 bytes/line) for the traces collected from executing our molecular dynamics code.]
Another architectural enhancement motivated by our work was the
B-switch. As shown in Table 7, the overall impact of communication
cycles is small, less than 10% of the total. However, to achieve
that goal, it is necessary to perform the reductions and broadcasts
with a minimal software overhead. That motivated the development
of the B-switch: a microprogrammed data streaming switch
that operates at double word (64-bit) level. At each cycle, the B-
switch can route data from any input to any combination of outputs.
It can also perform floating-point operations on the streams and direct
streams of data to/from the memory in the node. A diagram of
the B-switch is shown in Figure 14.
7. CONCLUSIONS
We have estimated the execution of a molecular dynamics code for a system of 32,000 atoms on a full Petaflop cellular system, on the scale envisioned by IBM's Blue Gene project.
[Figure 13: Impact of barriers and locks on the performance of the molecular dynamics code (instruction cycles per thread vs. number of computation threads) for the 32,000-atom problem on a 256x128 mesh. The different lines represent different configurations: locks and barriers, no locks with barriers, locks with no barriers, and no locks and no barriers.]
A sequential version
of the application executed at 140 s/time step in an 800 Mflop/s
workstation. The parallel version executed at 375 μs/time step in
our Petaflop machine. This corresponds to a parallel speedup of
368,000 in a machine 1,250,000 times faster, for an efficiency of
30%. This result is in agreement with the estimates derived in [11].
Our exercise demonstrates that a class of molecular dynamics applications
exhibits enough parallelism to exploit millions of concurrent
threads of execution with reasonable efficiency. It demon-
strates, in broad lines, the validity of a massively parallel cellular
system design as one approach to achieving one Petaflop of computing
power. It also provides us with a clear understanding of a
representative molecular dynamics application code, for which we
now have an accurate performance model.
[Figure 14: The B-switch is a device for streaming data from the input ports (and memory) to the output ports (and memory). It contains input and output FIFOs for the six channels, memory FIFOs, a register file, and control logic, and incorporates an FPU to perform arithmetic and logic operations directly on the data stream.]
We still have much work to do to refine and improve the results
presented here: the simulators have to be upgraded to represent a
more detailed hardware design; the performance models need to be
upgraded and validated against cycle faithful simulations; the communication
analysis needs to reflect overlap between computation
and communication; node failures in a machine of this scale have
to be addressed; and algorithms and methods will continue to be
improved.
8.
--R
Computer Simulation of Liquids.
The Blue Gene project.
A program for macromolecular energy minimization
Die Berechnung optischer und elektrostatischer Gitterpotentiale.
FlexRAM: Toward an advanced intelligent memory system.
CM3D CMM MD Code.
A case for intelligent RAM: IRAM.
Fast parallel algorithms for short-range molecular dynamics
Novel binding mode of hydroxamate inhibitors to human carbonic anhydrase II.
Hybrid technology multithreaded.
Parallel molecular dynamics: Implications for massively parallel machines.
Toward a cost-effective DSM organization that exploits processor-memory integration
Molecular dynamics in systems with multiple time scales.
Reversible multiple time scale molecular dynamics.
Computer experiments on classical fluids.
--TR
Computer simulation of liquids
Fast parallel algorithms for short-range molecular dynamics
Parallel molecular dynamics
--CTR
Jeffrey S. Vetter , Frank Mueller, Communication characteristics of large-scale scientific applications for contemporary cluster architectures, Journal of Parallel and Distributed Computing, v.63 n.9, p.853-865, September
Jeffrey Vetter, Dynamic statistical profiling of communication activity in distributed applications, ACM SIGMETRICS Performance Evaluation Review, v.30 n.1, June 2002
Ching-Tien Ho , Larry Stockmeyer, A New Approach to Fault-Tolerant Wormhole Routing for Mesh-Connected Parallel Computers, IEEE Transactions on Computers, v.53 n.4, p.427-439, April 2004
George Almsi , Clin Cacaval , Jos G. Castaos , Monty Denneau , Derek Lieber , Jos E. Moreira , Henry S. Warren, Jr., Dissecting Cyclops: a detailed analysis of a multithreaded architecture, ACM SIGARCH Computer Architecture News, v.31 n.1, March
S. Kumar , C. Huang , G. Zheng , E. Bohm , A. Bhatele , J. C. Phillips , H. Yu , L. V. Kal, Scalable molecular dynamics with NAMD on the IBM Blue Gene/L system, IBM Journal of Research and Development, v.52 n.1, p.177-188, January 2008 | massively parallel computing;performance evaluation;blue gene;cellular architecture;molecular dynamics |
377938 | Analysis of the Xedni Calculus Attack. | The xedni calculus attack on the elliptic curve discrete logarithm problem (ECDLP) involves lifting points from the finite field F_p to the rational numbers Q and then constructing an elliptic curve over Q that passes through them. If the lifted points are linearly dependent, then the ECDLP is solved. Our purpose is to analyze the practicality of this algorithm. We find that asymptotically the algorithm is virtually certain to fail, because of an absolute bound on the size of the coefficients of a relation satisfied by the lifted points. Moreover, even for smaller values of p experiments show that the odds against finding a suitable lifting are prohibitively high. | Introduction
At the Second Elliptic Curve Cryptography Workshop (University of Waterloo, September 14-16,
1998), Joseph Silverman announced a new attack on the elliptic curve discrete logarithm problem
(ECDLP) over a prime field F p . He called his method "xedni calculus" because it "stands index
calculus on its head." 1
1 At about the same time, some similar ideas were developed independently in Korea [4].
Recall that the ECDLP is the problem, given two points P and Q on an elliptic curve over F_p, of finding an integer w such that Q = wP. Very briefly, Silverman's idea was to take r random linear combinations of the two points (most likely 4, 5 or 6), and
then consider points P i with rational coordinates that reduce modulo p to these r points and
elliptic curves E over the rational number field Q that pass through all of the P i and reduce
mod p to the original curve over F p . If those "lifted" points P i are linearly dependent, then the
ECDLP is solved. The probability of dependence is almost certainly very low, but Silverman
had an idea of how to increase this probability, possibly by a dramatic amount. Namely, he
imposes on the P i and E a set of auxiliary conditions modulo l for several small primes l. These
conditions guarantee that the elliptic curves will have fewer-than-expected points modulo l, and
this presumably decreases the likelihood that the r Q-points P i will be independent. (More
details will be given in x3 below.)
Silverman's algorithm, which had been circulating in manuscript form for about two weeks
before the conference, created a stir for several reasons. In the first place, this was the first time in
about seven years that a serious attack had been proposed on an important class of elliptic curve
cryptosystems. In the second place, Silverman's approach involved some sophisticated ideas
of arithmetic algebraic geometry - most notably, the heuristics of the Birch-Swinnerton-Dyer
Conjecture - that had never before had any practical application. In the third place, because
of the subtlety of the mathematics being used, even people who had computational experience
with elliptic curves were completely baffled in their initial attempts to estimate the running time
of the xedni calculus. No one, for example, could say with absolute certainty that it would not
turn out to give a polynomial-time algorithm!
If it were practical, the xedni calculus would not only break elliptic curve cryptosystems
(ECC). As Koblitz showed, it can easily be modified to attack (1) the Digital Signature Standard
(i.e., the discrete logarithm problem in the multiplicative group of F p ), and (2) RSA (i.e., the
integer factorization problem). Thus, essentially all public-key cryptography that's in widespread
use in the real world was threatened.
Of course, most people, including Silverman himself, thought that it was highly unlikely
that the algorithm would turn out to be so efficient that it would render ECC, DSS, and RSA
insecure. However, it is not enough to have a "gut feeling" about such matters. One needs to find
solid mathematical arguments that enable one to evaluate the efficiency of the xedni calculus.
That is the purpose of this paper.
Acknowledgments
. The authors wish to thank the faculty and staff of the Centre for Applied
Cryptographic Research at the University of Waterloo for their help during the work reported
on in this paper. We are especially grateful to Prof. Alfred Menezes for his support and encouragement
Background
2.1 The Hasse-Weil L-Function
Let E be an elliptic curve defined over the field Q of rational numbers, and let N_l denote the number of points on the reduction of E modulo l. 2 For each l we have the associated quadratic polynomial
  1 - a_l T + l T^2 = (1 - α_l T)(1 - ᾱ_l T),  where a_l = l + 1 - N_l,
whose value at T = 1 is N_l; this polynomial is the numerator of the zeta-function of E mod l. By Hasse's Theorem, α_l is a complex number of absolute value √l.
The Hasse-Weil L-function of the curve E is defined by analogy with the Riemann zeta-function ζ(s) = ∏_{primes l} (1 - l^{-s})^{-1}. Namely, we take L(E, s) to be the product over l of the following "Euler factor":
  (1 - α_l l^{-s})^{-1} (1 - ᾱ_l l^{-s})^{-1} = (1 - a_l l^{-s} + l^{1-2s})^{-1}.
It is easy to verify that the infinite product converges for Re(s) > 3/2 (just as the Euler product for the Riemann zeta-function converges for Re(s) > 1). By the "critical value" we mean the value of L(E, s) at s = 1. Just as one has to analytically continue the Riemann zeta-function a distance 1/2 to the left in order to reach the "critical line," similarly one has to analytically continue L(E, s) a distance 1/2 to the left in order to reach the critical value.
However, analytic continuation of L(E, s) is not nearly so simple as in the case of ζ(s); and, in fact, it has been proven only in the case when E is "modular" in the following sense. If we expand the Euler product, we can write L(E, s) in the form Σ_n a_n · n^{-s}. 3 We now introduce a new complex variable z, and in each term we replace n^{-s} by e^{2πinz}. The result is a Fourier series Σ_n a_n e^{2πinz} that converges in the complex upper half-plane. We say that E is "modular" if this Fourier series is a modular form, that is, if it satisfies a simple transformation rule when z is replaced by (az + b)/(cz + d) for any 2×2 integer matrix with rows (a, b) and (c, d) of determinant 1 with c ≡ 0 (mod N), where N is the "conductor" of the curve E; it is closely related to the curve's discriminant D. 4 In order to know unconditionally that analytic continuation is possible and the critical value L(E, 1) is defined, we need the curve E to be modular.
2 We're assuming that E has "good reduction" at l, i.e., that l does not divide the denominators of the coefficients or the discriminant D of the curve. For brevity, we shall not discuss the modifications needed for the "bad" primes l.
3 If n = l is prime, then a_n is our earlier a_l; for composite n it is not hard to express a_n in terms of a_l for l | n.
4 In particular, N | D, and both N and D have the same prime divisors.
2.2 The Taniyama Conjecture
The Taniyama Conjecture is the assertion that all elliptic curves E over Q are modular. One
reason for its importance is that it guarantees that the Hasse-Weil L-function of E can be
analytically continued, and its behavior near s = 1 can be studied.
It is for a different reason that most people have heard of this conjecture, namely, its connection
to Fermat's Last Theorem. In 1985 Gerhard Frey suggested that if A^p + B^p = C^p were a counterexample to Fermat's Last Theorem, then the elliptic curve Y^2 = X(X - A^p)(X + B^p) would have a very surprising property. Its discriminant would be 16(ABC)^{2p}, so every prime factor in this discriminant would occur to a very large power. Frey thought that
it would then have to violate the Taniyama Conjecture. K. Ribet was able to prove that Frey's
hunch was correct [24]; then, working intensively for many years, A. Wiles (partly in joint work
with R. Taylor) [37, 36] proved that no such curve can violate the Taniyama Conjecture, and
hence there can be no counterexample to Fermat's Last Theorem.
Wiles proved the Taniyama Conjecture for a broad class of curves - the "semi-stable" ones,
i.e., the ones whose conductor N is squarefree - but not for all curves. What he proved was
enough for Fermat's Last Theorem. But for a small class of curves that are not semi-stable it
is still a conjecture rather than a theorem that the Hasse-Weil L-function can be analytically
continued.
2.3 The Conjecture of Birch and Swinnerton-Dyer
As before, let E be an elliptic curve defined over Q, and let N_l denote the number of mod-l points. As l increases, suppose that we want to get an idea of whether or not N_l tends to be toward the right end of the Hasse interval [l + 1 - 2√l, l + 1 + 2√l], that is, whether or not there tend to be more-than-average points on the curve. We might expect that if our original curve over Q has infinitely many points - that is, if its rank r is positive - then these rational points would be a plentiful source of mod-l points, and N_l would tend to be large; whereas if r = 0, then N_l would straddle both sides of l + 1. This is the intuitive idea of the (weak)
Birch-Swinnerton-Dyer Conjecture [1, 2, 3].
To measure the relative size of N_l and l as l varies, let us form the product ∏_l (l / N_l). Because N_l = l - a_l + 1, we can write this as
  ∏_l (l / N_l) = ∏_l (1 - a_l l^{-1} + l^{-1})^{-1},
which is formally equal to the value at s = 1 of the Euler product for L(E, s). We say "formally,"
because that product diverges, and the critical value is found by analytic continuation, not by
evaluating an infinite product.
Nevertheless, let us suppose that it makes sense to talk about this infinite product as if
it converged. One might expect that it would converge to zero if N l has a tendency to be
significantly larger than l, and would converge to a nonzero value if N l is equally likely to
be above or below l. And, indeed, the Birch-Swinnerton-Dyer Conjecture states that L(E; s)
vanishes at s = 1 if and only if the rank r of the group of points of E over Q is greater than zero, and that, moreover, its order of vanishing at s = 1 is equal to r. The conjecture further says that the leading coefficient in the Taylor expansion at s = 1 can be expressed in terms of certain
number-theoretic invariants of E. Starting in 1977, a series of important partial results have
been proved in support of this fundamental conjecture (see [5, 6, 25]), but in its most general
form it remains a very difficult unsolved problem.
2.4 Heights
Let E be an elliptic curve (in Weierstrass form) over the field Q of rational numbers. Let
P = (x, y) be a rational point on E (not the point at infinity). The logarithmic height of P is defined by the formula h(P) = log max(|a|, |b|), where x = a/b is written as a fraction in lowest
terms. The logarithmic height is closely related to the point's size in the computer-science sense
(i.e., its bit-length). 5
It can be shown that, if P is a point of infinite order, then h(nP ) grows quadratically with
n. That is, if you write out a list of the multiples of P , one on each line, the lengths of the lines
will increase proportionally to n 2 and so form a parabola. (For a picture of this in the case of
the elliptic curve Y and the point page 143 of [11].)
The logarithmic height, which has a roughly quadratic behavior, can be modified (this was
done by Néron [23] and later simplified by Tate) in such a way that the resulting canonical logarithmic height ĥ(P) is precisely a quadratic form. Namely, define
  ĥ(P) = (1/2) lim_{n→∞} h(nP) / n^2.
The values of ĥ(P) and (1/2) h(P) are close to one another - in fact, it can be shown (see p. 229 of [29]) that their difference is bounded by a constant depending only on E - but it is the function ĥ rather than h that has the nicer properties.
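To make the two height functions concrete, here is a small Python sketch (not taken from the paper) that computes the naive logarithmic height and approximates the canonical height by repeated doubling, using exact rational arithmetic on a short-Weierstrass curve y^2 = x^3 + ax + b. The curve and point below are hypothetical, chosen only for illustration, and the normalization matches the limit formula above.

```python
from fractions import Fraction
from math import log

def add(P, Q, a):
    """Add affine points on y^2 = x^3 + a*x + b over Q; None denotes the point at infinity."""
    if P is None:
        return Q
    if Q is None:
        return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and y1 == -y2:
        return None                      # P + (-P) = O
    lam = (3 * x1 * x1 + a) / (2 * y1) if P == Q else (y2 - y1) / (x2 - x1)
    x3 = lam * lam - x1 - x2
    y3 = lam * (x1 - x3) - y1
    return (x3, y3)

def naive_height(P):
    """h(P) = log max(|a|, |b|), where x = a/b in lowest terms."""
    x = P[0]
    return log(max(abs(x.numerator), abs(x.denominator)))

def canonical_height(P, a, k=8):
    """Approximate (1/2) * lim h(nP)/n^2 using n = 2^k, i.e. k successive doublings."""
    Q = P
    for _ in range(k):
        Q = add(Q, Q, a)
    return 0.5 * naive_height(Q) / 4**k

# Hypothetical example: P = (1, 2) lies on y^2 = x^3 - 2x + 5 and has infinite order.
a = Fraction(-2)
P = (Fraction(1), Fraction(2))
print(naive_height(P), canonical_height(P, a))
```

Increasing k tightens the approximation, since the error in this limit decreases like 1/4^k.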
Suppose that the group E(Q) has rank r, i.e., the quotient group E(Q)/E(Q)_tors is isomorphic to Z^r. Let P_1, ..., P_r be a set of generators. The formula
  ⟨P, Q⟩ = (1/2) (ĥ(P + Q) - ĥ(P) - ĥ(Q))
defines a positive definite inner product on the r-dimensional real vector space V obtained from E(Q)/E(Q)_tors by formally allowing the P_i to have real (rather than just integer) coefficients. This vector space can also be defined using the tensor product notation: V = E(Q) ⊗ R. Note that E(Q)/E(Q)_tors is a full lattice in V.
The regulator of E is defined as follows:
  R = det( ⟨P_i, P_j⟩ )_{1 ≤ i, j ≤ r}.
It is the square of the volume of a fundamental parallelepiped of the lattice E(Q)/E(Q)_tors with respect to our inner product. The real number R is an important constant attached to the elliptic curve. In the Birch-Swinnerton-Dyer Conjecture, it appears as one of the factors in the first non-zero Taylor coefficient of the expansion of L(E, s) at s = 1.
5 It would not make much difference if, instead, the logarithmic height were defined as log max(|a|, |b|, |c|, |d|), where x = a/b and y = c/d are written as fractions in lowest terms, or even as log_2 |abcd|, which really is (essentially) the number of bits needed to write down the point.
3 Summary of the Algorithm
3.1 Simplified Version
We want to find an integer w such that Q = wP. Working in projective coordinates, we choose two points P̃ and Q̃ with integer coordinates whose residues modulo p are our points P and Q; we also choose an elliptic curve E(Q) that passes through P̃ and Q̃ and that reduces modulo p to the curve E(F_p).
Now suppose that P̃ and Q̃ turn out to be dependent in E(Q), that is,
  n_1 P̃ + n_2 Q̃ = 0
for some integers n_1, n_2 not both zero, in which case n_1 and n_2 can easily be found. If that happens, working modulo p we get n_1 P + n_2 Q = 0 in E(F_p), i.e.,
  n_1 + n_2 w ≡ 0
modulo the order of P in E(F_p); from this we can easily find w.
However, in general the probability that P̃ and Q̃ are dependent is very, very small. Silverman's idea is to increase this probability by imposing some conditions of the following type: N_l is as small as possible - that is, the reduction modulo l of E(Q) has relatively few points - for all primes l up to some small bound. This idea was suggested by J. F. Mestre's success in obtaining curves of higher than expected rank by imposing conditions in the opposite direction, i.e., forcing the mod-l reductions to have as many points as possible for the small primes l.
Both strategies (for obtaining either higher-than-expected or lower-than-expected rank) are based
on the heuristic argument for the conjecture of Birch and Swinnerton-Dyer (see §2.3), which says
that the rank of E(Q) is equal to the order of vanishing of L(E; s) at s = 1. Mestre's method
is to force the first several terms in the formal infinite product for L(E; 1) to be as small as
possible, whereas Silverman wants them to be as large as possible.
3.2 The Algorithm
We now describe the steps in the xedni algorithm [33].
Step 1. Choose an integer r with 2 ≤ r ≤ 9 (most likely 4 ≤ r ≤ 6), and integers L_0 ≥ 7 and L_1 ≥ L_0, and set
  M = ∏_{l prime, L_0 ≤ l ≤ L_1} l.
Also, decide whether you will be working with elliptic curves in general cubic form or in Weierstrass form. In the first case, for any r-tuple of projective points P_i = (x_i, y_i, z_i), let B(P_1, ..., P_r) denote the (r × 10)-matrix whose i-th row consists of the ten degree-3 monomials
  (x_i^3, x_i^2 y_i, x_i y_i^2, y_i^3, x_i^2 z_i, x_i y_i z_i, y_i^2 z_i, x_i z_i^2, y_i z_i^2, z_i^3).
Then the r points lie on a given cubic curve with coefficients u = (u_1, ..., u_{10}) if and only if the column-vector u is in the kernel of the matrix B(P_1, ..., P_r). If, on the other hand, the elliptic curve is given in the Weierstrass form 6
  a_0' y^2 z + a_1 x y z + a_3 y z^2 = a_0 x^3 + a_2 x^2 z + a_4 x z^2 + a_6 z^3,
then we take B(P_1, ..., P_r) to be the (r × 7)-matrix whose i-th row is
  (y_i^2 z_i, x_i y_i z_i, y_i z_i^2, x_i^3, x_i^2 z_i, x_i z_i^2, z_i^3).
In this case the r points lie on the curve if and only if the vector (a_0', a_1, a_3, -a_0, -a_2, -a_4, -a_6) is in the kernel of B(P_1, ..., P_r).
6 Since it is customary to write a_i for the coefficients of the general Weierstrass equation, we shall also adhere to this notation and hope that it does not lead to confusion with the use of a_l (l prime) to denote l + 1 - N_l, which is also customary. Also note that usually one takes a_0 = a_0' = 1; however, we want integer rather than rational coefficients, so it is useful to introduce a_0 and a_0'.
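As an illustration of the role of the matrix B(P_1, ..., P_r), here is a small Python sketch (not from the paper; the sample points are made up) that builds the (r × 10) matrix of degree-3 monomials for projective points with integer coordinates and reads off an integer coefficient vector of a cubic through all of them from the kernel. A real implementation would use exact integer lattice techniques (Hermite normal form and lattice reduction, as in Steps 7 and 9 below); Sympy's rational nullspace stands in for that here.

```python
from functools import reduce
from math import gcd
from sympy import Matrix

def monomials(x, y, z):
    # the ten monomials of degree 3 in the projective coordinates
    return [x**3, x**2*y, x*y**2, y**3, x**2*z, x*y*z, y**2*z, x*z**2, y*z**2, z**3]

def cubic_through(points):
    """Return integer coefficients u with B(P_1,...,P_r) u = 0, or None."""
    B = Matrix([monomials(*P) for P in points])
    kernel = B.nullspace()                  # rational basis of ker B
    if not kernel:
        return None
    v = kernel[0]
    denom = reduce(lambda a, b: a * b // gcd(a, b), [int(c.q) for c in v], 1)
    return [int(c * denom) for c in v]      # clear denominators

# Hypothetical sample points (not the paper's data):
pts = [(1, 0, 0), (0, 1, 0), (0, 0, 1), (1, 1, 1), (2, 3, 1)]
u = cubic_through(pts)
for P in pts:                               # each point satisfies the cubic
    assert sum(c * m for c, m in zip(u, monomials(*P))) == 0
print(u)
```

Finding some kernel vector is the easy part; the algorithm additionally needs the resulting cubic to be nonsingular, to have small coefficients, and to have the prescribed reductions modulo p and modulo the primes l | M.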
Step 2. For each l | M, choose r points P_{l,i} in the projective plane over F_l such that the matrix B(P_{l,1}, ..., P_{l,r}) has rank r. Let P_{M,i} denote a point modulo M that reduces to P_{l,i} modulo l for each l | M; such a point can be found by the Chinese Remainder Theorem. If r ≥ 4 and you're working with the general form of a cubic (rather than Weierstrass form), for convenience and slightly greater efficiency choose the first four points to be (1, 0, 0), (0, 1, 0), (0, 0, 1), and (1, 1, 1).
Also choose a mod-M coefficient vector u_M = (u_{M,1}, ..., u_{M,10}) (or, if using Weierstrass form, a vector (a_0', a_1, a_3, -a_0, -a_2, -a_4, -a_6)) that is in the kernel of the B-matrix for each l | M. Choose the coefficient vector so that for each l | M the resulting cubic curve is an elliptic curve (i.e., the discriminant is nonzero) with the fewest possible F_l-points:
  N_l = l + 1 - ⌊2√l⌋,
which is the smallest integer in the Hasse interval. This equality is called the "reverse-Mestre condition" at l.
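For intuition, the following small Python sketch (an illustration, not the paper's implementation) counts points on short-Weierstrass curves y^2 = x^3 + ax + b over F_l by brute force and reports the curves that attain the smallest value in the Hasse interval, i.e., the curves satisfying the reverse-Mestre condition at l:

```python
from math import isqrt

def count_points(a, b, l):
    """#E(F_l) for y^2 = x^3 + a x + b, counted naively (fine for small l)."""
    squares = {}
    for y in range(l):
        squares.setdefault(y * y % l, []).append(y)
    n = 1                                   # the point at infinity
    for x in range(l):
        n += len(squares.get((x**3 + a * x + b) % l, []))
    return n

def reverse_mestre_curves(l):
    """All (a, b) with nonzero discriminant whose point count hits the Hasse minimum."""
    target = l + 1 - isqrt(4 * l)           # l + 1 - floor(2*sqrt(l))
    found = []
    for a in range(l):
        for b in range(l):
            if (4 * a**3 + 27 * b**2) % l == 0:
                continue                    # singular cubic, skip
            if count_points(a, b, l) == target:
                found.append((a, b))
    return target, found

print(reverse_mestre_curves(7))   # Hasse minimum N_7 = 3
print(reverse_mestre_curves(11))  # Hasse minimum N_11 = 6
```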
Remark 1. In some circumstances it might be better to allow a weaker reverse-Mestre condition, and instead require only that N_l be close to the smallest integer in the Hasse interval (say, within 2 of it).
Remark 2. Note that the condition that B have rank r implies that the P_{l,i} must be distinct points, and hence N_l ≥ r; thus L_0 must be chosen large enough so that this inequality does not contradict the (weak) reverse-Mestre condition. For example, larger values of r force a correspondingly larger choice of L_0.
Remark 3. When constructing the P l;i and coefficient vectors for the different small primes l,
some care has to be taken so as not to inadvertently cause the lifted points in Step 6 below to
automatically be independent. In cases when N_l and N_{l'} have a common factor π, there has to be a certain compatibility between the images of the P_{l,i} in the quotient group E(F_l)/πE(F_l) and the images of the P_{l',i} in E(F_{l'})/πE(F_{l'}).
To illustrate in a simple situation, let us take r = 2, l = 13, and l' = 31, and suppose that N_13 = 7 and N_31 = 21, in accordance with the reverse-Mestre conditions. Suppose that P_{13,2} = a P_{13,1} and P_{31,2} = b P_{31,1}, where a and b are integers modulo 7 and 21, respectively. (Here we are supposing that P_{13,1} is not the point at infinity, and P_{31,1} is not a point of order 3.) Unless a ≡ b (mod 7), the lifted points are forced to be independent. To see this, suppose that we had a nontrivial relation of the form n_1 P_1 + n_2 P_2 = 0. Since our lifted curve will almost certainly have no torsion points (in particular, no points of order 7), we may suppose that 7 does not divide both n_1 and n_2. If we reduce this relation modulo 13 and 31, we obtain (n_1 + n_2 a) P_{13,1} = 0 and (n_1 + n_2 b) P_{31,1} = 0. Hence n_1 + n_2 a ≡ n_1 + n_2 b ≡ 0 (mod 7), and so a ≡ b (mod 7).
Remark 4. The reason for requiring that the B-matrix have rank r for each ljM is that this is
precisely the condition that is needed in order to ensure that one can find coefficients for an
elliptic curve over Q that both passes through the lifted points and reduces modulo the primes
l and p to the curves E(F l ) (for ljM) and E(F p ) that we already have (see Step 7 below). This
is proved in Appendix B of [33]. Here we shall motivate the rank-r condition for the B-matrix
by giving an example in a simpler setting.
Suppose that r = 2, and we're working with straight lines in the projective plane, rather than
elliptic curves, so that the B-matrix is just
. Let l = 3. Suppose that we have
ignored the rank-r condition and over F 3 have chosen points P
and the straight line Suppose that we have lifted the points to Q as follows:
\Gamma1). We now want to find a lifted line (1
that reduces to and that passes through P 1 and P 2 . A simple calculation
shows that this is impossible.
Step 3. Let P and Q be the points in the discrete log problem; that is, Q = wP for an unknown integer w. Choose r random integer linear combinations of the two points, say
  P_{p,i} = s_i P + t_i Q,  i = 1, ..., r.
Our entire purpose in the algorithm is to find a linear dependency n_1 P_{p,1} + ... + n_r P_{p,r} = 0 among the P_{p,i}. If we succeed, then we immediately obtain the following congruence modulo the order of the point P:
  Σ_i n_i (s_i + w t_i) ≡ 0.
From this we can almost certainly solve for w (recall that in cryptographic applications the order of P is usually a large prime).
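In code, recovering w from such a relation is a one-line modular computation. The sketch below is only an illustration with made-up names and numbers (the coefficients n_i, the multipliers s_i, t_i, and the prime order of P are assumed to be known):

```python
def solve_for_w(n, s, t, order):
    """Solve sum n_i*(s_i + w*t_i) = 0 (mod order) for w, assuming the t-part is invertible."""
    A = sum(ni * si for ni, si in zip(n, s)) % order
    B = sum(ni * ti for ni, ti in zip(n, t)) % order
    # A + w*B = 0 (mod order)  =>  w = -A * B^(-1) (mod order)
    return (-A * pow(B, -1, order)) % order

# Toy check with a fabricated relation: order = 1009 (prime), secret w = 123.
order, w = 1009, 123
s, t = [5, 7, 11], [2, 3, 9]
v = [(si + w * ti) % order for si, ti in zip(s, t)]
# Pick n_1, n_2 freely and solve for n_3 so that the relation holds mod order.
n = [4, 6, (-(4 * v[0] + 6 * v[1]) * pow(v[2], -1, order)) % order]
assert solve_for_w(n, s, t, order) == w
```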
Step 4. If r ≥ 4, and if you want to look for a lifted elliptic curve in general cubic form (so that you have more coefficients to work with), then make a linear change of variables in the projective plane over F_p so that the first four points become (1, 0, 0), (0, 1, 0), (0, 0, 1), and (1, 1, 1). In that case we let u_{p,i} denote the coefficients of the resulting equation for E(F_p).
Step 5. Use the Chinese Remainder Theorem to find coefficients u'_i modulo Mp that reduce to u_{p,i} modulo p and to u_{M,i} modulo M. (Do the analogous thing with the a_i coefficients if you are working in Weierstrass form.)
Step 6. Lift the r points to the projective plane over the rational numbers. That is, for i = 1, ..., r, choose points P_i with integer coordinates that reduce to P_{p,i} modulo p and to P_{M,i} modulo M. If r ≥ 4 and you are working with the general form of a cubic, then take the first four points to be (1, 0, 0), (0, 1, 0), (0, 0, 1), and (1, 1, 1).
Step 7. Using the r points P_i from Step 6, form the matrix B(P_1, ..., P_r). Find an integer vector u in the kernel of this matrix that is congruent to the vector u' of Step 5 modulo Mp (or an analogous vector of a_i's if you've been working with curves in Weierstrass form). The rank-r condition on the mod-l B-matrices ensures that we can do this. Try to find u so that the u_i are as small as possible.
Step 8. If you've been working with the general equation of a cubic, make a linear change of
variables to bring it into Weierstrass form.
Steps 9-10 (optional). Modify the solution u in Step 7 by adding or subtracting vectors of the form Mp·v, where the vectors v are chosen from a basis of solutions to B(P_1, ..., P_r)v = 0 in integer coordinates. Choose a new solution u such that the discriminant of the curve with coefficients u_i is as small as possible. (Go through the analogous procedure with the a_i if you've
been working with curves in Weierstrass form).
Also, let L be a constant of order about 200. For each curve compute the sum
  Σ_{l ≤ L, l ∤ D} (a_l log l) / l.
If this sum is smaller than a pre-determined quantity (that is arrived at experimentally), discard
the curve and start over again with Step 2 or Step 3. Otherwise, continue to Step 11.
This step is based on an analytic formula for the rank of a modular curve that was proved by
Mestre [20]. (Notice that his formula can be used because of the Taniyama Conjecture, which
says that all elliptic curves over Q are modular.) In Mestre's formula the above sum appears as
a crucial term. Heuristically, it is plausible that the more negative this sum is, the more likely
the curve is to have large rank. Since we want smaller-than-expected rank, we might want to
throw out curves for which the sum is highly negative.
Step 11. Finally, test the points for dependence. There are at least two efficient methods of
doing this (see [33]). If they are independent, return to Step 2 or Step 3. If they are dependent,
it is not hard to find the coefficients of a relation. As explained in Step 3, it is then very easy to
find the discrete logarithm w. This completes the description of the algorithm.
4 Asymptotic Failure of the Algorithm
The purpose of this section is to prove
Theorem 4.1. Under certain plausible assumptions (see the lemma below), there exists an absolute
constant C 0 such that the probability of success of the xedni algorithm in finding a discrete
logarithm on an elliptic curve over F p is less than C 0 =p.
Unfortunately, C 0 is rather large, so this result does not immediately resolve the question of
practicality of the algorithm. We address that question in the next section.
Recall the notion of the canonical logarithmic height ĥ(P) (see §2.4). Given an elliptic curve E having infinitely many rational points, let m denote the minimum of ĥ(P) for all non-torsion points P ∈ E(Q). Let D denote the discriminant of E. Then a conjecture of Lang (see p. 92 of [12] or p. 233 of [29]) states that there exists a positive absolute constant C_3 such that m ≥ C_3 log |D|. This conjecture was proved for a large class of curves in [27, 8], but it has not
yet been proved unconditionally for all curves over Q.
Lemma 4.1. Assume that log |D| ≥ C_1 max_i ĥ(P_i) for the lifted curves in the xedni algorithm, where D is the discriminant of the lifted curve, the P_i are the lifted points, ĥ is the canonical logarithmic height, and C_1 is a positive absolute constant. 7 Then, under Lang's conjecture, if the
lifted points are dependent, then they satisfy a nontrivial relation with coefficients bounded from
above by an absolute constant C 2 .
Proof. Following [34], we estimate the number of points of E(Q) - more precisely, the number of
points in the subgroup E 0 spanned by the lifted points whose canonical logarithmic
height is bounded by a constant B. Suppose that the P i are dependent, and let r
the rank of E 0 . Let T 0 denote the number of torsion points in E 0 . (In practice, almost certainly
and by a famous theorem of Mazur [16] always
0\Omega R, and let
R 0 denote the regulator of E 0 , i.e., R
r 0 are a basis for
tors . Finally, we define
To estimate N(B), one uses standard results from the geometry of numbers. According to
Theorem 7.4 of Chapter 5 of [13],
is the volume of the r 0 -dimensional unit ball:
7 Roughly speaking, this condition says that the discriminant of the lifted curve is greater than the C 1 -th power
of the maximum absolute value of the numerators and denominators of the coordinates of the lifted points, for
some absolute constant C 1 ? 0. This is a reasonable assumption, since the discriminant is a polynomial function
of the coefficients of the curve, and the coefficients tend to grow proportionally to a power of the integer projective
coordinates of the points through which the curve must pass.
It follows from Corollary 7.8 of Chapter 5 of [13] that
where, as before, m denotes the smallest positive value of - h on E(Q) (actually, we could replace
m by the smallest positive value of - h on E 0 ). If we combine these relations and denote
we obtain
Now let M denote the maximum of - h(P i
- h is a metric, the height of
any integer linear combination of the P i with coefficients n i bounded by 1
will be
chosen later) is bounded as follows:
If we substitute
2 M in our inequality for N(B), we find that the number of points
that
i.e., the number of points that satisfy the above inequality for the height, is
less than
But the number of linear combinations
very close to C r
2 . If
then there must be two different linear combinations that are equal, and so the points P i satisfy
a nontrivial linear relation with coefficients bounded by C 2 .
We now use the assumptions in the lemma. By Lang's conjecture, m ≥ C_3 log |D|. Since we also assumed that log |D| ≥ C_1 M for some positive absolute constant C_1, we have m ≥ C_1 C_3 M. Dividing the previous inequality through by C_2^{r'}, and using the fact that r' ≤ r - 1, we find that
it suffices to have
and there are only finitely many possibilities for r, namely, 2 - r - 9, this is an
absolute constant. The lemma is proved.
We now show how the theorem follows from the lemma. The point is that any relation among
the lifted points P i can be reduced modulo p to get a relation with the same coefficients among
the original r points P p;i that were constructed at random in Step 3. However, it is extremely
unlikely that r random points on E(F p ) will satisfy a linear relation with coefficients less than a
constant bound. In fact, using a pigeon-hole argument, one can show that the smallest value of
max_i |n_i| that is likely to occur for the coefficients in a relation is of order O(p^{1/r}). If the points
P p;i in Step 3 do not satisfy a relation with coefficients less than the bound in the lemma, then no
amount of work with Mestre conditions is going to enable one to lift them to dependent points.
To make the argument more precise, consider the map from r-tuples of integers less than C_2 in absolute value to E(F_p) given by (n_1, ..., n_r) ↦ n_1 P_{p,1} + ... + n_r P_{p,r}. The image is a set of roughly (2C_2)^r randomly distributed points. The probability that the image contains 0 is approximately (2C_2)^r / #E(F_p) ≈ (2C_2)^r / p. This proves the theorem with C_0 = (2C_2)^r.
Unfortunately, the certain failure of the algorithm for large primes p does not rule out its
practicality for p of an "intermediate" size, such as After examining about 10000 curves,
Silverman [27] was able to bound the constant C 3 in Lang's conjecture as follows: C 3
That circumstance alone contributes a factor of at least 2000 (r\Gamma1)=2 to the constant C 2 in the
lemma, and at least 2000 r(r\Gamma1)=2 to the constant C 0 in Theorem 4.1. In any case, it is now clear
that Silverman was correct to choose r > 2. If r were equal to 2 (as in the "simplified version" in §3.1), then C_2 could be chosen much smaller, and our theorem would apply to p of more
moderate size.
This situation is very unusual. We know, subject to various reasonable conjectures, that
for sufficiently large p the xedni algorithm must be repeated at least O(p) times (with different
choices of r points in Step 3) in order to find a discrete logarithm. In other words, asymptotically
it is far slower than square-root attacks. However, because of the constants involved, this result
does not necessarily imply that the algorithm is inefficient for p in the range that arises in
practical cryptography.
4.1 Estimate of the Constant in Theorem 4.1
In order to get a very rough estimate for the constant C 0 in Theorem 4.1, we shall make the
following assumptions:
ffl The constant C 3 in Lang's conjecture is no less than 1=10 of the upper bound in [27], i.e.,
ffl For one uses the Weierstrass form of the equation of the elliptic curve with
8 On the other hand, it is known (see [8, 27]) that in order to get a very small value of C 3 , it is necessary
that the discriminant D be divisible by many primes to fairly high powers. However, from the way they are
constructed, the xedni curves tend to have discriminants that are square-free or almost square-free.
7 variable coefficients. We suppose that the ratio of length of the coefficients to length of the
coordinates of the r points is given by a formula derived from Siegel's Lemma, as in Appendix
J of [33], namely, 1 r). We further suppose that the length of the discriminant is 12
times the length of the coefficients.
ffl For one uses the general equation of a cubic, which has 10 variable coefficients.
We suppose that the ratio of lengths of coefficients to coordinates is now 1
Appendix
J of [33]). In accordance with computations of Silverman (see Appendix C of [33]), we
also assume that the length of the discriminant is 110 times the length of the coefficients.
ffl The curves over Q have no nontrivial torsion points, as one expects to happen in the vast
majority of cases.
We now use the bound in the proof of Theorem 4.1:
are determined according to the four assumptions above, i.e.,
respectively. Here is the result:
r very rough value for C 0
We conclude that for Theorem 4.1 rules out the use of the algorithm with r - 5,
but not necessarily with 9. Nevertheless, in our experimental work, where the primes
were much smaller, we took in order to investigate the probability of dependence, the
effect of reverse-Mestre conditions, and other issues.
Note that when 50 we can expect to be working with elliptic curves over Q whose
discriminants have at least 10000 decimal digits when 9. This
obviously casts doubt on the feasibility of the computations in the algorithm. We shall explore
the practicality question in more detail in the next section.
Remark 5. Our estimate for C 1 might be too high, because sometimes one can obtain smaller
coefficients and discriminants using lattice-basis reduction and other methods. On the other
hand, the value we are using for C 3 is almost certainly too low; so it is reasonable to hope that
our value for the product C 1 C 3 is about right.
5 Empirical Analysis in the Practical Range
To get a practical estimate of the probability of success of the xedni algorithm, we did several
experiments, including an implementation of the algorithm itself. All experiments were carried
out using the computer algebra systems LiDIA [14] and SIMATH [38]. We began with a couple of
preliminary computations. The purpose of this was to obtain some insight into which parameters
have an impact on the probability of dependence. Our strategy and the size of parameters were
chosen with the aim of producing a significant number of dependencies. We tried to keep the size
of the curve coefficients, and hence the size of the discriminant, as small as possible. We worked
with points through which the curve was made to pass, and we did not impose
any reverse-Mestre conditions. The data obtained through these experiments already suggested
that most likely the xedni algorithm has a negligible probability of success. However, to be more
confident of this statement, we implemented the algorithm. It turned out that the probability
of success was small even for 8-bit primes.
5.1 A First Approach
5.1.1 The experiment
For each value curves were generated as follows. First, r affine points
randomly selected with integers jx i j and jy i j bounded by 40 when
4, such that the points had pairwise distinct x-coordinates and
none of them was the point at infinity. The points were discarded if any three of the r points
were collinear. Note that if three points P , Q and R are collinear and E is an elliptic
curve passing through these points, then P independently of E. Second,
the five coefficients a i of a curve in standard Weierstrass form (with a
were selected so that the curve passed through the r points and the coefficients were small. If
there was no solution with integer coefficients, the points were discarded. Third, the curve (and
points) were discarded if the curve had the same j-invariant as an earlier curve. Fourth, the
same was done if any of the r points were torsion points or if the curve had nontrivial 2-torsion.
Finally, in the cases the discriminant was greater than 2 80 , that case was also dis-
carded. The reason for this was that in preliminary experiments we were unable to find a single
case of dependency with discriminant greater than 2 72 , and we wanted to avoid a lot of fruitless
computation.
In all cases we computed the discriminant and the number of mod-l points for 7 - l - 97. A
2-descent (see [33], Appendix D) was used to check dependence. When the points were dependent,
the dependency relation with smallest coefficients was determined.
A (bits of |D|)  total # (B)  # dep. (C)  prob. w. dep. pts. (C/B)  R_3  R_4  R_5
50  3079  133  0.043  4493.75  250.22  13.93
Table 1 (r = 4): Probability of dependence
5.1.2 Results
Among the 200000 examples considered for each 4, we found 2895, 21165 and 10698
dependent cases, respectively. For each value of each bit-length of the discriminant
D, the proportion of dependent cases (i.e., the probability of dependence) was tabulated and
compared with various fractional powers of the discriminant. The data suggest that when
the probability of dependence is bounded, respectively, by 5jDj \Gamma1=4 , 66jDj \Gamma1=4 , 322jDj \Gamma1=4 .
Some explicit results for are given in Table 1. Here column A is the bit-length of the
discriminant; to keep the table small, we restrict ourselves to listing the data for discriminants
of bit-length 5k, k - 1, and for the largest discriminants. Column B is the number of example
curves having discriminant of bit-length A. Column C is the number of these curves for which the
four points are dependent. The fourth column is the proportion C=B of dependencies. The last
three columns show the values of R 5. Thus, R e is approximately
equal to the e-th root of the discriminant times the fraction of examples where the points were
The average value of the sum Σ_{7 ≤ l ≤ 97} (a_l log l)/l was, for r = 2, 3, 4 respectively, -4.401, -6.163, -8.108 for all curves and -2.227, -4.336, -6.597 for the dependent cases. In other words, very roughly it was equal on average to -2·(rank of curve).
We also looked at the reverse-Mestre conditions for 7 - l - 97. Of the 22 values of l, no
curve satisfied more than 3 reverse-Mestre conditions. The dependent cases had significantly
more likelihood than the independent cases of satisfying these conditions - but still not a large
probability. When 4, for example, 17 out of the 10698 dependent cases (about 0:16%) satisfied
2 or 3 reverse-Mestre conditions, whereas only 156 out of the 189302 independent cases (about
0:08%) did. In both cases, this proportion was far less than one expects for a random curve.
The reason is that, since the curves were constructed to pass through r points, they generally
had higher rank, and hence in most cases more mod-l points, than an average curve. We also
compiled statistics on the number of 'reverse-Mestre+1' and `reverse-Mestre+2' conditions (i.e.,
N l is l
l] +1 or l
the results were similar to what we found
for the pure reverse-Mestre conditions. For example, when 4, out of the 10698 dependent
cases there were 83 cases when 2 or 3 reverse-Mestre +1 conditions held (none with ? 3), and
there were 148 cases when 2 or 3 reverse-Mestre +2 conditions held (none with ? 3). Out of the
independent cases there were 703 cases with 2 or 3 reverse-Mestre +1 conditions (none
with ? 3) and 1555 with 2 or 3 reverse-Mestre +2 conditions (2 with ? 3).
Most remarkably, the coefficients in the dependency relations were very small. When
over 98% of the coefficients were 4 or less in absolute value, and no coefficient was greater than
8. When 99:75% of the coefficients were 3 or less in absolute value, and no coefficient
was greater than 13. When r = 4, over 99% of the coefficients were 2 or less in absolute value,
and no coefficient was greater than 8. This is much less than the theoretical bound C 2 derived
in the previous section.
5.2 A Second Approach
While doing experiments similar to those described above, we found an interesting effect when
we tried to mix our bounds on the coordinates. Namely, at one point (with we tried to
add to our sample 100000 examples for which the absolute values of the coordinates of the 3
points were between 31 and 50 (rather than between 0 and 30). The large proportion of cases
that led to large discriminants were discarded, leaving only the examples with smaller-than-
average discriminants. In that situation there was a significant increase in the probability of
dependence (roughly by a factor of 4) for fixed bit-length of the discriminant. This suggests
that the probability depends not only on the size of the discriminant, but also on how this size
relates to the logarithmic heights - h(P i ) of the lifted points. In particular, the probability of
dependence seems to be significantly greater for curves whose discriminants are much smaller
than the median.
In a second series of experiments we took advantage of this phenomenon. Here we also were
interested in the distribution of the discriminants of curves forced to go through r random points
whose coordinates were chosen to lie within certain ranges.
5.2.1 The Experiment
In this series of experiments we worked with whose coordinates x
so that
where
Initially, we planned to take but we ended up working with
For each such value of k,
100000 curves were generated in the way described above. Besides the modified bounds on
the coefficients, the only difference was that we used the homogeneous Weierstrass form with 7
coefficients, computed an LLL-reduced basis ~v of the kernel of the matrix B(P
and then chose a solution vector ~u from the set fe 1 such that the
discriminant of the corresponding curve is minimal. For each k, out of the 100000 curves only
the 1000 with smallest discriminant were examined for dependency. Thus, about 8 million curves
were generated, and 1% of them were examined for dependency. For each k, we also looked at
the distribution of the 100000 discriminants.
5.2.2 Results
The distribution of the bit-length of the discriminant was very similar for different ranges of k.
It was not exactly a normal distribution - in particular, the mode was a few bits larger than
the median, which was a few bits larger than the mean. The ratio of the standard deviation to
the mean was 0:22 for all k - 11 and between 0:25 and 0:23 for 1 - k - 10. As a function
of k, the median was very close to 23 log 30. The largest bit-length of discriminant for the
bottom 1% was consistently 48% or 49% of the median bit-length, i.e., about 11:5 log
For example, for the smallest 1% of the curves had discriminants of bit-length between
22 and 92, while for the range was 24 to 81 bits, and for 3000 the range was 63 to
bits.
In general, there was a much greater probability of dependence than in the previous exper-
iment. For example, for the probability of dependence was about 30% for
discriminants of - 40 bits, it was about 5% for discriminants in the 60-bit range, and it dropped
off gradually to about 1% for discriminants of ? 90 bits. For the larger values of k, where most
of the smallest 1% of discriminants had more than 100 bits, we also found many dependent
cases. For example, for there were 35 dependent cases among the 998 curves with
discriminants of ? 100 bits, the largest of which was for a 151-bit discriminant. This contrasts
dramatically with the earlier data, when the coordinates of the P i were much smaller and the
discriminants of ? bits came from the middle and high range of discriminants; in that case we
did not find a single dependency among the vast number of cases of discriminant ? 2 72 . More-
over, the probability of dependence was no longer bounded by const \Delta jDj \Gamma1=4 . Hence, having
1000 1000
3000 1000 36 1 143 -
Table 2: The coefficients of the dependency relations
smaller than expected discriminant helps force the points to be dependent.
However, when we examined the sizes of the coefficients in the dependency relations, we
realized that it was the very small size of these coefficients, rather than the small probability
of dependence for large jDj, that would be the most serious obstacle to the xedni calculus.
These coefficients tended to be as small or smaller than in the previous experiment. Moreover,
the chance of finding a dependency coefficient other than 1; \Gamma1; 0 drops significantly as the
discriminant grows. For example, for k ? 32 we encountered no coefficients of absolute value
greater than 3. In Table 2 we give the distribution of the dependency coefficients. The first
column is the range of k-values; the second column is the number of curves examined (i.e., 1000
times the number of k-values in the range); the third column is the number of dependent cases.
The i-th column after the double line is the number of dependency coefficients of absolute value
(thus, the sum of all of these columns is equal to 4 times the third column).
Out of the 27 dependent cases for relations were of the form P 1
For out of 41 relations were of this form, and for was the case for 31 out
of 36 relations. Note that the probability of getting this relation is simply the probability that,
when one passes a curve through the four points, it also passes through the point of intersection
of a line through two of the points with the line through the other two. Although there is a
significant chance of this happening even when k is large, this type of relation with coefficients
\Sigma1 is not useful for solving the ECDLP, where the coefficients will be large.
We also wanted to see if the data could have been affected by the particular way we generated
the points (especially, the narrow range of jx i j and the fact that jy i j was so close to jx i j 3=2 ). So
we returned to a range roughly similar to
In each case we generated
100000 examples and examined the bottom 1%. This time the discriminants were much larger
than before (up to 162 bits in the first case and up to 125 bits in the second case), presumably
because LLL had been able to find much smaller coefficients when jx very close to jy
Out of 1000 curves there were, respectively, 14 and 50 dependencies, of which ten and eight were
of the form Once again there were no coefficients other than 1; \Gamma1; 0.
5.3 Preliminary Conclusions
our experiments showed the following. First, the probability of dependence drops off
with increasing bit-length of the discriminant, but this drop-off depends on more than just the
bit-length. Another factor is the ratio of the actual size of the discriminant to the expected size.
Second, reverse-Mestre conditions are more likely to be satisfied in the dependent cases
than in the independent cases. What is the probability of dependence given that reverse-Mestre
conditions hold for a few small primes? Such data cannot be extracted from our experiments.
For example, in the first experiment (with 200000 curves) and in the second experiment (with
10000 curves of relatively small discriminant and we checked for reverse-Mestre
conditions and reverse-Mestre +1 conditions for l = 7, 11 and 13. We found that in none of the
cases, dependent or independent, were any two such conditions satisfied simultaneously.
Third, the small sizes of the dependency coefficients seemed to cast doubt on the practicality
of the xedni algorithm. At this point we did not yet have data reflecting the situation of
ECDLP, where we deal with points whose smallest relation is necessarily fairly large. What is
the probability that are dependent, given that we know a priori that any relation they
satisfy must have moderately large coefficients?
5.4 Experiments with the Xedni Algorithm
To answer the questions raised above, we implemented the xedni algorithm. The size of the
parameters was chosen so that we had a reasonable chance of finding some dependent cases.
For different experiments, which can be classified as
follows: (A) no reverse-Mestre conditions imposed; (B) reverse-Mestre conditions imposed for
two small primes whose product M is of approximately the same size as p; (C) instead of p work
with conditions imposed but with p 0 taken to be of the same
magnitude as the product Mp in (B).
In the context of an actual ECDLP, this means that both Experiments A and B would be used
to solve the same ECDLP but with different strategies. That is, the reverse-Mestre conditions
in Experiment B would presumably contribute to a greater likelihood of dependency, but at the
expense of much larger discriminants (which would work against dependency). Experiment C, on
the other hand, would be used to solve an unrelated instance of ECDLP, but the discriminants
in Experiment C are of similar size to those in Experiment B. Comparing Experiments A and C
with Experiment B, we should be able to judge whether the reverse-Mestre conditions are helpful
enough to compensate for the larger discriminants.
Let us describe Experiment B with in detail. We chose a 28. Then the
curve points. We chose as a generator for
E(F p ). Next, we chose and we chose PM;i , 4, to be the four points
on the mod-M curve y 8. Note that
the numbers of points mod 7 and 11 are, respectively, 5 and 6. In each case the B-matrix has
rank 4; and since the numbers of points for different l are relatively prime there is no worry
about incompatibility and forced independence. Using the Chinese Remainder Theorem, we
then compute a; b with \Gamma77p=2 ! a; b ! 77p=2 to be congruent to a and congruent to
77. Hence a = 1541 and the steps are repeated 100000
times.
1. For any vector n 2 FN 4 define knk 2 to be
, where the coordinates n i of n are taken in
the interval \GammaN=2
is chosen so that -
orthogonal to -. This means that we do not allow the P i to satisfy a relation with all
coefficients 0; 1; \Gamma1.
2. For each use the Chinese Remainder Theorem to choose to be congruent
to the coordinates of P p;i mod p and to those of PM;i mod 77. Now choose
in projective coordinates by finding a short vector in the lattice generated by the columns
of the matrix 0
subject to the condition that Z i is not divisible by 7, 11, or p.
3. Solve for small integers u i such that the curve E(Q) with equation
4, and has minimal discriminant. Here we use
the techniques described in Steps 7 and 9 and Appendix B of [33], including the Havas-
Majewski-Matthews Hermite normal form algorithm [7].
4. Finally, check whether the P i are dependent. In case of dependency, compute the dependency
relation with smallest coefficients.
Experiment A differs from Experiment B only in that 1. For the corresponding
Experiment C, we chose 946). The curve
dependent cases
bits 131 bits 73 bits 317 23 bits 91 bits 61 bits
bits 273 bits 182 bits 3 140 bits 151 bits 144 bits
bits 257 bits 148 bits 153 59 bits 170 bits 114 bits
Table
3: Experiments A, B, C (100000 examples of
For Experiments A and B with chose a 2. Then the
curve points. We chose We
worked with chose PM;1 and PM;2 to be the two points (5; \Sigma2) on the
mod-M curve y 14. The number of points is 3 both mod 3 and mod 5. The
fact that P guarantees that we do not force the lifted points to be
independent. Chinese Remaindering gives coefficients \Gamma119, 121 and 104. Hence, in Experiment
A we work with the curve y while in Experiment B we work with the
curve 255. For Experiment C we chose
20). The curve y points. Note
that since we work with only two points, the vectors n and - of Step 1 above are vectors in FN 2 .
The only conditions imposed on P p;i are that they are not the point at infinity and
5.4.1 Results
Among the 6 series of 100000 executions of Steps only in 3 series did we obtain any
dependencies. This was in Experiment A with and in Experiment B with
are shown in Table 3.
The data show that, given an instance of the ECDLP - i.e., a fixed value of p - we are
more likely to produce dependent cases if we do not impose reverse-Mestre conditions. When
deps.
Table 4: Experiments A: coefficients
we work with discriminants of approximately the same size - i.e., with variable p but fixed size
of Mp - the different outcomes of Experiment B when Experiment C when
might be interpreted as evidence that imposing reverse-Mestre conditions has a
significant impact. However, the three relations in Experiment B are all of the form P
Notice that once one of the two points mod p is chosen, there are N \Gamma 3 possibilities
for the other one, and the probability that the two points satisfy a dependency with coefficients
in Experiment C. (Note that in
Experiment B the coefficients n 1 , n 2 must satisfy the congruence
that is why the numerator above N \Gamma 3 is 2 rather than 4 in
Experiment B.) Our experience has been that it is much more likely that a relation of the form
can be lifted than that a relation with larger coefficients can be lifted. Thus,
the greater likelihood of dependency in Experiment B than in Experiment C might have little or
nothing to do with the reverse-Mestre conditions. 9
Looking at the relations in the Experiments A, we find that the great majority have coefficients
\Sigma2. The sizes of the coefficients are shown in Table 4. As in Table 2, the i-th
column after the double line shows the number of dependency coefficients of absolute value
We see that the coefficients are very small.
Furthermore, 301 out of the 317 relations for were of the form P
. Out of the remaining 16 relations, only 6 have both coefficients larger than 1. For
out of the 153 relations were of the form P were of the form P
and 9 were of the form P . Out of the remaining 14 relations, six have two
coefficients larger than one.
9 There is a reason unrelated to the heuristics of the Birch-Swinnerton-Dyer Conjecture why, among the
conditions that one might impose modulo l, ljM , the reverse-Mestre conditions are the ones that are most likely
to produce dependencies. Note that the mod-l conditions lead to congruences that the dependency coefficients
must satisfy. These congruences are likely to be more restrictive if N l = #E(F l ) is larger. For example, we saw
that the reverse-Mestre conditions in Experiment B led only to the constraint that
has a small nontrivial solution \Gamma2. Suppose that we had instead chosen our mod-l curves and points
so that N are "average" rather than reverse-Mestre values) and P
Then any dependency coefficients must satisfy 7). One can check that
the smallest (in the sense of knk) nonzero solution to these congruences is \Gamma4. It is far, far harder
to find dependencies with both than it is to find dependencies with
6 Conclusion
Xedni calculus is impractical for p in the range used in elliptic curve cryptography. In the first
place, the basic properties of the canonical logarithmic height, along with a pigeon-hole argument,
show that the coefficients in a dependency relation among the lifted points are bounded by an
absolute constant. This implies an asymptotic running time of at least O(p). In a sense, xedni
fails asymptotically for much the same reason that index calculus is infeasible (see [21, 34]). In
the second place, even if liftings exist with dependency among the points, the probability of
finding such a lifting decreases as the discriminant grows, and it becomes very low by the time p
reaches the practical range. In the third place, empirical data show that the theoretical bounds
on the size of the dependency coefficients are far too generous compared to what happens in
practice; and for high discriminants it is virtually impossible to find dependencies where the
coefficients cannot be taken to be of trivial size (usually ±1). Finally, although, in the absence
of other considerations, the reverse-Mestre conditions do increase the likelihood of dependency,
they also cause the discriminant to increase substantially, and so most likely the net effect is to
do more harm than good.
--R
Notes on elliptic curves I and II
Diophantine equations with special reference to elliptic curves
Analogue of the index calculus for elliptic discrete logarithm
On the conjecture of Birch and Swinnerton-Dyer
On the Birch and Swinnerton-Dyer conjecture
Extended GCD and Hermite normal form algorithms via lattice basis reduction
The canonical height and integral points on elliptic curves
Introduction to Elliptic Curves and Modular Forms
Algebraic Aspects of Cryptography
Diophantine Analysis
Fundamental of Diophantine Geometry
LiDIA Group
Specializations of finitely generated subgroups of abelian varieties
Modular curves and the Eisenstein ideal
Elliptic Curve Public Key Cryptosystems
Handbook of Applied Cryptography
Construction d'une courbe elliptique de rang - 12
Formules explicites et minoration de conducteurs de vari'et'es alg'ebriques
Use of elliptic curves in cryptography
Propri'et'es arithm'etiques et g'eom'etriques attach'es
On modular representations of Gal(Q
Nonsingular plane cubic curves
Lower bound for the canonical height on elliptic curves
Divisibility of the specialization map for families of elliptic curves
The Arithmetic of Elliptic Curves
Computing heights on elliptic curves
Advanced Topics in the Arithmetic of Elliptic Curves
Computing canonical heights with little (or no) factorization
The xedni calculus and the elliptic curve discrete logarithm problem
Elliptic curve discrete logarithms and the index calculus
Rational Points on Elliptic Curves
Annals of Math.
SIMATH Manual
--TR
Nonsingular plane cubic curves over finite fields
Computing canonical heights with little (or no) factorization
Algebraic aspects of cryptography
The Xedni Calculus and the Elliptic Curve Discrete Logarithm Problem
Handbook of Applied Cryptography
Use of Elliptic Curves in Cryptography
Elliptic Curve Discrete Logarithms and the Index Calculus
--CTR
Joseph H. Silverman, The Xedni Calculus and the Elliptic Curve Discrete LogarithmProblem, Designs, Codes and Cryptography, v.20 n.1, p.5-40, April 2000
Neal Koblitz , Alfred Menezes , Scott Vanstone, The State of Elliptic Curve Cryptography, Designs, Codes and Cryptography, v.19 n.2-3, p.173-193, March 2000
Andrew Odlyzko, Discrete Logarithms: The Past and the Future, Designs, Codes and Cryptography, v.19 n.2-3, p.129-145, March 2000 | xedni calculus;discrete logarithm;elliptic curve |
377993 | Weak alternating automata are not that weak. | Automata on infinite words are used for specification and verification of nonterminating programs. Different types of automata induce different levels of expressive power, of succinctness, and of complexity. Alternating automata have both existential and universal branching modes and are particularly suitable for specification of programs. In a weak alternating automaton the state space is partitioned into partially ordered sets, and the automaton can proceed from a certain set only to smaller sets. Reasoning about weak alternating automata is easier than reasoning about alternating automata with no restricted structure. Known translations of alternating automata to weak alternating automata involve determinization, and therefore involve a double-exponential blow-up. In this paper we describe a quadratic translation, which circumvents the need for determinization, of Büchi and co-Büchi alternating automata to weak alternating automata. Beyond the independent interest of such a translation, it gives rise to a simple complementation algorithm for nondeterministic Büchi automata. | INTRODUCTION
Finite automata on infinite objects were first introduced in the 60's. Motivated by decision problems in mathematics and logic, Büchi, McNaughton, and Rabin developed a framework for reasoning about infinite words and infinite trees [Büchi 1962; McNaughton 1966; Rabin 1969]. The framework has proved to be very powerful. Automata, and their tight relation to second-order monadic logics, were the key to the solution of several fundamental decision problems in mathematics and logic [Thomas 1990]. Today, automata on infinite objects are used for specification
and verification of nonterminating programs. The idea is simple: when a program is defined with respect to a finite set P of propositions, each of the program's states can be associated with a set of propositions that hold in this state. Then, each of the program's computations induces an infinite word over the alphabet 2^P, and the program itself induces a language of infinite words over this alphabet. This language can be defined by an automaton. Similarly, a specification for a program, which describes all the allowed computations, can be viewed as a language of infinite words over 2^P, and can therefore be defined by an automaton. In the automata-theoretic approach to verification, we reduce questions about programs and their specifications to questions about automata. More specifically, questions such as satisfiability of specifications and correctness of programs with respect to their specifications are reduced to questions such as nonemptiness and language containment [Vardi and Wolper 1986; Kurshan 1994; Vardi and Wolper 1994]. The automata-theoretic approach separates the logical and the combinatorial aspects of reasoning about programs. The translation of specifications to automata handles the logic and shifts all the combinatorial difficulties to automata-theoretic problems.

As automata on finite words, automata on infinite words either accept or reject an input word. Since a run on an infinite word does not have a final state, acceptance is determined with respect to the set of states visited infinitely often during the run. There are many ways to classify an automaton on infinite words. One is the type of its acceptance condition. For example, in Büchi automata, some of the states are designated as accepting states, and a run is accepting iff it visits states from the accepting set infinitely often [Büchi 1962]. Dually, in co-Büchi automata, a run is accepting iff it visits states from the accepting set only finitely often. More general are Muller automata. Here, the acceptance condition is a set α of sets of states, and a run is accepting iff the set of states visited infinitely often is a member of α [Muller 1963].

Another way to classify an automaton on infinite words is by the type of its branching mode. In a deterministic automaton, the transition function δ maps a pair of a state and a letter into a single state. The intuition is that when the automaton is in state q and it reads a letter σ, then the automaton moves to state δ(q, σ), from which it should accept the suffix of the word. When the branching mode is existential or universal, δ maps q and σ into a set of states. In the existential mode, the automaton should accept the suffix of the word from one of the states in the set, and in the universal mode, it should accept the suffix from all the states in the set. In an alternating automaton [Chandra et al. 1981], both existential and universal modes are allowed, and the transitions are given as Boolean formulas over the set of states. For example, a transition δ(q, σ) = q_1 ∨ (q_2 ∧ q_3) means that the automaton should accept the suffix of the word either from state q_1 or from both states q_2 and q_3.

It turns out that different types of automata have different expressive power. For example, unlike automata on finite words, where deterministic and nondeterministic (existential) automata have the same expressive power, deterministic Büchi automata are strictly less expressive than nondeterministic Büchi automata [Landweber 1969]. That is, there exists a language L over infinite words such that L can be recognized by a nondeterministic Büchi automaton but cannot be recognized by a deterministic Büchi automaton. It also turns out that some types of
automata may be more succinct than other types. For example, though alternating Büchi automata are as expressive as nondeterministic Büchi automata (both recognize exactly all ω-regular languages), alternation makes Büchi automata exponentially more succinct. That is, translating an alternating Büchi automaton to a nondeterministic one might involve an exponential blow-up (see [Drusinsky and Harel 1994]).

Since the combinatorial structure of alternating automata is rich, translating specifications to alternating automata is much simpler than translating them to nondeterministic automata. Alternating automata enable a complete partition between the logical and the combinatorial aspects of reasoning about programs, and they give rise to cleaner and simpler verification algorithms [Vardi 1996]. The ability of alternating automata to switch between existential and universal branching modes also makes their complementation very easy. For example, in order to complement an alternating Muller automaton on infinite words, one only has to dualize its transition function and acceptance condition [Miyano and Hayashi 1984; Lindsay 1988]. In contrast, complementation is a very challenging problem for nondeterministic automata on infinite words. In particular, complementing a nondeterministic Büchi automaton involves an exponential blow-up [Safra 1988; Michel 1988].

In [Muller et al. 1986], Muller et al. introduced weak alternating automata. In a weak alternating automaton, the automaton's set of states is partitioned into partially ordered sets. Each set is classified as accepting or rejecting. The transition function is restricted so that in each transition, the automaton either stays at the same set or moves to a set smaller in the partial order. Thus, each run of a weak alternating automaton eventually gets trapped in some set in the partition. Acceptance is then determined according to the classification of this set. The special structure of weak alternating automata is reflected in their attractive computational properties and makes them very appealing. For example, while the best known complexity for solving the membership problem for Büchi alternating automata is quadratic time, we know how to solve the membership problem for weak alternating automata in linear time [Kupferman et al. 2000].

Weak alternating automata are a special case of Büchi alternating automata. Indeed, the condition of getting trapped in an accepting set can be replaced by a condition of visiting states of accepting sets infinitely often. The other direction, as it is easy to see, is not true. In fact, it is proven in [Rabin 1970; Muller et al. 1986] that, when defined on trees, a language L can be recognized by a weak alternating automaton iff both L and its complement can be recognized by Büchi nondeterministic automata. Nevertheless, when defined on words, weak alternating automata are not less expressive than Büchi alternating automata, and they can recognize all the ω-regular languages. To prove this, [Muller et al. 1986; Lindsay 1988] suggest a linear translation of deterministic Muller automata to weak alternating automata. Using, however, the constructions in [Muller et al. 1986; Lindsay 1988] in order to translate a nondeterministic Büchi or co-Büchi automaton A into a weak alternating automaton, one has no choice but to first translate A into a deterministic Muller automaton. Such a determinization involves an exponential blow-up [Safra 1988]. Even worse, if A is an alternating automaton, then its determinization involves a doubly-exponential blow-up, and hence, so does the translation to weak alternating automata. Can these blow-ups be avoided?
In this paper we answer this question positively. We describe a simple quadratic translation of Büchi and co-Büchi alternating automata into weak alternating automata. Beyond the independent interest of such a translation, it gives rise to a simple complementation algorithm for nondeterministic Büchi automata. The closure of nondeterministic Büchi automata under complementation plays a crucial role in solving decision problems of second order logics. As a result, many efforts have been put in proving this closure and developing simple complementation algorithms. In [Büchi 1962], Büchi suggested a complementation construction, which indeed solved the problem, yet involved a complicated combinatorial argument and a doubly-exponential blow-up in the state space. Thus, complementing an automaton with n states resulted in an automaton with 2^{2^{O(n)}} states. In [Sistla et al. 1987], Sistla et al. suggested an improved construction, with only 2^{O(n^2)} states, which is still, however, not optimal. Only in [Safra 1988], Safra introduced an optimal determinization construction, which also enabled a 2^{O(n log n)} complementation construction, matching the known lower bound [Michel 1988]. Another 2^{O(n log n)} construction was suggested by Klarlund in [Klarlund 1991], which circumvented the need for determinization.

While being the heart of many complexity results in verification, the optimal constructions in [Safra 1988; Klarlund 1991] are complicated. In particular, the intricacy of the algorithms makes their implementation difficult. We know of no implementation of Klarlund's algorithm, and the implementation of Safra's algorithm [Tasiran et al. 1995] has to cope with the involved structure of the states in the complementary automaton. The lack of a simple implementation is not due to a lack of need. Recall that in the automata-theoretic approach to verification, we check correctness of a program with respect to a specification by checking containment of the language of the program in a language of an automaton that accepts exactly all computations that satisfy the specification. In order to check the latter, we check that the intersection of the program with an automaton that complements the specification automaton is empty. Due to the lack of a simple complementation construction, verification tools have to restrict the specification automaton or improvise other solutions. For example, in the verification tool COSPAN [Kurshan 1994], the specification automaton must be deterministic (it is easy to complement deterministic automata [Clarke et al. 1993]). In the verification tool SPIN [Holzmann 1991], the user has to complement the automaton by himself; thus, together with the program, SPIN gets as input a nondeterministic Büchi automaton, called the Never-Claim, which accepts exactly all computations that do not satisfy the specification.

The complementary automaton constructed in our procedure here is similar to the one constructed in [Klarlund 1991], but as our construction involves alternation, it is simpler and easily implementable. Consider a nondeterministic Büchi automaton B. We can easily complement B by regarding it as a universal co-Büchi automaton. Then, using our construction, we translate this complementary automaton to a weak alternating automaton W. By [Miyano and Hayashi 1984], weak alternating automata can be translated to nondeterministic Büchi automata. Applying their (exponential yet simple) translation to W, we end up with a nondeterministic Büchi automaton N that complements B. For B with n states, the size of N is 2^{O(n log n)},
meeting the known lower bound [Michel 1988] and the complicated constructions
suggested in [Safra 1988; Klarlund 1991].
2. ALTERNATING AUTOMATA

Given an alphabet Σ, an infinite word over Σ is an infinite sequence w = σ_0 · σ_1 · σ_2 ··· of letters in Σ. We denote by w^l the suffix σ_l · σ_{l+1} · σ_{l+2} ··· of w. An automaton on infinite words is A = ⟨Σ, Q, q_in, δ, α⟩, where Σ is the input alphabet, Q is a finite set of states, δ : Q × Σ → 2^Q is a transition function, q_in ∈ Q is an initial state, and α is an acceptance condition (a condition that defines a subset of Q^ω). Intuitively, δ(q, σ) is the set of states that A can move into when it is in state q and it reads the letter σ. Since the transition function of A may specify many possible transitions for each state and letter, A is not deterministic. If δ is such that for every q ∈ Q and σ ∈ Σ, we have that |δ(q, σ)| = 1, then A is a deterministic automaton.

A run of A on w is a function r : IN → Q where r(0) = q_in (i.e., the run starts in the initial state) and for every l ≥ 0, we have r(l + 1) ∈ δ(r(l), σ_l) (i.e., the run obeys the transition function). In automata over finite words, acceptance is defined according to the last state visited by the run. When the words are infinite, there is no such thing as a "last state", and acceptance is defined according to the set Inf(r) of states that r visits infinitely often, i.e., Inf(r) = {q ∈ Q : for infinitely many l ∈ IN, we have r(l) = q}. As Q is finite, it is guaranteed that Inf(r) is not empty. The way we refer to Inf(r) depends on the acceptance condition of A. Several acceptance conditions are studied in the literature. We consider here two:

- Büchi automata, where α ⊆ Q, and r accepts w iff Inf(r) ∩ α ≠ ∅.
- co-Büchi automata, where α ⊆ Q, and r accepts w iff Inf(r) ∩ α = ∅.

Since A is not deterministic, it may have many runs on w. In contrast, a deterministic automaton has a single run on w. There are two dual ways in which we can refer to the many runs. When A is an existential automaton (or simply a nondeterministic automaton, as we shall call it in the sequel), it accepts an input word w iff there exists an accepting run of A on w. When A is a universal automaton, it accepts an input word w iff all the runs of A on w are accepting.

Alternation was studied in [Chandra et al. 1981] in the context of Turing machines and in [Brzozowski and Leiss 1980; Chandra et al. 1981; Miyano and Hayashi 1984] for finite automata. In particular, [Miyano and Hayashi 1984] studied alternating automata on infinite words. Alternation enables us to have both existential and universal branching choices.

For a given set X, let B+(X) be the set of positive Boolean formulas over X (i.e., Boolean formulas built from elements in X using ∧ and ∨), where we also allow the formulas true and false. For Y ⊆ X, we say that Y satisfies a formula θ ∈ B+(X) iff the truth assignment that assigns true to the members of Y and assigns false to the members of X \ Y satisfies θ. For example, the sets {q_1, q_3} and {q_2, q_3} both satisfy the formula (q_1 ∨ q_2) ∧ q_3, while the set {q_1, q_2} does not satisfy this formula.
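For readers who prefer to see the definitions executed, the following small Python sketch (ours, not part of the paper) encodes positive Boolean formulas as nested tuples and checks whether a given set of atoms satisfies a formula. The encoding and the function name are illustrative assumptions only.

    # Positive Boolean formulas over a set X: atoms are strings, operator nodes are
    # tuples whose first element is 'and' or 'or'; the constants True and False are allowed.
    def satisfies(Y, theta):
        # Does the assignment that sets exactly the members of Y to true satisfy theta?
        if theta is True or theta is False:
            return theta
        if isinstance(theta, str):        # an atom q in X
            return theta in Y
        op, *args = theta
        if op == 'and':
            return all(satisfies(Y, t) for t in args)
        if op == 'or':
            return any(satisfies(Y, t) for t in args)
        raise ValueError('not a positive Boolean formula')

    theta = ('and', ('or', 'q1', 'q2'), 'q3')     # (q1 or q2) and q3
    assert satisfies({'q1', 'q3'}, theta) and satisfies({'q2', 'q3'}, theta)
    assert not satisfies({'q1', 'q2'}, theta)

Note that monotonicity is built in: adding atoms to Y can never falsify a positive formula.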
Consider an automaton A as above. We can represent δ using B+(Q). For example, a transition δ(q, σ) = {q_1, q_2, q_3} of a nondeterministic automaton A can be written as δ(q, σ) = q_1 ∨ q_2 ∨ q_3. If A is universal, the transition can be written as δ(q, σ) = q_1 ∧ q_2 ∧ q_3. While transitions of nondeterministic and universal automata correspond to disjunctions and conjunctions, respectively, transitions of alternating automata can be arbitrary formulas in B+(Q). We can have, for instance, a transition δ(q, σ) = (q_1 ∧ q_2) ∨ (q_3 ∧ q_4), meaning that the automaton accepts a suffix w^i of w from state q, if it accepts w^{i+1} from both q_1 and q_2 or from both q_3 and q_4. Such a transition combines existential and universal choices.

Formally, an alternating automaton on infinite words is a tuple A = ⟨Σ, Q, q_in, δ, α⟩, where Σ, Q, q_in, and α are as in automata, and δ : Q × Σ → B+(Q) is a transition function. While a run of a nondeterministic automaton is a function r : IN → Q, a run of an alternating automaton is a tree r : T_r → Q for some T_r ⊆ IN*. Formally, a tree is a (finite or infinite) nonempty prefix-closed set T ⊆ IN*. The elements of T are called nodes, and the empty word ε is the root of T. For every x ∈ T, the nodes x·c ∈ T where c ∈ IN are the children of x. A node with no children is a leaf. We sometimes refer to the length |x| of x as its level in the tree. A path π of a tree T is a set π ⊆ T such that ε ∈ π and for every x ∈ π, either x is a leaf, or there exists a unique c ∈ IN such that x·c ∈ π. Given a finite set Σ, a Σ-labeled tree is a pair ⟨T, V⟩ where T is a tree and V : T → Σ maps each node of T to a letter in Σ.

A run of A on an infinite word w = σ_0 · σ_1 ··· is a Q-labeled tree ⟨T_r, r⟩ such that the following hold:

- r(ε) = q_in.
- Let x ∈ T_r with r(x) = q and δ(q, σ_{|x|}) = θ. There is a (possibly empty) set S = {q_1, ..., q_k} such that S satisfies θ and for all 1 ≤ c ≤ k, we have x·c ∈ T_r and r(x·c) = q_c.

For example, if δ(q_in, σ_0) = (q_1 ∨ q_2) ∧ (q_3 ∨ q_4), possible runs of A on w have a root labeled q_in, have one node in level 1 labeled q_1 or q_2, and have another node in level 1 labeled q_3 or q_4. Note that if θ = true, then x need not have children. This is the reason why T_r may have leaves. Also, since there exists no set S as required for θ = false, we cannot have a run that takes a transition with θ = false.

A run ⟨T_r, r⟩ is accepting iff all its infinite paths, which are labeled by words in Q^ω, satisfy the acceptance condition. A word w is accepted iff there exists an accepting run on it. Note that while conjunctions in the transition function of A are reflected in branches of ⟨T_r, r⟩, disjunctions are reflected in the fact we can have many runs on the same word. The language of A, denoted L(A), is the set of infinite words that A accepts. Thus, each word automaton defines a subset of Σ^ω. We denote by L̄(A) the complement language of A, that is the set of all words in Σ^ω \ L(A).

In [Muller et al. 1986], Muller et al. introduce weak alternating automata (WAAs). In a WAA, the acceptance condition is α ⊆ Q, and there exists a partition of Q into disjoint sets, Q_i, such that for each set Q_i, either Q_i ⊆ α, in which case Q_i is an accepting set, or Q_i ∩ α = ∅, in which case Q_i is a rejecting set. In addition, there exists a partial order ≤ on the collection of the Q_i's such that for every q ∈ Q_i and q' ∈ Q_j for which q' occurs in δ(q, σ), for some σ ∈ Σ, we have Q_j ≤ Q_i. Thus, transitions from a state in Q_i lead to states in either the same Q_i or a lower one. It follows that every infinite path of a run of a WAA ultimately gets "trapped" within some Q_i. The path then satisfies the acceptance condition if and only if Q_i is an accepting set. Thus, we can view a WAA with an acceptance condition α as both a Büchi automaton with an acceptance condition α, and a co-Büchi automaton with an acceptance condition Q \ α. Indeed, a run gets trapped in an accepting set iff it visits infinitely many states in α, which is true iff it visits only finitely many states in Q \ α.
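The partition and the partial order in this definition can be checked mechanically. The sketch below (ours, and only one possible way to do it) tests the weakness condition by verifying that every set of mutually reachable states in the graph of the transition function agrees on membership in α; the strongly connected components then serve as the sets Q_i, ordered by reachability. The formula encoding follows the earlier sketch, and all names are illustrative.

    # delta maps (state, letter) pairs to positive Boolean formulas (nested tuples,
    # atoms are state names); alpha is the set of accepting states.
    def atoms(theta):
        if theta is True or theta is False:
            return set()
        if isinstance(theta, str):
            return {theta}
        return set().union(*(atoms(t) for t in theta[1:]))

    def is_weak(states, alphabet, delta, alpha):
        # edge q -> p whenever p occurs in delta(q, sigma) for some letter sigma
        succ = {q: set() for q in states}
        for q in states:
            for a in alphabet:
                succ[q] |= atoms(delta[(q, a)])
        # reflexive-transitive closure of the edge relation
        reach = {q: succ[q] | {q} for q in states}
        changed = True
        while changed:
            changed = False
            for q in states:
                new = set(reach[q])
                for p in list(reach[q]):
                    new |= reach[p]
                if new != reach[q]:
                    reach[q], changed = new, True
        # mutually reachable states must agree on membership in alpha
        for q in states:
            for p in reach[q]:
                if q in reach[p] and ((q in alpha) != (p in alpha)):
                    return False
        return True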
3. USEFUL OBSERVATIONS ON RUNS OF ALTERNATING CO-BÜCHI AUTOMATA

Consider a co-Büchi alternating automaton A = ⟨Σ, Q, q_in, δ, α⟩ with |Q| = n. Let ⟨T_r, r⟩ be an accepting run of A on a word w. For two nodes x_1 and x_2 in T_r, we say that x_1 and x_2 are similar iff |x_1| = |x_2| and r(x_1) = r(x_2). We say that the run ⟨T_r, r⟩ is memoryless iff for all similar nodes x_1 and x_2, and for all y ∈ IN*, we have that x_1·y ∈ T_r iff x_2·y ∈ T_r, and r(x_1·y) = r(x_2·y). Intuitively, similar nodes correspond to two copies of A that have the same "mission": they should both accept the suffix w^{|x_1|} from the state r(x_1). In a memoryless run, subtrees of ⟨T_r, r⟩ with similar roots coincide. Thus, same missions are fulfilled in the same way. It turns out that when we consider runs of co-Büchi automata, we can restrict ourselves to memoryless runs. Formally, we have the following theorem.

Theorem 3.1. [Emerson and Jutla 1991] If a co-Büchi automaton A accepts a word w, then there exists a memoryless accepting run of A on w.

We note that [Emerson and Jutla 1991] proves a stronger result, namely the existence of memoryless accepting runs for parity alternating automata. Since the co-Büchi acceptance condition is a special case of the parity acceptance condition, the result cited above follows.

It is easy to see that for every run ⟨T_r, r⟩, every set of more than n nodes of the same level contains at least two similar nodes. Therefore, in a memoryless run of A, every level contains at most n nodes that are roots of different subtrees. Accordingly, we represent a memoryless run ⟨T_r, r⟩ by an infinite dag (directed acyclic graph) G_r = ⟨V, E⟩, where

- V ⊆ Q × IN is such that ⟨q, l⟩ ∈ V iff there exists x ∈ T_r with |x| = l and r(x) = q. For example, ⟨q_in, 0⟩ is the only vertex of G_r in Q × {0}.
- E ⊆ ⋃_{l≥0} (Q × {l}) × (Q × {l+1}) is such that E(⟨q, l⟩, ⟨q', l+1⟩) iff there exists x ∈ T_r with |x| = l, r(x) = q, and r(x·c) = q' for some c ∈ IN.

Thus, G_r is obtained from ⟨T_r, r⟩ by merging similar nodes into a single vertex. We say that a vertex ⟨q', l'⟩ is a successor of a vertex ⟨q, l⟩ iff E(⟨q, l⟩, ⟨q', l'⟩). We say that ⟨q', l'⟩ is reachable from ⟨q, l⟩ iff there exists a sequence ⟨q_0, l_0⟩, ⟨q_1, l_1⟩, ⟨q_2, l_2⟩, ... of successive vertices such that ⟨q, l⟩ = ⟨q_0, l_0⟩ and there exists i ≥ 0 such that ⟨q', l'⟩ = ⟨q_i, l_i⟩. Finally, we say that a vertex ⟨q, l⟩ is an α-vertex iff q ∈ α. It is easy to see that ⟨T_r, r⟩ is accepting iff all paths in G_r have only finitely many α-vertices.

Consider a (possibly finite) dag G ⊆ G_r. We say that a vertex ⟨q, l⟩ is endangered in G iff only finitely many vertices in G are reachable from ⟨q, l⟩. We say that a vertex ⟨q, l⟩ is safe in G iff all the vertices in G that are reachable from ⟨q, l⟩ are not α-vertices. Note that, in particular, a safe vertex is not an α-vertex.

Given a memoryless accepting run ⟨T_r, r⟩, we define an infinite sequence G_0 ⊇ G_1 ⊇ G_2 ⊇ ... of dags inductively as follows.

- G_0 = G_r.
- G_{2i+1} = G_{2i} \ {⟨q, l⟩ : ⟨q, l⟩ is endangered in G_{2i}}.
- G_{2i+2} = G_{2i+1} \ {⟨q, l⟩ : ⟨q, l⟩ is safe in G_{2i+1}}.
Lemma 3.2. For every i ≥ 0, there exists l_i such that for all l ≥ l_i, there are at most n - i vertices of the form ⟨q, l⟩ in G_{2i}.

Proof. We prove the lemma by an induction on i. The case where i = 0 follows from the definition of G_0. Indeed, in G_r all levels l ≥ 0 have at most n vertices of the form ⟨q, l⟩. Assume that the lemma's requirement holds for i; we prove it for i + 1. Consider the dag G_{2i}. We distinguish between two cases. First, if G_{2i} is finite, then G_{2i+1} is empty, G_{2i+2} is empty as well, and we are done. Otherwise, we claim that there must be some safe vertex in G_{2i+1}. To see this, assume, by way of contradiction, that G_{2i} is infinite and no vertex in G_{2i+1} is safe. Since G_{2i} is infinite, G_{2i+1} is also infinite. Also, each vertex in G_{2i+1} has at least one successor. Consider some vertex ⟨q_0, l_0⟩ in G_{2i+1}. Since, by the assumption, it is not safe, there exists an α-vertex ⟨q'_0, l'_0⟩ reachable from ⟨q_0, l_0⟩. Let ⟨q_1, l_1⟩ be a successor of ⟨q'_0, l'_0⟩. By the assumption, ⟨q_1, l_1⟩ is also not safe. Hence, there exists an α-vertex ⟨q'_1, l'_1⟩ reachable from ⟨q_1, l_1⟩. Let ⟨q_2, l_2⟩ be a successor of ⟨q'_1, l'_1⟩. By the assumption, ⟨q_2, l_2⟩ is also not safe. Thus, we can continue similarly and construct an infinite sequence of vertices ⟨q_j, l_j⟩, ⟨q'_j, l'_j⟩ such that for all j, the vertex ⟨q'_j, l'_j⟩ is an α-vertex reachable from ⟨q_j, l_j⟩, and ⟨q_{j+1}, l_{j+1}⟩ is a successor of ⟨q'_j, l'_j⟩. Such a sequence, however, corresponds to a path in ⟨T_r, r⟩ that visits α infinitely often, contradicting the assumption that ⟨T_r, r⟩ is an accepting run.

So, let ⟨q, l⟩ be a safe vertex in G_{2i+1}. We claim that taking l_{i+1} = max{l_i, l} satisfies the lemma's requirement. That is, we claim that for all j ≥ l_{i+1}, there are at most n - (i + 1) vertices of the form ⟨q, j⟩ in G_{2i+2}. Since ⟨q, l⟩ is in G_{2i+1}, it is not endangered in G_{2i}. Thus, there are infinitely many vertices in G_{2i} that are reachable from ⟨q, l⟩. Hence, by König's Lemma, G_{2i} contains an infinite path ⟨q, l⟩, ⟨q_1, l+1⟩, ⟨q_2, l+2⟩, .... For all k ≥ 1, the vertex ⟨q_k, l+k⟩ has infinitely many vertices reachable from it in G_{2i} and thus, it is not endangered in G_{2i}. Therefore, the path exists also in G_{2i+1}. Recall that ⟨q, l⟩ is safe. Hence, being reachable from ⟨q, l⟩, all the vertices ⟨q_k, l+k⟩ in the path are safe as well. Therefore, they are not in G_{2i+2}. It follows that for all j ≥ l, the number of vertices of the form ⟨q, j⟩ in G_{2i+2} is strictly smaller than their number in G_{2i}. Hence, by the induction hypothesis, we are done.

By Lemma 3.2, G_{2n} is finite. Hence the following corollary.

Corollary 3.3. G_{2n+1} is empty.

Each vertex ⟨q, l⟩ in G_r has a unique index i such that ⟨q, l⟩ is either endangered in G_{2i} or safe in G_{2i+1}. Given a vertex ⟨q, l⟩, we define the rank of ⟨q, l⟩, denoted rank(q, l), as follows.

  rank(q, l) = 2i, if ⟨q, l⟩ is endangered in G_{2i};
  rank(q, l) = 2i + 1, if ⟨q, l⟩ is safe in G_{2i+1}.

For k ∈ IN, let [k] denote the set {0, 1, ..., k}, and let [k]^odd denote the set of odd members of [k]. By Corollary 3.3, the rank of every vertex in G_r is in [2n].
Recall that when ⟨T_r, r⟩ is accepting, all the paths in G_r visit only finitely many α-vertices. Intuitively, rank(q, l) hints how difficult it is to get convinced that all the paths of G_r that visit the vertex ⟨q, l⟩ visit only finitely many α-vertices. Easiest to get convinced about are vertices that are endangered in G_0. Accordingly, they get the minimal rank 0. Then come vertices that are safe in the graph G_1, which is obtained from G_0 by throwing away vertices with rank 0. These vertices get the rank 1. The process repeats with respect to the graph G_2, which is obtained from G_1 by throwing away vertices with rank 1. As before, we start with the endangered vertices in G_2, which get the rank 2. We continue with the safe vertices in G_3, which get the rank 3. The process repeats until all vertices get some rank. Note that no α-vertex gets an odd rank.
In the lemmas below we make this intuition formal.

Lemma 3.4. For every vertex ⟨q, l⟩ in G_r and rank i ∈ [2n], if ⟨q, l⟩ is not in G_i, then rank(q, l) < i.

Proof. We prove the lemma by an induction on i. Since G_0 = G_r, the case where i = 0 is immediate. For the induction step, we distinguish between two cases. For the case i + 1 is even, consider a vertex ⟨q, l⟩ not in G_{i+1}. If ⟨q, l⟩ is not in G_i, the lemma's requirement follows from the induction hypothesis. If ⟨q, l⟩ is in G_i, then ⟨q, l⟩ is safe in G_i. Accordingly, rank(q, l) = i, meeting the lemma's requirement. For the case i + 1 is odd, consider a vertex ⟨q, l⟩ not in G_{i+1}. If ⟨q, l⟩ is not in G_i, the lemma's requirement follows from the induction hypothesis. If ⟨q, l⟩ is in G_i, then ⟨q, l⟩ is endangered in G_i. Accordingly, rank(q, l) = i, meeting the lemma's requirement.

Lemma 3.5. For every two vertices ⟨q, l⟩ and ⟨q', l'⟩ in G_r, if ⟨q', l'⟩ is reachable from ⟨q, l⟩, then rank(q', l') ≤ rank(q, l).

Proof. Assume that rank(q, l) = i. We distinguish between two cases. If i is even, in which case ⟨q, l⟩ is endangered in G_i, then either ⟨q', l'⟩ is not in G_i, in which case, by Lemma 3.4, its rank is at most i - 1, or ⟨q', l'⟩ is in G_i, in which case, being reachable from ⟨q, l⟩, it must be endangered in G_i and have rank i. If i is odd, in which case ⟨q, l⟩ is safe in G_i, then either ⟨q', l'⟩ is not in G_i, in which case, by Lemma 3.4, its rank is at most i - 1, or ⟨q', l'⟩ is in G_i, in which case, being reachable from ⟨q, l⟩, it must be safe in G_i and have rank i.

Lemma 3.6. In every infinite path in G_r, there exists a vertex ⟨q, l⟩ with an odd rank such that all the vertices ⟨q', l'⟩ in the path that are reachable from ⟨q, l⟩ have rank(q', l') = rank(q, l).

Proof. By Lemma 3.5, in every infinite path in G_r, there exists a vertex ⟨q, l⟩ such that all the vertices ⟨q', l'⟩ in the path that are reachable from ⟨q, l⟩ have rank(q', l') = rank(q, l). We need to prove that the rank of ⟨q, l⟩ is odd. Assume, by way of contradiction, that the rank of ⟨q, l⟩ is some even i. Thus, ⟨q, l⟩ is endangered in G_i. Then, the rank of all the vertices in the path that are reachable from ⟨q, l⟩ is also i. By Lemma 3.4, they all belong to G_i. Since the path is infinite, there are infinitely many such vertices, contradicting the fact that ⟨q, l⟩ is endangered in G_i.

We have seen that if a co-Büchi alternating automaton has an accepting run on w, then it also has a very structured accepting run on w. In the next section
we employ this structured run in order to translate Büchi and co-Büchi alternating automata to weak alternating automata. In [Löding and Thomas 2000], Löding and Thomas use the structured runs in order to a priori define runs of weak alternating automata as dags of bounded width. This enables them to prove the appropriate determinacy result directly. In [Piterman 2000], Piterman uses the structured runs in order to extend linear temporal logic with alternating word automata.

The ranks defined in this section are closely related to the progress measures introduced in [Klarlund 1990] and to their properties studied in Section 3 there. Progress measures are a generic concept for quantifying how each step of a program contributes to bringing a computation closer to its specification. Progress measures are used in [Klarlund 1991] for reasoning about automata on infinite words. The ranks defined above also measure progress: they indicate how far the automaton is from satisfying its co-Büchi acceptance condition. When we use these ranks, we consider, unlike [Klarlund 1991], alternating automata. Consequently, we do not need to follow a subset construction and to consider several ranks simultaneously. Thus, much of the complication in [Klarlund 1991] is handled by the rich structure of the automata. In Section 5 we will get back to this point and see that once alternation is removed, the two approaches essentially coincide.
4. FROM BÜCHI AND CO-BÜCHI TO WEAK ALTERNATING AUTOMATA

In this section we present a translation of Büchi and co-Büchi alternating automata to weak alternating automata. We first describe a quadratic construction and then suggest a pre-processing that reduces the blow-up in the average case.

4.1 The construction
Theorem 4.1. Let A be an alternating co-Büchi automaton. There is a weak alternating automaton A' such that L(A') = L(A) and the number of states in A' is quadratic in that of A.

Proof. Let A = ⟨Σ, Q, q_in, δ, α⟩, where |Q| = n. We define A' = ⟨Σ, Q', q'_in, δ', α'⟩, where

- Q' = Q × [2n]. Intuitively, when the automaton is in state ⟨q, i⟩ as it reads the letter σ_l (the l'th letter in the input), then it guesses that in a memoryless accepting run of A on w, the rank of ⟨q, l⟩ is i. An exception is the initial state q'_in, explained below.
- q'_in = ⟨q_in, 2n⟩. That is, q_in is paired with 2n, which is an upper bound on the rank of ⟨q_in, 0⟩.
- We define δ' by means of a function release : B+(Q) × [2n] → B+(Q'). Given a formula θ ∈ B+(Q) and a rank i ∈ [2n], the formula release(θ, i) is obtained from θ by replacing an atom q by the disjunction ⋁_{i'≤i} ⟨q, i'⟩. For example, release(q_1 ∧ q_2, 2) = (⟨q_1, 2⟩ ∨ ⟨q_1, 1⟩ ∨ ⟨q_1, 0⟩) ∧ (⟨q_2, 2⟩ ∨ ⟨q_2, 1⟩ ∨ ⟨q_2, 0⟩). Now, δ' is defined, for a state ⟨q, i⟩ ∈ Q' and σ ∈ Σ, as follows.

    δ'(⟨q, i⟩, σ) = release(δ(q, σ), i), if q is not in α or i is even;
    δ'(⟨q, i⟩, σ) = false, if q ∈ α and i is odd.

  That is, if the current guessed rank is i then, by employing release, the run can move in its successors to any rank that is at most i. If, however, q ∈ α and the current guessed rank is odd, then, by the definition of ranks, the current guessed rank is wrong, and the run is rejecting.
- α' = Q × [2n]^odd. That is, infinitely many guessed ranks along each path should be odd.
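A direct implementation of release and of δ' is short; the sketch below (ours) builds the full transition table of A', again using the nested-tuple encoding of positive Boolean formulas, with atoms of the new automaton represented as (state, rank) pairs. All identifiers are illustrative assumptions, not names from the paper, and it is assumed that no state is literally named 'and' or 'or'.

    # delta maps (state, letter) to a positive Boolean formula over the states.
    def release(theta, i):
        if theta is True or theta is False:
            return theta
        if isinstance(theta, str):                 # atom q becomes the OR of (q, i') for i' <= i
            return ('or',) + tuple((theta, j) for j in range(i + 1))
        return (theta[0],) + tuple(release(t, i) for t in theta[1:])

    def to_weak(states, alphabet, delta, q_in, alpha):
        n = len(states)
        new_states = [(q, i) for q in states for i in range(2 * n + 1)]
        new_delta = {}
        for (q, i) in new_states:
            for a in alphabet:
                if q in alpha and i % 2 == 1:      # an alpha-state may not carry an odd rank
                    new_delta[((q, i), a)] = False
                else:
                    new_delta[((q, i), a)] = release(delta[(q, a)], i)
        new_alpha = {s for s in new_states if s[1] % 2 == 1}
        return new_states, new_delta, (q_in, 2 * n), new_alpha

The sets Q × {i} used in the proof below are recovered by grouping the new states by their second component.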
We first show that A' is weak. For that, we define a partition of the states of A' and an order on this partition so that the weakness conditions hold. Each rank i ∈ [2n] induces the set Q_i = Q × {i} in the partition. Thus, two states ⟨q, i⟩ and ⟨q', i'⟩ are in the same set iff i = i'. We define the order by Q_{i'} ≤ Q_i iff i' ≤ i. It is easy to see that the weakness conditions hold: for every state ⟨q, i⟩ ∈ Q' and σ ∈ Σ, the states appearing in δ'(⟨q, i⟩, σ) belong to sets Q_{i'} with i' ≤ i, and every set Q_i is either contained in α' or disjoint from α'. By the definition of α', it follows that the copies of A' are allowed to get trapped in sets with odd ranks and are not allowed to get trapped in sets with even ranks.

We now prove the correctness of the construction. We first prove that L(A') ⊆ L(A). Consider a word w accepted by A'. Let ⟨T_r, r'⟩ be the accepting run of A' on w. Consider the Q-labeled tree ⟨T_r, r⟩ where for all x ∈ T_r with r'(x) = ⟨q, i⟩, we have r(x) = q. Thus, ⟨T_r, r⟩ projects the labels of ⟨T_r, r'⟩ on their Q element. It is easy to see that ⟨T_r, r⟩ is a run of A on w. Indeed, the transitions of A' only annotate transitions of A by ranks. We show that ⟨T_r, r⟩ is an accepting run. Since ⟨T_r, r'⟩ is accepting, then, by the definition of α', each infinite path of ⟨T_r, r'⟩ gets trapped in a set Q × {i} for some odd i. By the definition of δ', no accepting run can visit a state ⟨q, i⟩ with an odd i and q ∈ α. Hence, the infinite path actually gets trapped in the subset (Q \ α) × {i} of Q × {i}. Consequently, in ⟨T_r, r⟩, all the paths visit states in α only finitely often, and we are done.

It is left to prove that L(A) ⊆ L(A'). Consider a word w accepted by A. Let ⟨T_r, r⟩ be a memoryless accepting run of A on w. Consider the Q'-labeled tree ⟨T_r, r'⟩ where r'(ε) = ⟨q_in, 2n⟩ and, for all x ∈ T_r with x ≠ ε, we have r'(x) = ⟨r(x), rank(r(x), |x|)⟩; that is, r'(x) pairs r(x) with the rank of ⟨r(x), |x|⟩ in G_r. We claim that ⟨T_r, r'⟩ is an accepting run of A'. We first prove that it is a run. Since r(ε) = q_in and q'_in = ⟨q_in, 2n⟩, the root of the tree ⟨T_r, r'⟩ is labeled legally. We now consider the other nodes of T_r. Let S = {q_1, ..., q_k} be the set of labels of ε's successors in ⟨T_r, r⟩. As 2n is the maximal rank that a vertex can get, each successor c of ε in T_r has rank(r(c), 1) ≤ 2n. Therefore, the set of labels of ε's successors in ⟨T_r, r'⟩ satisfies release(δ(q_in, σ_0), 2n). Hence, the first level of ⟨T_r, r'⟩ is also labeled legally. For the other levels, consider a node x ∈ T_r such that x ≠ ε and rank(r(x), |x|) = i. Let S = {q_1, ..., q_k} be the set of labels of x's successors in ⟨T_r, r⟩. By Lemma 3.5, each successor x·c of x in T_r has rank(r(x·c), |x·c|) ≤ i. Also, by the definition of ranks, it cannot be that r(x) ∈ α and i is odd. Therefore, the set of labels of x's successors in ⟨T_r, r'⟩ satisfies δ'(⟨r(x), i⟩, σ_{|x|}). Hence, the tree ⟨T_r, r'⟩ is a run of A' on w. Finally, by Lemma 3.6, each infinite path of ⟨T_r, r'⟩ gets trapped in a set with an odd index, thus ⟨T_r, r'⟩ is accepting.
Remark 4.2. As explained above, the automaton A' being at state ⟨q, i⟩ as it reads the l'th letter in the input corresponds to a guess that in a memoryless accepting run of A on w, the rank of ⟨q, l⟩ is i. Accordingly, the function release (and the transition function δ' that is based on it) enables the transition from a guessed rank i to any rank that is at most i. As a result, while the number of states in A' is O(n^2), a transition δ'(⟨q, i⟩, σ) may be n times longer than the transition δ(q, σ), leading to δ' that is O(n^2) times larger than δ. Nevertheless, since for all i' ≤ i the formula release(θ, i') is a subformula of the formula release(θ, i), the blow-up described above is not present if we maintain δ' as a dag, so that subformulas that are shared by several transitions are not duplicated. Another way to keep δ' only O(n) times larger than δ is to redefine release(θ, i) to replace an atom q by the disjunction ⟨q, i⟩ ∨ ⟨q, i-1⟩ ∨ ⟨q, i-2⟩. Thus, instead of a transition to any rank smaller than or equal to i, a transition is enabled only to ranks i, i-1, and i-2. Then, the automaton A' being at state ⟨q, i⟩ as it reads the l'th letter in the input corresponds to a guess that in a memoryless accepting run of A on w, the rank of ⟨q, l⟩ is at most i. Since we can simulate one big decrease in the guessed rank by several small decreases (in particular, having i-2 in the transition enables us to "jump over" odd ranks), the correctness proof given above can easily be adjusted to the new definition of release.
As discussed in [Muller and Schupp 1987], one can complement an alternating automaton by dualizing its transition function and acceptance condition. Formally, given a transition function δ, let δ̃ denote the dual function of δ. That is, for every q and σ with δ(q, σ) = θ, we have δ̃(q, σ) = θ̃, where θ̃ is obtained from θ by switching ∨ and ∧ and by switching true and false. If, for example, θ = q_1 ∨ (q_2 ∧ q_3), then θ̃ = q_1 ∧ (q_2 ∨ q_3). The dual of an acceptance condition α is a condition that accepts exactly all the words in Q^ω that are not accepted by α. In particular, we have the following.

Theorem 4.3. [Muller and Schupp 1987] For an alternating Büchi automaton A = ⟨Σ, Q, q_in, δ, α⟩, the alternating co-Büchi automaton Ã = ⟨Σ, Q, q_in, δ̃, α⟩ satisfies L(Ã) = Σ^ω \ L(A).

The complementation construction in Theorem 4.3 is not only conceptually simple, but it also involves no blow-up. In addition, complementing a WAA does not sacrifice its weakness. Hence, Theorems 4.1 and 4.3 imply the following theorem.

Theorem 4.4. Let A be an alternating Büchi automaton. There is a weak alternating automaton A' such that L(A') = L(A) and the number of states in A' is quadratic in that of A.

In Section 5, we use the translation described in Theorem 4.1 in order to obtain a simple complementation construction for nondeterministic Büchi automata. As we shall note there, the known lower bound on the complexity of the latter then implies that the quadratic blow-up involved in moving from co-Büchi alternating automata to WAA cannot be reduced to a linear one.
4.2 Improving the construction

A drawback of our construction is that it never performs better than its worst-case complexity. Indeed, the quadratic blow-up is introduced in the translation of A to A' regardless of the structure of A and would occur even if, say, A is a weak automaton. In order to circumvent such an unnecessary blow-up, we suggest to first calculate the minimal rank required for A (formally defined below), and then to construct A' with respect to this rank. The discussion below assumes that A is a co-Büchi automaton, yet applies also for the dual case, where A is a Büchi automaton.

Consider the sequence of dags G_0 ⊇ G_1 ⊇ G_2 ⊇ .... With every G_i, we can associate a maximal width, namely the maximal number of vertices of the form ⟨q, l⟩, for some fixed l, in G_i. Following Lemma 3.2, the maximal width of G_{2i} is n - i. In practice, the transition from G_{2i} to G_{2i+2} often reduces the width by more than one vertex. We say that rank j is required for A iff there exists a word w ∈ L(A) such that for every memoryless run ⟨T_r, r⟩ of A on w, the sequence G_0, G_1, G_2, ... of dags with G_0 = G_r is such that the width of G_{2j} is bigger than 0. Note that this implies that G_{2j+1} is not empty.

Let A = ⟨Σ, Q, q_in, δ, α⟩ with |Q| = n, and let A' be as constructed in the proof of Theorem 4.1. For every j ∈ [n], we define the weak alternating automaton A'_j as follows. Intuitively, A'_j restricts the runs of A' to guess only ranks of at most 2j. Formally, the state space of A'_j is Q × [2j], its initial state is ⟨q_in, 2j⟩, and its transition function and acceptance condition are the restrictions of δ' and α' to the states in Q × [2j]. It is easy to see that for every j, the language of A'_j is contained in the language of A'. On the other hand, the language of A'_j need not contain all of L(A): it contains only those words in L(A) that can be accepted with guessed ranks of at most 2j. It follows that the minimal rank required for A is the minimal j ∈ [n] for which L(A) ⊆ L(A'_j).
Theorem 4.5. Let A be an alternating co-Büchi automaton. The problem of finding the minimal rank required for A is PSPACE-complete.

Proof. Recall that the minimal rank required for A is the minimal j ∈ [n] for which L(A) ⊆ L(A'_j). Since the language-containment problem for alternating co-Büchi automata is in PSPACE, we can find the minimal rank in polynomial space by successive language-containment checks. For the lower bound, we do a reduction from the emptiness problem for alternating co-Büchi automata, whose PSPACE-hardness follows from the results in [Chandra et al. 1981]. Given an alternating co-Büchi automaton A = ⟨Σ, Q, q_in, δ, α⟩, we prove that A is empty iff the minimal rank required for A is 0. For technical convenience, we assume that no formula in the range of δ is a tautology (since we can replace a transition to a θ that is a tautology by a transition to an accepting sink, the emptiness problem is clearly PSPACE-hard already for automata satisfying this assumption). Assume first that A is empty. Then, L(A) ⊆ L(A'_j) for all j ∈ [n], and in particular for j = 0. For the other direction, note that the set of states in A'_0 is Q × {0}, and its transitions coincide with these of A. Also, since 0 is even, the accepting set of A'_0 is empty. Hence, as no formula in δ' is a tautology, A'_0 accepts no word. Accordingly, A is empty.

Since for every j ∈ [n] we have L(A'_j) ⊆ L(A'), the automaton A'_j, for j being the minimal rank required for A, is equivalent to A. Hence the following theorem.

Theorem 4.6. Let A be an alternating co-Büchi automaton with n states and let j be the minimal rank required for A. There is a weak alternating automaton A' such that L(A') = L(A) and the number of states in A' is 2nj.

We note that while the problem of finding the minimal rank required for A requires space that is polynomial in A, the automaton A is typically small, and the bottleneck of the computation is usually the application of A' (e.g., taking its product with a system with a large state space). Thus, finding the minimal rank j required for A and using A'_j instead of A' may be of great practical importance.
5. COMPLEMENTING NONDETERMINISTIC BÜCHI AUTOMATA

In this section we apply our results in order to complement nondeterministic Büchi automata. We first describe, in Section 5.1, a construction that uses alternating automata. We then describe, in Section 5.2, a construction that uses the analysis in Section 3 without explicitly using alternating automata.
5.1 Complementation via alternating automata

Unlike the case with alternating automata, complementation of nondeterministic automata is a complicated problem. Following Theorem 4.3, all one needs in order to complement a nondeterministic Büchi automaton is some translation of universal co-Büchi automata to nondeterministic Büchi automata. In [Miyano and Hayashi 1984], Miyano and Hayashi suggest a translation of alternating Büchi automata to nondeterministic Büchi automata. We present (a simplified version of) their translation in Theorem 5.1 below.
Theorem 5.1. [Miyano and Hayashi 1984] Let A be an alternating Büchi automaton. There is a nondeterministic Büchi automaton A', with exponentially many states, such that L(A') = L(A).

Proof. The automaton A' guesses a run of A. At a given point of a run of A', it keeps in its memory a whole level of the run tree of A. As it reads the next input letter, it guesses the next level of the run tree of A. In order to make sure that every infinite path visits states in α infinitely often, A' keeps track of states that "owe" a visit to α. Let A = ⟨Σ, Q, q_in, δ, α⟩. Then A' = ⟨Σ, 2^Q × 2^Q, ⟨{q_in}, ∅⟩, δ', 2^Q × {∅}⟩, where δ' is defined as follows.

- If O ≠ ∅, then δ'(⟨S, O⟩, σ) = { ⟨S', O' \ α⟩ : S' satisfies ⋀_{q∈S} δ(q, σ), O' ⊆ S', and O' satisfies ⋀_{q∈O} δ(q, σ) }.
- If O = ∅, then δ'(⟨S, O⟩, σ) = { ⟨S', S' \ α⟩ : S' satisfies ⋀_{q∈S} δ(q, σ) }.
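To make the subset construction concrete, the following rough sketch (ours, not from the paper) computes the successors of a state ⟨S, O⟩, under the simplifying assumption that every transition formula is supplied in disjunctive normal form: dnf[(q, a)] is a list of sets of states, each set being one way to satisfy δ(q, a). Only the "union of the chosen disjuncts" successors are generated, a common implementation shortcut.

    from itertools import product

    def mh_successors(S, O, a, dnf, alpha):
        # S, O: current sets of states, with O a subset of S; alpha: the accepting states.
        S, O = frozenset(S), frozenset(O)
        succs = set()
        choices = [dnf[(q, a)] for q in S]
        if any(len(c) == 0 for c in choices):      # some copy has no way to proceed
            return succs
        for pick in product(*choices):             # one disjunct per state in S
            chosen = dict(zip(S, pick))
            S2 = frozenset(set().union(*pick))
            if O:
                O2 = frozenset(set().union(*(chosen[q] for q in O))) - alpha
            else:
                O2 = S2 - alpha
            succs.add((S2, O2))
        return succs

The initial state is ({q_in}, ∅), and a state is accepting exactly when its O component is empty.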
The translation in Theorem 5.1, however, does not handle alternating (and in particular universal) co-Büchi automata, which is what one gets by dualizing a nondeterministic Büchi automaton. Here is where our construction in Theorem 4.1 becomes essential. Thus, given a nondeterministic Büchi automaton B, we suggest the following complementation construction for B.

(1) Following Theorem 4.3, construct from B its dual co-Büchi universal automaton B̃. The automaton B̃ satisfies L(B̃) = Σ^ω \ L(B).
(2) Following Theorem 4.1, construct from B̃ its equivalent weak alternating automaton W. The automaton W satisfies L(W) = L(B̃).
(3) Following Theorem 5.1, construct from W its equivalent nondeterministic Büchi automaton N. The automaton N satisfies L(N) = L(W) = Σ^ω \ L(B).

If B has n states, then B̃ has n states as well, W has O(n^2) states, and N has 2^{O(n^2)} states. By [Michel 1988; Safra 1988], however, an optimal complementation construction for nondeterministic Büchi automata results in an automaton N with 2^{O(n log n)} states. Before we describe how we do get, using Theorem 4.1, such an optimal automaton N, let us note that the above scheme implies that the translation described in Theorem 4.1 cannot be improved to a linear translation. Indeed, being able to construct from B̃ an equivalent WAA W with only O(n) states, we would also be able to construct N with 2^{O(n)} states, contradicting the 2^{O(n log n)} lower bound.

In order to get N with 2^{O(n log n)} states, we exploit the special structure of W as follows. Consider a state ⟨S, O⟩ of N. Each of the sets S and O is a subset of Q × [2n]. We say that P ⊆ Q × [2n] is consistent iff for every two states ⟨q, i⟩ and ⟨q', i'⟩ in P, if q = q' then i = i'. We claim the following.

Claim 1. Restricting the states in N to pairs ⟨S, O⟩ for which S is a consistent subset of Q × [2n] is allowable; that is, the resulting N still complements B.

Claim 2. There are 2^{O(n log n)} consistent subsets of Q × [2n].

By the two claims, as O is always a subset of S, it is easy to restrict the state space of N to 2^{O(n log n)} states. In order to prove Claim 1, recall that the automaton W visiting a state ⟨q, i⟩ after reading l letters of an input word w corresponds to a guess that the rank of ⟨q, l⟩ in an accepting and memoryless run of B̃ on w is i. We have seen that if there is an accepting and memoryless run ⟨T_r, r⟩ of B̃ on w, then a run of W that follows the ranks in G_r is accepting. Since every vertex in G_r has a unique rank, the copies of W that are created in each level l in this accepting run are consistent, in the sense that the set of states visited by copies of W in level l in the run is consistent. In N, all the states in S correspond to copies of W that read the same prefix of w. Hence, a state ⟨S, O⟩ for which S is inconsistent corresponds to a level l in a run of W whose copies are inconsistent. Hence, the automaton N can ignore states ⟨S, O⟩ with inconsistent S.

In order to prove Claim 2, observe that we can characterize a consistent set by the projection of its pairs on Q, augmented by an assignment f : Q → [2n]. Since there are 2^n such projections and 2^{O(n log n)} such assignments, we are done.
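The count in Claim 2 can be made exact: each state of Q either does not appear in a consistent set or appears with exactly one of the 2n + 1 ranks, so there are (2n + 2)^n consistent subsets, which is 2^{O(n log n)}. The brute-force check below (ours, for tiny n only) confirms the formula.

    from itertools import chain, combinations

    def count_consistent(n):
        pairs = [(q, i) for q in range(n) for i in range(2 * n + 1)]
        subsets = chain.from_iterable(combinations(pairs, k) for k in range(len(pairs) + 1))
        def consistent(P):
            qs = [q for (q, _) in P]
            return len(qs) == len(set(qs))
        return sum(1 for P in subsets if consistent(P))

    assert count_consistent(1) == 4        # (2*1 + 2)**1
    assert count_consistent(2) == 36       # (2*2 + 2)**2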
Composing the three constructions is straightforward. Below we define the automaton N directly, by means of B's components. Let B = ⟨Σ, Q, q_in, δ, α⟩ be a nondeterministic Büchi automaton with |Q| = n. For a set P ∈ 2^{Q×[2n]}, we say that P is possible iff there exists no pair ⟨q, i⟩ in P such that i is odd and q ∈ α. For two sets P and P' in 2^{Q×[2n]} and a letter σ ∈ Σ, we say that P' covers ⟨P, σ⟩ iff for every pair ⟨q, i⟩ in P and every q' ∈ δ(q, σ), there is i' ≤ i such that the pair ⟨q', i'⟩ is in P'.

The automaton N = ⟨Σ, Q', q'_in, δ', α'⟩ is defined as follows.

- Q' = { ⟨S, O⟩ : O ⊆ S, and S is possible and consistent }.
- q'_in = ⟨{⟨q_in, 2n⟩}, ∅⟩.
- For a state ⟨S, O⟩ ∈ Q' and a letter σ ∈ Σ, we define δ'(⟨S, O⟩, σ) as follows.
  - If O ≠ ∅, then δ'(⟨S, O⟩, σ) = { ⟨S', O' \ (Q × [2n]^odd)⟩ : S' covers ⟨S, σ⟩, O' ⊆ S', O' covers ⟨O, σ⟩, and S' is possible and consistent }.
  - If O = ∅, then δ'(⟨S, O⟩, σ) = { ⟨S', S' \ (Q × [2n]^odd)⟩ : S' covers ⟨S, σ⟩, and S' is possible and consistent }.
- α' = { ⟨S, ∅⟩ : S is possible and consistent }.

As discussed in Section 4.2, we advise to construct the automaton W according to the minimal rank j required for B̃. Then, each state of N corresponds to a consistent set augmented by an assignment f as above, with the ranks now bounded by 2j. Accordingly, the automaton N has only 2^{O(n+j log n)} states.
5.2 Complementation without alternating automata

In this section we give an alternative description of our complementation construction, which is independent of alternating automata. The ideas behind the construction are these used in Section 4 for the transformation of alternating co-Büchi automata to weak alternating automata. We repeat these ideas here for the benefit of readers who'd like to see a complementation construction that does not go through alternating automata.² The construction that follows essentially coincides with the one described in [Klarlund 1991].

Let A = ⟨Σ, Q, q_in, δ, α⟩ be a nondeterministic Büchi automaton with |Q| = n, and let w = σ_0 · σ_1 ··· be a word in Σ^ω. We define an infinite dag G that embodies all the possible runs of A on w. Formally, G = ⟨V, E⟩, where

- V ⊆ Q × IN is the union ⋃_{l≥0} (Q_l × {l}), where Q_0 = {q_in} and Q_{l+1} = ⋃_{q∈Q_l} δ(q, σ_l).
- E ⊆ ⋃_{l≥0} (Q_l × {l}) × (Q_{l+1} × {l+1}) is such that E(⟨q, l⟩, ⟨q', l+1⟩) iff q' ∈ δ(q, σ_l).

We refer to G as the run dag of A on w. We say that a vertex ⟨q', l'⟩ is a successor of a vertex ⟨q, l⟩ iff E(⟨q, l⟩, ⟨q', l'⟩). We say that ⟨q', l'⟩ is reachable from ⟨q, l⟩ iff there exists a sequence ⟨q_0, l_0⟩, ⟨q_1, l_1⟩, ⟨q_2, l_2⟩, ... of successive vertices such that ⟨q, l⟩ = ⟨q_0, l_0⟩ and there exists i ≥ 0 such that ⟨q', l'⟩ = ⟨q_i, l_i⟩. Finally, we say that a vertex ⟨q, l⟩ is an α-vertex iff q ∈ α. It is easy to see that A accepts w iff G has a path with infinitely many α-vertices. Indeed, such a path corresponds to an accepting run of A on w.

² We have found it easier to teach the direct construction. (See http://www.cs.rice.edu/vardi/av.html.)
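Since the run dag is defined level by level, its finite prefixes are easy to compute. The sketch below (ours) builds the first levels of G for a given finite prefix of w; delta here maps a (state, letter) pair to the set of successor states, and all names are illustrative.

    def run_dag_prefix(q_in, delta, prefix):
        levels = [{q_in}]                      # level l holds Q_l
        edges = []
        for l, a in enumerate(prefix):
            nxt = set()
            for q in levels[l]:
                for p in delta.get((q, a), set()):
                    nxt.add(p)
                    edges.append(((q, l), (p, l + 1)))
            levels.append(nxt)
        return levels, edges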
A ranking for G is a function f : V → [2n] that satisfies the following two conditions:

(1) For all vertices ⟨q, l⟩ ∈ V, if f(⟨q, l⟩) is odd, then q is not in α.
(2) For all edges ⟨⟨q, l⟩, ⟨q', l+1⟩⟩ ∈ E, we have f(⟨q', l+1⟩) ≤ f(⟨q, l⟩).

Thus, a ranking associates with each vertex in G a rank in [2n] so that the ranks along paths decrease monotonically, and α-vertices get only even ranks. Note that each path in G eventually gets trapped in some rank. We say that the ranking f is an odd ranking if all the paths of G eventually get trapped in an odd rank. Formally, f is odd iff for all paths ⟨q_0, 0⟩, ⟨q_1, 1⟩, ⟨q_2, 2⟩, ... in G, there is j ≥ 0 such that f(⟨q_j, j⟩) is odd, and for all i ≥ 1, we have f(⟨q_{j+i}, j+i⟩) = f(⟨q_j, j⟩).

Lemma 5.2. A rejects w iff there is an odd ranking for G.

Proof. We first claim that if there is an odd ranking for G, then A rejects w. To see this, recall that in an odd ranking, every path in G eventually gets trapped in an odd rank. Hence, as α-vertices get only even ranks, it follows that all the paths of G, and thus all the possible runs of A on w, visit α only finitely often.

Assume now that A rejects w. We describe an odd ranking for G. As in Section 3, we say that a vertex ⟨q, l⟩ is endangered in a (possibly finite) dag G' ⊆ G iff only finitely many vertices in G' are reachable from ⟨q, l⟩. The vertex ⟨q, l⟩ is safe in G' iff all the vertices in G' that are reachable from ⟨q, l⟩ are not α-vertices. Note that, in particular, a safe vertex is not an α-vertex. We define an infinite sequence G_0 ⊇ G_1 ⊇ G_2 ⊇ ... of dags inductively as follows.

- G_0 = G.
- G_{2i+1} = G_{2i} \ {⟨q, l⟩ : ⟨q, l⟩ is endangered in G_{2i}}.
- G_{2i+2} = G_{2i+1} \ {⟨q, l⟩ : ⟨q, l⟩ is safe in G_{2i+1}}.

Consider the function f where

  f(⟨q, l⟩) = 2i, if ⟨q, l⟩ is endangered in G_{2i};
  f(⟨q, l⟩) = 2i + 1, if ⟨q, l⟩ is safe in G_{2i+1}.

Recall that A rejects w. Thus, each path in G has only finitely many α-vertices. Therefore, the same arguments used in the proof of Lemma 3.2 can be used here in order to show that G_{2n} is finite and G_{2n+1} is empty, implying that f above maps the vertices in V to [2n]. We claim further that f is an odd ranking. First, since a safe vertex cannot be an α-vertex and f(⟨q, l⟩) is odd only for safe ⟨q, l⟩, the first condition for f being a ranking holds. Second, as in Lemma 3.5, for every two vertices ⟨q, l⟩ and ⟨q', l'⟩, if ⟨q', l'⟩ is reachable from ⟨q, l⟩, then f(⟨q', l'⟩) ≤ f(⟨q, l⟩). In particular, this holds for ⟨q', l'⟩ that is a successor of ⟨q, l⟩. Hence, the second condition for ranking holds too. Finally, as in Lemma 3.6, for every infinite path in G, there exists a vertex ⟨q, l⟩ with an odd rank such that all the vertices ⟨q', l'⟩ in the path that are reachable from ⟨q, l⟩ have f(⟨q', l'⟩) = f(⟨q, l⟩). Hence, f is an odd ranking.
By Lemma 5.2, an automaton A' that complements A can proceed on an input word w by guessing an odd ranking for the run dag of A on w. We now define such an automaton A' formally. We first need some definitions and notations.

A level ranking for A and w is a function g : Q → [2n] ∪ {⊥}, such that if g(q) is odd, then q is not in α (we regard ⊥, which stands for "unranked", as smaller than 0). Let R be the set of all level rankings. For two level rankings g and g' and a letter σ, we say that g' covers ⟨g, σ⟩ if for all q and q' in Q, if g(q) ≥ 0 and q' ∈ δ(q, σ), then 0 ≤ g'(q') ≤ g(q).

We define A' = ⟨Σ, R × 2^Q, ⟨g_in, ∅⟩, δ', R × {∅}⟩, where

- g_in(q_in) = 2n and g_in(q) = ⊥ for all q ≠ q_in. Thus, the odd ranking that A' guesses maps the root ⟨q_in, 0⟩ of the run dag to 2n.
- For a state ⟨g, P⟩ ∈ R × 2^Q and a letter σ ∈ Σ, we define δ'(⟨g, P⟩, σ) as follows.
  - If P ≠ ∅, then δ'(⟨g, P⟩, σ) = { ⟨g', P'⟩ : g' covers ⟨g, σ⟩, and P' = { q' : there is q ∈ P with q' ∈ δ(q, σ) and g'(q') is even } }.
  - If P = ∅, then δ'(⟨g, P⟩, σ) = { ⟨g', P'⟩ : g' covers ⟨g, σ⟩, and P' = { q' : g'(q') is even } }.

Thus, when A' reads the l'th letter in the input, for l ≥ 1, it guesses the level ranking for level l in the run dag. This level ranking should cover the level ranking of level l - 1. In addition, in the P component, A' keeps track of states whose corresponding vertices in the dag have even ranks. Paths that traverse such vertices should eventually reach a vertex with an odd rank. When all the paths of the dag have visited a vertex with an odd rank, the set P becomes empty, and is initiated by new obligations for visits in odd ranks according to the current level ranking. The acceptance condition R × {∅} then checks that there are infinitely many levels in which all the obligations have been fulfilled.
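One way to turn the above definition into code is sketched below (ours, and only an illustration of the definitions as reconstructed here); ⊥ is represented by None, level rankings are enumerated by brute force, and only the states reachable from the initial state are generated. The construction is exponential, so the sketch is meant for toy automata only.

    from itertools import product

    BOT = None          # the "unranked" value

    def complement_buchi(states, alphabet, delta, q_in, alpha):
        # delta maps (state, letter) to a set of states; states must be sortable.
        states = sorted(states)
        n = len(states)
        ranks = list(range(2 * n + 1))

        def level_rankings():
            options = [[BOT] + ([r for r in ranks if r % 2 == 0] if q in alpha else ranks)
                       for q in states]
            for vals in product(*options):
                yield dict(zip(states, vals))

        def covers(g, g2, a):
            for q in states:
                if g[q] is BOT:
                    continue
                for p in delta.get((q, a), set()):
                    if g2[p] is BOT or g2[p] > g[q]:
                        return False
            return True

        g_in = {q: (2 * n if q == q_in else BOT) for q in states}
        init = (tuple(sorted(g_in.items())), frozenset())
        seen, todo, trans = {init}, [init], {}
        while todo:
            (g_items, P) = todo.pop()
            g = dict(g_items)
            for a in alphabet:
                succs = []
                for g2 in level_rankings():
                    if not covers(g, g2, a):
                        continue
                    if P:
                        P2 = frozenset(p for q in P for p in delta.get((q, a), set())
                                       if g2[p] is not BOT and g2[p] % 2 == 0)
                    else:
                        P2 = frozenset(q for q in states
                                       if g2[q] is not BOT and g2[q] % 2 == 0)
                    s2 = (tuple(sorted(g2.items())), P2)
                    succs.append(s2)
                    if s2 not in seen:
                        seen.add(s2)
                        todo.append(s2)
                trans[((g_items, P), a)] = succs
        accepting = {s for s in seen if not s[1]}
        return seen, trans, init, accepting

The result is a nondeterministic Büchi automaton over the same alphabet whose accepting states are those with an empty obligation set.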
Note that the automaton A' here is equivalent to the one described in Section 5.1. Indeed, each state ⟨g, P⟩ ∈ R × 2^Q in A' above corresponds to the state ⟨S, O⟩ with S = {⟨q, g(q)⟩ : g(q) ≠ ⊥} and O = {⟨q, g(q)⟩ : q ∈ P}. Clearly, S and O are possible and consistent, and O ⊆ S. Similarly, since the sets S and O in the state space of the automaton of Section 5.1 are possible and consistent, each state ⟨S, O⟩ there induces a level ranking and thus corresponds to a state here.
6. DISCUSSION
We described a quadratic translation of Büchi and co-Büchi alternating automata to weak alternating automata and showed how our translation yields a simple complementation algorithm for nondeterministic Büchi automata. Another application of our translation is the solution of the nonemptiness problem. It is shown in [Kupferman et al. 2000] that the nonemptiness problem for nondeterministic tree automata and the nonemptiness problem for alternating word automata over a singleton alphabet are equivalent and that their complexities coincide. We refer to both problems as the nonemptiness problem. Recall that the nonemptiness problem for weak automata can be solved in linear time [Kupferman et al. 2000]. On the other hand, the best known upper bound for the nonemptiness problem for Büchi and co-Büchi automata is quadratic time. Using our translation, one can solve the nonemptiness problem for a Büchi or a co-Büchi automaton A by first translating it to a weak automaton A'. The size of A' is O(nj), where j is the minimal rank required for A, yielding a nonemptiness algorithm of the same complexity.

In [Kupferman and Vardi 1998b], we extend the ideas of this paper and describe an efficient translation of stronger types of alternating automata to weak alternating automata. This enables us to improve known upper bounds for the nonemptiness problem. Given an alternating parity automaton [Mostowski 1984; Emerson and Jutla 1991] with n states and k sets, we construct an equivalent weak alternating automaton with O(n^k) states. Given an alternating Rabin automaton [Rabin 1969] with n states and k pairs, we construct an equivalent weak alternating automaton with O(n^{2k+1} k!) states. Our constructions yield O(n^k) and O(n^{2k+1} k!) upper bounds for the nonemptiness problem for parity and Rabin automata, respectively, matching the known bound for parity automata [Emerson et al. 1993] and improving the known O(nk)^{3k} bound for Rabin automata [Emerson and Jutla 1988; Pnueli and Rosner 1989].

Recall that while weak alternating word automata are not less expressive than Büchi alternating word automata, weak alternating tree automata are strictly less expressive than Büchi alternating tree automata. Precisely, when defined on trees, a language L can be recognized by a weak alternating automaton iff both L and its complement can be recognized by Büchi nondeterministic automata. This result follows from expressiveness results in second order logic [Rabin 1970], and the equivalence of weak alternating tree automata and weak second-order logic [Rabin 1970]. In [Kupferman and Vardi 1999], we extend the ideas in this paper to handle tree automata. Given two nondeterministic Büchi tree automata U and U' that recognize a language and its complement, we construct a weak alternating tree automaton A equivalent to U. The number of states in A is quadratic in the number of states of U and U'. Precisely, if U and U' have n and m states, respectively, the automaton A has (nm)^2 states. The known linear translation of weak alternating tree automata to formulas in the alternation-free fragment of µ-calculus [Kupferman and Vardi 1998a] then implies a quadratic translation of Büchi automata as above to alternation-free µ-calculus, extending the scope of efficient symbolic model checking to highly expressive specification formalisms.

Acknowledgment
We thank Nils Klarlund for clarifying the relation between [Klarlund 1991] and this work, and Wolfgang Thomas for helpful discussions.
--R
Finite automata and sequential networks.
On the power of bounded concurrency I: Finite automata.
Journal of the ACM
The complexity of tree automata and logics of programs.
On model-checking for fragments of µ-calculus
Design and Validation of Computer Protocols.
Progress measures and
Progress measures for complementation of
Weak alternating automata and tree automata emptiness.
The weakness of self-complementation
An automata-theoretic approach to branching-time model checking
Computer Aided Verification
Decision problems for
On alternating
Testing and generating in
Complementation is more difficult with automata on infinite words
Regular expressions for in
Alternating automata
Alternating automata on in
Extending temporal logic with
On the synthesis of a reactive module.
Decidability of second order theories and automata on in
Weakly de
On the complexity of
Language containment using non-deterministic omega-automata
Automata on in
An automata-theoretic approach to linear temporal logic
An automata-theoretic approach to automatic program veri- cation
Reasoning about in
Received: June
--TR
Alternation and ω-type Turing acceptors
Alternating automata, the weak monadic theory of the tree, and its complexity
The complementation problem for Büchi automata with applications to temporal logic
Alternating automata on infinite trees
On the synthesis of a reactive module
Design and validation of computer protocols
Automata on infinite objects
Progress measures for complementation of ω-automata with applications to temporal logic
Tree automata, Mu-Calculus and determinacy
A unified approach for showing language inclusion and equivalence between various types of ω-automata
On the power of bounded concurrency I
Reasoning about infinite computations
Computer-aided verification of coordinating processes
An automata-theoretic approach to linear temporal logic
Weak alternating automata and tree automata emptiness
Alternation
An automata-theoretic approach to branching-time model checking
Language containment of non-deterministic omega-automata
Alternating Automata and Logics over Infinite Words
On Model-Checking for Fragments of µ-Calculus
Freedom, Weakness, and Determinism
Progress measures and finite arguments for infinite computations
--CTR
A. Fellah , S. Noureddine, Some succinctness properties of -DTAFA, Proceedings of the 5th WSEAS International Conference on Software Engineering, Parallel and Distributed Systems, p.97-103, February 15-17, 2006, Madrid, Spain
Orna Kupferman , Moshe Y. Vardi, From complementation to certification, Theoretical Computer Science, v.345 n.1, p.83-100, 21 November 2005
Orna Kupferman , Moshe Y. Vardi, From linear time to branching time, ACM Transactions on Computational Logic (TOCL), v.6 n.2, p.273-294, April 2005
Erich Grädel , Wolfgang Thomas , Thomas Wilke, Literature, Automata logics, and infinite games: a guide to current research, Springer-Verlag New York, Inc., New York, NY, 2002 | weak alternating automata;complementation
378045 | Fast and Accurate Algorithms for Projective Multi-Image Structure from Motion. | AbstractWe describe algorithms for computing projective structure and motion from a multi-image sequence of tracked points. The algorithms are essentially linear, work for any motion of moderate size, and give accuracies similar to those of a maximum-likelihood estimate. They give better results than the Sturm/Triggs factorization approach and are equally fast and they are much faster than bundle adjustment. Our experiments show that the (iterated) Sturm/Triggs approach often fails for linear camera motions. In addition, we study experimentally the common situation where the calibration is fixed and approximately known, comparing the projective versions of our algorithms to mixed projective/Euclidean strategies. We clarify the nature of dominant-plane compensation, showing that it can be considered a small-translation approximation rather than an approximation that the scene is planar. We show that projective algorithms accurately recover the (projected) inverse depths and homographies despite the possibility of transforming the structure and motion by a projective transformation. | Introduction
This paper extends our previous multi-image structure-from-motion (SFM) algorithms [18][26][16] [24] from
the Euclidean to the projective context. Previously, we assumed that the camera calibration was known and
fixed; the new versions of our algorithms handle sequences with varying and unknown calibrations. We also
adapt our algorithms for the common situation where the calibration is measured up to moderate errors and
is known to be fixed.
As in our previous work, we aim for an approach that is fast and accurate on sequences with small
"signal," i.e., with small (translational) image displacements. The small signal makes the reconstruction
problem difficult, so to be effective an algorithm must exploit all the information in the sequence: we
require an intrinsically multi-image approach [19]. 1 On the other hand, the small displacements simplify
the problem of finding correspondence and limit the effects of correspondence outliers on the reconstruction,
since the correspondence errors cannot be large. Motivated by these factors, as in [18][26][16], we disregard
the correspondence problem and present an algorithm that reconstructs from pre-tracked point features over
a large number of images.
See [21][20] for extensions of our approach to a "direct method" ([5][8]) that reconstructs directly from
the image intensities as well as from pre-tracked point and line data.
For small displacements, most scene points are visible in all the images. This makes it possible to use
fast factorization methods, as we do here [18][26][16][31][30]. It also means that one can use multiple images
to eliminate correspondence outliers. We have found that a good way to get reliable feature tracks is to
compute correspondences first for image pairs and then restrict to features that are tracked consistently over
all images. Eliminating features lowers the signal, so an intrinsically multi-image approach, that effectively
exploits the consistent feature tracks over the entire sequence, becomes even more crucial.
As in our past work, we focus on scenes with a large range of depths from the camera (large perspective
effects), which factorization approaches previous to this work could not handle. For shallow-depth scenes,
one can use algorithms such as [31] to complement our approach. Our assumptions of large depth range and
small image displacements imply that the camera motions are small, and we have designed our approaches
to exploit this.
We present two classes of algorithms. One works only for motions that are "truly" general, i.e., with
camera positions that do not lie on a single plane, but we expect it to be more effective for such motions than
the other approach. In [16], and below, we describe how one can determine self-consistently whether or not
1 By an intrinsically multi-frame approach (which can be either batch or recursive), we mean one that reconstructs directly
from many images, rather than first reconstructing from subsets of a few images and combining the results.
the motion is general enough for this approach to work. The second approach works for any moderate-sized
motions, but it is most effective when the camera moves roughly along a line, and we propose it mainly for
this type of motion. However, our experiments show that it also works well when the camera moves on a
plane or makes unrestricted 3D motions.
Unlike the projective factorization approach of Sturm and Triggs [30], our approach needs no initial guess
for the projective depths. It succeeds in situations where the Sturm/Triggs algorithm [30] fails and usually
gives better results. It factorizes a smaller matrix than that used by Sturm and Triggs and, thus, should be
faster on large problems.
Our discussion illustrates a number of interesting theoretical points. We demonstrate the close analogy
between recovering rotations in the Euclidean context and recovering 2D projective transforms (homographies)
in the projective one. We show that compensating for the dominant plane as in [9][27][11] can be
understood as a small-translation approximation rather than an approximation that the scene is planar. We
show that algorithms can recover the (projected) inverse depths and homographies even in projective SFM.
Some of our experiments focus on the common situation where the camera calibration remains fixed over
the sequence and is approximately known. In experiments with general motion, we show that our algorithm
gives a fast computation of the projective structure which approaches the accuracy of the maximum-likelihood
least-squares estimate. A mixed Euclidean/projective strategy does somewhat better than a
completely projective one. In experiments with linear motions, we find that the Euclidean approach with
a slightly wrong calibration does better than the projective one for intermediate translation directions, but
worse for translations close to the image plane or viewing direction. Our results again compare well with
those of a projective maximum-likelihood estimate. Finally, we show that our approach also gives better
results than the Sturm/Triggs approach when the camera moves on a plane.
1.1
Summary
The next section describes notation and preliminary results that we need to derive our algorithms. Section
2.1 shows that one can accurately recover the "projected" inverse depths despite the freedom to alter the
structure by a projective transform. Section 2.2 characterizes the image displacements caused by the camera's
motion and calibration changes, and Section 2.3 characterizes the flows due to infinitesimal 2D projective
transforms, i.e., infinitesimal homographies.
Section 3 describes our general-motion algorithm in fully projective and mixed Euclidean/projective
versions. Results for this algorithm are presented in Section 3.4. Section 4 describes our algorithm specialized
for the linear-translation case. Section 4.3 gives the results of experiments with it, and Section 4.4 explains
how to extend it to handle any translational motion.
Preliminaries
Notation and Definitions. We use MATLAB notation: a semi-colon separates entries in a column vector,
a comma or space separates entries in a row vector, and a colon indicates a range of indices. We denote the
average value of a quantity X by hXi. If M is a matrix (of any size) with some singular values much bigger
than the rest, we refer to the large singular values as the leading singular values, and to the corresponding
singular vectors as the leading singular vectors.
Given a vector V, define [V]_2 as the length-2 vector consisting of the first two components of V. If V is
a 3D point, define the ideal image point corresponding to V by \bar{V} \equiv [V]_2 / V_z . Note that we take the "ideal
image" to have focal length 1. Define the 3D point \bar{v}_1 corresponding to the 2D point v_1 by \bar{v}_1 \equiv [v_1 ; 1].
If M is a 3 \Theta 3 matrix and v_1 is a 2D image point, let M \circ v_1 denote the image point obtained from v_1
after multiplying by M; we use the notation v_1 \Theta v_2 to mean \bar{v}_1 \Theta \bar{v}_2 .
If V is a 3D vector, we have the identity

  M \circ \bar{V} = [M V]_2 / [M V]_z .    (1)
Note that in general
Assume we have N quantities i a indexed by a. We define fig to denote the length-N column vector
whose a-th element is i a . We also use the notation
\Upsilon to indicate a column vector, typically a "long" vector
with length greater than 3. Similarly, if we have quantities ab indexed by a and b; we let denote the
matrix whose (a; b)-th entry is ab .
Let there be N p points tracked over N I images, where we label the images by 1g. We
take the zeroth image as the reference image. Define Pm j
to be the m-th 3D point
in the coordinate system of the zeroth image. Let p i
denote the m-th image point in the i-th
image and let
.
Let K i denote the calibration matrix for the i-th image, where we define these with respect to the ideal
image, e.g.,
(neglecting noise). We take the calibration matrices to have the standard upper-diagonal form
22 K i0
Let T_i and R_i represent the translation and rotation from the reference image 0 to the i-th image, and
let T^{0i} denote the corresponding translation expressed with respect to the reference image. We define the
motion of the 3D point P under R_i and T_i by P \to R_i P + T_i . We refer to the T^{0i} as the 3D epipoles,
since the epipoles in the reference image are given by e^i \equiv [T^{0i}]_2 / T^{0i}_z .
2.1 Projective Inverse Depths
In our experiments, we report results for the (projected) inverse depths. We show that one can recover these
accurately in projective SFM.
In the projective context, one can recover the structure only up to a projective transform \Pi; where \Pi is
a 4 \Theta 4 matrix. The structure changes under \Pi by \Delta_m [P'_m ; 1] = \Pi [P_m ; 1], where the \Delta_m are
scalars. Dividing by Z'_m and Z_m , one can rewrite this in terms of the noise-free ideal image coordinates \bar{P}_m .
Since we adopt the coordinate system of the zeroth image, we only need to consider \Pi that leave the
image points fixed in this image. Then, for generic scenes, the first three rows of \Pi must have the form
[ 1_3  0_3 ], where 1_3 is the 3 \Theta 3 identity matrix and 0_3 is a length-3 column of zeros. This implies

  Z'^{-1}_m = \lambda Z^{-1}_m + ( \alpha x_m + \beta y_m + \gamma ),    (2)

for constants \lambda, \alpha, \beta, \gamma determined by the last row of \Pi: up to small effects of noise, one can
accurately recover the inverse depths up to an additive plane and a multiplicative scale.
Define a N_p \Theta N_p projection matrix QL annihilating the three vectors {1}, {x}, {y}. Then (2) implies
one can accurately recover the projection QL {Z^{-1}} up to scale and the small effects of noise. In our
experiments, we compare the accuracy of our various algorithms in recovering this projection.
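For concreteness, the projection QL and the error measure used in our experiments (the angle between the projected true and recovered inverse depths, which is insensitive to the scale and additive-plane ambiguity) can be computed as in the following Python/NumPy sketch. The helper names are illustrative only, not the code used in the experiments.

import numpy as np

def projection_QL(x, y):
    # Orthogonal projector onto the complement of span{ {1}, {x}, {y} }.
    B = np.column_stack([np.ones_like(x), x, y])        # N_p x 3
    Q, _ = np.linalg.qr(B)
    return np.eye(len(x)) - Q @ Q.T

def projected_depth_error_deg(inv_depth_true, inv_depth_est, x, y):
    # Angle (degrees) between the two projected inverse-depth vectors;
    # invariant to overall scale and to adding a plane a + b*x + c*y.
    QL = projection_QL(x, y)
    a, b = QL @ inv_depth_true, QL @ inv_depth_est
    c = abs(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))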
2.2 Image Displacements due to Varying Calibration
Consider two images, e.g., images 0 and i; and neglect the noise. We have
Note that
\Theta
(one can eliminate K i from the denominator on the right since
\Theta
1). Let
represent the image point one would get by a pure translation without changing the original calibration
Write the image displacements as the sum of a pure translational piece and a remaining piece that
contains the rotational effects:

  d^i_m = d^i_{Tm} + d^i_{Rm} ,    (3)

where d^i_{Tm} \equiv p^i_{Tm} - p_m and d^i_{Rm} \equiv p^i_m - p^i_{Tm} . One can rewrite d^i_{Rm} as the displacement of
p^i_{Tm} under the 2D projective transform K^i R^i (K^0)^{-1} ; thus, the d^i_{Rm} represent the displacements due to
a pure 2D projective transform. The form
of (3) leads naturally to our algorithms described below.
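As a small illustration of the homography part of (3), the following Python/NumPy helper (ours, not the authors' code) applies the pure 2D projective transform K^i R^i (K^0)^{-1} to a given set of image points and returns the induced displacements; noise and the translational contribution are ignored.

import numpy as np

def homography_displacements(points, K_i, R_i, K0):
    # Displacement of each input point under the pure 2D projective
    # transform M = K_i R_i K0^{-1} (the "M o p" operation of Section 2).
    M = K_i @ R_i @ np.linalg.inv(K0)
    ph = np.column_stack([points, np.ones(len(points))]) @ M.T
    return ph[:, :2] / ph[:, 2:3] - points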
2.3 Infinitesimal 2D Projective Transforms
Our algorithm requires characterizing the form of an infinitesimal 2D projective transform or homography.
Suppose that a set of image points p 0
m is given by a small 2D projective transform M of image points pm .
Write the 3 \Theta 3 matrix M as

  M = 1_3 + \epsilon [ F  I ; J^T  0 ],    (4)

where 1_3 is the 3 \Theta 3 identity matrix, \epsilon is small, F is a 2 \Theta 2 matrix, and I and J are 2 \Theta 1 vectors. For the
2D image displacements, to first order in \epsilon,

  p'_m - p_m \approx \epsilon ( F p_m + I - p_m ( J \cdot p_m ) ).    (5)

For each of the 8 parameters in F, I, J, we define a length-2 vector h^{(b)}_m giving the corresponding first-order
displacement due to the infinitesimal homography; the entries of these vectors are polynomials in the image
coordinates involving 1, x_m , y_m , x_m^2 , x_m y_m , and y_m^2 . Define the 8 \Theta 2 matrix h_m such that the b-th row of h_m equals h^{(b)}_m .
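The eight first-order flow vectors h^{(b)}_m can be written down explicitly. The Python/NumPy sketch below uses our own parameter ordering (the four entries of F, then I, then J), which is a bookkeeping choice only.

import numpy as np

def homography_flow_basis(x, y):
    # 8 x 2 matrix whose b-th row is h^(b) at the point (x, y): the derivative
    # of the displacement F p + I - p (J . p) with respect to each parameter.
    return np.array([
        [x, 0.0], [y, 0.0],            # F11, F12
        [0.0, x], [0.0, y],            # F21, F22
        [1.0, 0.0], [0.0, 1.0],        # I1, I2
        [-x * x, -x * y],              # J1
        [-x * y, -y * y],              # J2
    ])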
2.4 Additional Definitions for the Algorithms
The homography flow vectors \Psi^{(b)} . Assuming N_p image points, we define eight length-2N_p "homography
flow" vectors \Psi^{(b)} corresponding to the h^{(b)}_m defined above, by stacking the x-components of the h^{(b)}_m
over all points followed by their y-components.
Define the 2N_p \Theta 8 matrix \Psi such that its b-th column equals \Psi^{(b)} .
Translational flow vectors \Phi_a . Define the three length-2N_p translational-flow vectors \Phi_x , \Phi_y , \Phi_z ,
corresponding to the translational displacements in (3), by

  \Phi_x \equiv [ {Z^{-1}} ; {0} ],   \Phi_y \equiv [ {0} ; {Z^{-1}} ],   \Phi_z \equiv - [ {x Z^{-1}} ; {y Z^{-1}} ].

Let \Phi denote the 2N_p \Theta 3 matrix whose columns are \Phi_x , \Phi_y , \Phi_z .
Image displacement matrix. We organize the observed image displacements d i
m into a (N I \Gamma 1) \Theta 2N p
matrix D; by putting all the x and then the y coordinates for a given image on a single row.
Bias Compensation Matrix C. Define a
We use
F
Note that one can compute products of C \Gamma1=2 and C with O (N) computation.
Modified translation vectors \breve{T}_a . For a \in {x, y, z}, let {T^{0i}_a} be the length-(N_I \Gamma 1) vector whose
elements are the T^{0i}_a . Define \breve{T}_a by applying the bias compensation C^{-1/2} to {T^{0i}_a} .
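The quantities of this section can be assembled as in the following Python/NumPy sketch; the helper names are ours, and the bias-compensation matrix C is omitted.

import numpy as np

def flow_matrices(x, y, inv_depth):
    # Psi: 2N_p x 8 homography-flow matrix; Phi: 2N_p x 3 translational-flow
    # matrix with columns Phi_x, Phi_y, Phi_z (x-components first, then y).
    N = len(x)
    Psi = np.zeros((2 * N, 8))
    basis = [[x, np.zeros(N)], [y, np.zeros(N)], [np.zeros(N), x], [np.zeros(N), y],
             [np.ones(N), np.zeros(N)], [np.zeros(N), np.ones(N)],
             [-x * x, -x * y], [-x * y, -y * y]]
    for b, (bx, by) in enumerate(basis):
        Psi[:N, b], Psi[N:, b] = bx, by
    Phi = np.zeros((2 * N, 3))
    Phi[:N, 0] = inv_depth
    Phi[N:, 1] = inv_depth
    Phi[:N, 2] = -x * inv_depth
    Phi[N:, 2] = -y * inv_depth
    return Psi, Phi

def displacement_matrix(ref_points, image_points):
    # ref_points: N_p x 2; image_points: list of N_p x 2 arrays for images 1..N_I-1.
    return np.array([np.concatenate([p[:, 0] - ref_points[:, 0],
                                     p[:, 1] - ref_points[:, 1]])
                     for p in image_points])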
3 General-Motion Algorithm
Assumptions. As in the approach of [16], the algorithm requires that the translational motion not be too
large (e.g., with |T^0|/Z < 1/3) and that the camera positions do not lie in a plane. One can automatically
detect when the camera is moving on a plane or line and use the approach of Algorithm II below [26].
3.1 Algorithm (Proj)
Step P1: Homography compensation. Assuming that the translations are zero, we recover the homographies
separately between the reference image and each subsequent image i. We
compensate for these homographies, defining the compensated image i by p i
. Let the
image displacements d i
m and displacement matrix D now refer to the compensated image points p i
cm .
Step P2a: Homography elimination. Using Householder matrices [25][4], we compute a (2N_p \Gamma 8) \Theta 2N_p matrix H that
annihilates the subspace generated by the 8 vectors \Psi^{(b)} , where H is a
projection (i.e., the rows of H are orthogonal).
Step P2b: Singular value decomposition. We define the modified displacement matrix D_CH \equiv C^{-1/2} D H^T
and compute its singular-value decomposition (SVD). Let
\bar{A}^{(1,2,3)} be the three leading right singular vectors
of D_CH , and let A \equiv [ \bar{A}^{(1)} ; \bar{A}^{(2)} ; \bar{A}^{(3)} ].
Step P3a: Depth recovery. We recover the depths by solving the linear system

  H [ \Phi_x  \Phi_y  \Phi_z ] = [ \bar{A}^{(1)}  \bar{A}^{(2)}  \bar{A}^{(3)} ] U    (8)

in the least-squares sense for the Z^{-1}_m (which enter linearly through the \Phi_a ) and the 3 \Theta 3 matrix U .
Step P3b: Translation recovery. Using the recovered values for the \Phi_a , we solve the linear system relating
D_CH to the products of the modified translation vectors \breve{T}_x , \breve{T}_y , \breve{T}_z with H [ \Phi_x  \Phi_y  \Phi_z ],
for the \breve{T}_a . The {T^{0i}_a} are then obtained from the \breve{T}_a by undoing the bias compensation.
Step P4: Improved homography recovery. Let the \hat{T}^{0i} represent the current estimates of the T^{0i} .
For each i, we solve the linear system (10), which is linear in the residual-homography parameters I^i , F^i , J^i
given the current estimates of the T^{0i} and the Z^{-1}_m , for I^i , F^i , J^i . Our estimate of the residual
homography from the reference image to image i is then given by (4) with these recovered parameters.
Step P5: Iteration (optional). We iteratively repeat until convergence the following: 1) Compensate
for the residual homographies newly computed in Step P4; 2) Repeat Steps P2b-P4.
3.2 Discussion
Step P1: Homography compensation. This is a preprocessing step and sometimes unnecessary. Its
purpose is to make the effective motion smaller. One could also apply this step in the Sturm/Triggs approach
[30], since this approach, like ours, works best for small motions.
The errors in recovering the M i scale with the size of the translational image displacements,
O
O
where
fi fi in the correction term denote the average sizes of the image noise, inverse depths, and
translations. Under our moderate-translation assumption, the
fi fi are small.
Compensating for the M i is equivalent to compensating the dominant plane 2 [9] [27][11]. Our formulation
makes it clear that one can understand this compensation as a small-translation approximation rather than
as an approximation that the scene is planar.
2 Though the initial step of our method is the same as in [9], our subsequent algorithm differs in two important respects: it
is linear, and it corrects for the first-order errors in the compensation of the dominant plane.
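One straightforward way to carry out Step P1 under the stated zero-translation assumption is sketched below in Python/NumPy: fit each M^i by the standard DLT and map image i back through (M^i)^{-1}. This is a generic sketch, not necessarily the estimator used in our implementation.

import numpy as np

def fit_homography(src, dst):
    # Standard DLT fit of a 3x3 homography M with dst ~ M o src (assumes the
    # translations are zero, as in Step P1).  src, dst: N_p x 2 arrays.
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    _, _, Vt = np.linalg.svd(np.asarray(A, float))
    M = Vt[-1].reshape(3, 3)
    return M / M[2, 2]

def compensate_image(points_i, M):
    # The compensated points p^i_c = M^{-1} o p^i of Step P1.
    ph = np.column_stack([points_i, np.ones(len(points_i))]) @ np.linalg.inv(M).T
    return ph[:, :2] / ph[:, 2:3]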
Steps P2a and b: Homography elimination and SVD. One can compute H and its products with
O (N_p ) computation [16]. Our current implementation computes the products of H in O(N_p^2 ); MATLAB's
overhead makes our O (N_p ) implementation quite slow.
After the homography compensation in Step P1, we have
z
\Theta
z
where
We neglect the denominator in (11) and D, since it causes a second order correction O
. Multiplying
D by H eliminates the first-order corrections due to ffiM i , so up to second order we get a bilinear
expression for
Cx
Cy
Cz
z
Step 2b is based on approximating DCH as bilinear in the structure and motion.
Our use of H to annihilate the residual homographies derives from the optical-flow technique of Jepson
and Heeger [10], as generalized by [25] to apply to arbitrary, rather than regular, image-point configurations.
For sideways translations, the denominator in (11) exactly equals 1. If we can succeed in making the
residual homography ffiM i small, for example by means of the iteration in Step P5, then DCH becomes
exactly bilinear for sideways translations, no matter how big. Thus, our algorithm is capable of giving good
results for large sideways translations. 3 The experiments of [14] with the Euclidean version of our approach
show that iterating typically does make the residual rotations small, even with large translations. Rotations
are the Euclidean equivalent of homographies, so this implies that iterating will also make the ffiM i small,
and that our approach will succeed for large sideways translations. We have confirmed this on a few test
sequences, see below. However, our approach is intended mainly for small or moderate translations.
As discussed in [16], multiplying by C \Gamma1=2 reduces the bias due to singling out the reference image for
special treatment.
This algorithm requires that DCH has 3 singular values that are much larger than the rest. This is the
precise form in which we impose our general-motion assumption. If DCH does not have 3 large singular
values for a given sequence, one should use Algorithm II below rather than the approach of this section.
3 We presented an algorithm specialized for sideways motions in [26].
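The core of Steps P2a-P2b can be sketched as follows in Python/NumPy. Here the bias-compensation matrix C is taken as the identity, and an SVD is used in place of the Householder construction; this changes the cost but not the result.

import numpy as np

def annihilating_projection(Psi):
    # Rows of H form an orthonormal basis of the orthogonal complement of the
    # eight homography-flow vectors, so H @ Psi == 0.  Psi: 2N_p x 8.
    U, _, _ = np.linalg.svd(Psi, full_matrices=True)
    return U[:, 8:].T                              # (2N_p - 8) x 2N_p

def leading_subspace(D, H, k=3):
    # SVD of the modified displacement matrix (C omitted): D H^T.
    DH = D @ H.T
    _, s, Vt = np.linalg.svd(DH, full_matrices=False)
    return Vt[:k], s                               # A^(1..k) as rows, singular values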
Step P3a: Depth recovery. (8) reflects the fact that the recovered right singular vectors
A (b) must
generate approximately the same subspace as the three translational flow vectors
\Phi x;y;z .
As discussed in Section 2.1, one can recover the Z \Gamma1
only up to an additive plane and multiplicative
scale. One can see this explicitly from the fact that H annihilates all contributions to the
\Phi x;y;z from the
components of
are length-N p vectors.
One can solve (8) with O (N_p ) computation, as discussed in [16]. In our current implementation, we use
a simple O(N_p^3 ) solver.
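Under our reading of (8), a dense solver can be sketched as below in Python/NumPy: the inverse depths are parametrized in the subspace orthogonal to {1}, {x}, {y} (the only part that is recoverable), and the overall scale is fixed by taking the smallest singular vector of the stacked homogeneous system. The O(N_p^3) cost matches the simple solver mentioned above, not the O(N_p) scheme of [16].

import numpy as np

def recover_inverse_depths(A, H, x, y):
    # A: 3 x (2N_p - 8) leading right singular vectors (as rows); H: (2N_p - 8) x 2N_p.
    N = len(x)
    U0, _, _ = np.linalg.svd(np.column_stack([np.ones(N), x, y]), full_matrices=True)
    P = U0[:, 3:]                                   # basis orthogonal to {1},{x},{y}
    zeros = np.zeros((N, P.shape[1]))
    # H Phi_a written as a linear map of the coefficients w, with Z^-1 = P w.
    Phi_maps = [np.vstack([P, zeros]),
                np.vstack([zeros, P]),
                np.vstack([-x[:, None] * P, -y[:, None] * P])]
    blocks = []
    for a in range(3):
        left = H @ Phi_maps[a]                      # (2N_p - 8) x (N_p - 3)
        right = np.zeros((H.shape[0], 9))
        right[:, 3 * a:3 * a + 3] = -A.T            # minus sum_b U_ab A^(b)
        blocks.append(np.hstack([left, right]))
    M = np.vstack(blocks)
    _, _, Vt = np.linalg.svd(M, full_matrices=False)
    sol = Vt[-1]                                    # null-space direction = solution
    return P @ sol[:P.shape[1]], sol[P.shape[1]:].reshape(3, 3)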
Step P4. Improved homography recovery. From (11), we get
\Theta d i
\Theta
\Theta
where
denotes the size of the error in estimating the T 0i and
denotes the average size of T 0i . As
before,
fi fi and j denote the average sizes of the inverse depths and the noise. Define I i ; F i , J i as in (4)
by
Then (13) and (5) give the linear system (10).
The linear system (10) does not determine the residual homographies \delta M^i unambiguously. This just
reflects the projective covariance of the equations: one is free to change the projective basis, which changes
the values of the \delta M^i , Z^{-1}_m , and the T^{0i} . As discussed in Section 2.1, one can characterize the projective
transforms that leave the reference image fixed by their effects on the Z^{-1}_m . We remove the ambiguity in
recovering the \delta M^i by setting to zero the part of \delta M^i that would add a plane to the inverse depths.
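A simplified Python/NumPy sketch of the least-squares fit behind Step P4 is given below. Here the current translation estimates are used only to form the residual displacements that are passed in, and the plane-removal step that resolves the projective ambiguity is omitted; the helper name is ours.

import numpy as np

def fit_residual_homography(points, residual_disp):
    # points: N_p x 2; residual_disp: N_p x 2 displacements with the current
    # translational contribution already subtracted.  Returns (F, I, J).
    rows, rhs = [], []
    for (x, y), (dx, dy) in zip(points, residual_disp):
        rows.append([x, y, 0, 0, 1, 0, -x * x, -x * y]); rhs.append(dx)
        rows.append([0, 0, x, y, 0, 1, -x * y, -y * y]); rhs.append(dy)
    params, *_ = np.linalg.lstsq(np.asarray(rows, float), np.asarray(rhs), rcond=None)
    F = params[:4].reshape(2, 2)
    I, J = params[4:6], params[6:8]
    return F, I, J          # residual homography ~ 1 + eps [[F, I], [J^T, 0]]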
Remark. One could also estimate improved values of the image points as in Step 4 of the algorithm of
[16], but we have not implemented this.
3.3 Variations
The algorithm we have presented is fully projective: it can handle arbitrary changes in the linear calibrations
images. But, most sequences are taken with a single camera, so that the calibration remains
essentially constant over the sequence (with the possible exception of the focal length). We have considered
a few simple modifications of Step P1 that exploit this. One variation, the fixed algorithm, recovers
the homographies between the reference and subsequent images by a least-squares optimization under the
assumption that the calibration K is fixed. (Like the original Step P1, it assumes that the translations
are zero.) The other variations exploit the fact that the calibration error is typically small, so that one
can neglect the homographies K i R(K) \Gamma1 or approximate them by pure rotations R i . The Proj-nocomp
variation of our algorithm does not compensate for the homographies at all, simply eliminating Step P1.
The version Proj-Unrot approximates K by a pure rotation. Instead of Step P1, it computes
and compensates for the best rotations transforming the reference image to the subsequent images [16].
Finally, we have compared our projective algorithms to a Euclidean version of the same algorithm, which
we described in [16].
3.4 General-Motion Experiments
In the following synthetic experiments, the motion, structure, and image noise varied randomly for each
trial. The sequences consisted of 15 images of points with a field of view (FOV). The 3D depths
varied from 20 Z 100. In Experiments 1-4 we randomly chose each translation component (with
respect to the zeroth camera position) such that -T_max \leq T_{x,y,z} \leq T_max and added random rotations up
to a maximum of about 20°. In Experiment 5, the motion consists of a scene rotation by up to about
(This corresponds to very large camera translations in camera-centered coordinates and poses a serious
challenge for our approach, since it is targeted for moderate translations. Other algorithms, e.g., the two-image
or Tomasi/Kanade approach, are more appropriate for this situation [19][31].) The noise was one-pixel
Gaussian, assuming a 512 \Theta 512 image and the specified FOV.
We simulated calibration error by multiplying the images derived as above by a matrix K; with
This corresponds to a calibration error in the focal length of 5-10%.
Table
1 shows the results for several versions of our algorithm in comparison to the maximum-likelihood
estimates. The entries give the mean error in degrees over all trials between the recovered and ground-truth
values of QL
(Section 2.1). Proj refers to our algorithm of Section 3.1, and P-U refers to
Proj-Unrot described above. MLE gives the results for the maximum-likelihood least-squares estimate,
computed by a standard Levenberg-Marquardt steepest-descent approach starting from the ground truth.
Expt #Seqs Proj P-U Euc MLE Tmax
Table
1: Experiments 1-5: general motion. Average errors in degrees in the projected inverse depths
1g.
Euc is our Euclidean algorithm [16].
Proj-Unrot gives the best of the tabulated results. The fixed results (not shown) are comparable, but
not good enough to justify their extra computational cost. For small translations, with our
algorithm should be most accurate, Proj-Unrot does only 11% worse than the MLE. Euc also does well.
For its results are only 12% worse than those of Proj-Unrot. One of the reasons for the good
performance of Proj-Unrot and Euc is that the calibration "error," i.e., the difference between K and
the identity matrix, is moderate.
As expected, our algorithm does relatively less well than the MLE in Experiment 5. However, because of
the large baselines in this experiment, the resulting translational image displacements constrain the reconstruction
so strongly that our algorithms still give good results, i.e., Proj-Unrot averages better than 7°
error.
We also computed the results (not shown) for another variation of our algorithm, where instead of Step
P1 we computed all the homographies between the reference and subsequent images in a single optimization.
(One way of doing this is to use the 2D version of the Sturm/Triggs approach [30][15].) Since Step P1
computes the transform between image 0 and image i separately for each i; it overweights the reference
image noise, but by computing all the transforms simultaneously we can avoid this. However, we found that
the single-optimization approach did not improve our results.
Real-Image Sequence. We tested our algorithm on the Castle real image sequence available from CMU.
Figure
1 shows the first image of this sequence. We generated the images for this experiment by multiplying
the correctly calibrated images (for unit focal length) by 12.2 and then shifting by (0.403, 0.093) (to center
the images and scale them to unity). Thus the assumed focal length was incorrect by a factor of 12.2. Our
pure projective approach Proj gave an error for the projected inverse depths of 0.40°, compared to 0.04° for
the MLE. Euc gave an error of 28.2°. The largeness of the Euclidean error is due to the very large error in
the assumed focal length.
Figure 1: Castle image.
Since the scene in this sequence has a very shallow depth, an affine approach such as the Tomasi/Kanade
algorithm [31] is more appropriate than ours [16].
Comparison to the Sturm/Triggs Approach. The Sturm/Triggs algorithm [30] is a factorization
method that, like ours, deals well with small motions and large perspective effects. We compared our
approach to this algorithm, using an implementation of it that we created previously for other purposes.
We have optimized the code for the Sturm/Triggs algorithm to about the same extent as for ours. In the
Sturm/Triggs implementation, we followed the advice of [32]: before applying the algorithm, we first centered
and scaled each image to a unit box, and then normalized each homogenous image point to unit norm. We
initialized the algorithm by setting all the unknown projective depths equal to one [32][1][7], as is appropriate
for small motions. (Since we aim for a true multi-image technique that works for low signal-to-noise, we do
not compute the projective depths from two or three images as in the original algorithm of [30].) We have
proven in [15] that the iterative version of the Sturm/Triggs algorithm converges, and we also tested our
approach against the iterated version. The implementation of Steps P2 and P3 of our algorithm requires
about 80 lines of MATLAB code, and Steps P4 and P5 require an additional 30. The iterative version of the
Sturm/Triggs approach requires about 80 lines.
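For reference, the iterated Sturm/Triggs baseline can be sketched in a few lines of Python/NumPy. This follows the standard published scheme, with all projective depths initialized to one and with row/column balancing; it is not the exact code used in our experiments.

import numpy as np

def sturm_triggs(points, n_iter=50):
    # points: N_I x N_p x 2, already centered and scaled.  Returns the rank-4
    # factors: cameras P (3N_I x 4) and projective structure X (4 x N_p).
    NI, Np, _ = points.shape
    ph = np.concatenate([points, np.ones((NI, Np, 1))], axis=2)
    lam = np.ones((NI, Np))                         # projective depths, all 1
    for _ in range(n_iter):
        lam = lam / np.linalg.norm(lam, axis=1, keepdims=True)
        lam = lam / np.linalg.norm(lam, axis=0, keepdims=True)
        W = (lam[:, :, None] * ph).transpose(0, 2, 1).reshape(3 * NI, Np)
        U, s, Vt = np.linalg.svd(W, full_matrices=False)
        P, X = U[:, :4] * s[:4], Vt[:4]
        lam = (P @ X).reshape(NI, 3, Np)[:, 2, :]   # re-estimate the depths
    return P, X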
We created synthetic sequences using the ground-truth structure from two real-image sequences: the
UMASS/Martin-Marietta rocket-field sequence [3] and the UMASS PUMA sequence [12]. The points in
the rocket-field sequence range from 17 to 67 in depth and cover an effective FOV of 37 ffi , while the PUMA
points range from 13 to 32 in depth and cover an effective FOV of 33 ffi . Table 2 gives the parameters of the
experiments, and Table 3 shows the errors in recovering the projected inverse depths, the epipoles in the
zeroth image, and the homographies. As Table 2 indicates, in most experiments each image had a different,
randomly chosen calibration. Define r (a) to be a random number chosen uniformly in the interval [a; \Gammaa].
For each of the N I images, we selected the calibration matrix via
\Theta
\Theta
\Theta
\Theta
\Theta
\Theta
chose r separately for each entry of the matrix.)
Table
2 gives the size of the added noise in pixels, where we define the size of a pixel by taking the
maximum magnitude of the image point coordinates to correspond to 256 pixels. Note that this is after
applying the calibration matrix K: Since the shift in the camera center due to
\Theta
1:2;3
displaces the image
region from the origin, this noise is usually larger than it would be for an image region centered on the origin.
We define the epipole error for a sequence as the average over images
true
true
calc and e i
true are the calculated and true epipoles in the reference image. One can show, as in Section
2.1, that projective transforms that leave the reference image fixed also leave these epipoles fixed.
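One plausible implementation of this epipole error measure is sketched below in Python/NumPy; the assumption that each 2D epipole is lifted to the homogeneous 3-vector (e; 1) before taking the angle is ours.

import numpy as np

def epipole_error_deg(e_calc, e_true):
    # e_calc, e_true: 2D epipoles in the reference image.
    a = np.append(e_calc, 1.0); a /= np.linalg.norm(a)
    b = np.append(e_true, 1.0); b /= np.linalg.norm(b)
    return np.degrees(np.arccos(np.clip(abs(a @ b), -1.0, 1.0)))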
We obtain the homography error for a sequence by averaging the homography error over all images
with i 1. By the "homography error," we mean the error in recovering the K projective
transform leaves the reference image fixed, one can show that it changes this matrix by
true
(up to noise), where the V i are length-3 vectors. If G i is a 3 \Theta 3 homography matrix for the i-th image,
define the corresponding invariant homography matrix by
invar
where we use the backslash notation of MATLAB, i.e., e i
true is the matrix division of e i
true into G i .
With this definition, invar
is invariant (up to noise) to projective transforms that leave the reference
image fixed.
As before, we denote the recovered homographies by M i . We define the homography error for image
i by the angle in degrees between the length-9 vectors V true and V calc , where these contain the entries,
respectively, of invar
and invar
Expt. Struct Uncalib oe T oe ROT Noise S/T Proj
deg. pix. itermax itermax
Table
2: Parameters for the experiments comparing Proj and the Sturm/Triggs approach (S/T). The 'Struct'
column indicates the structure used to generate the sequences. A 1 in the 'Uncalib' column indicates that we
introduced different calibrations for each image (see text). We choose T x , T y , T z independently as Gaussian
variables. The columns labelled oe T tabulate the standard deviations of the T x;y;z : oe ROT characterizes the
typical size of the rotations. The 'noise' column indicates the standard deviation of the Gaussian noise
added. The remaining two columns indicate the maximum number of iterations allowed, respectively, for
the Sturm/Triggs approach and for Step P5 of the Proj algorithm.
Our results show that our algorithm Proj usually gives better results for the structure and motion
than the Sturm/Triggs approach. One iteration of Proj takes slightly longer than one iteration of the
Sturm/Triggs approach (recall that our current implementation is slower than necessary). However, the
preprocessing Step P1, that is, the initial homography recovery, accounts for most of the computation time.
Table
3 shows that the factorization part of our algorithm, Steps P2-P3, is about four times faster than the
Sturm/Triggs approach. Also, the computation times of Experiment 7 suggest that our approach converges
more quickly than the Sturm/Triggs method; see also Section 4.4.
We also checked the performance of our algorithm on a sequence with large sideways translations. With
the PUMA structure, oe (refer to Table 2 and the
definition of r (a) above), our algorithm gave errors for the projected depths of 1.6°. For this sequence, the
largest translation had size 13.4, which is comparable to 13.9, the distance of the closest 3D point to the
camera's reference position.
4 Algorithm II
The algorithm presented above assumes that the translational motion is sufficiently general, i.e., that the
camera locations do not lie close to any line or plane. In this section, we present a version of our algorithm
that deals with the common case of a camera moving along a line. One can extend it to deal with more
Expt. Time (sec) Z \Gamma1 Epis Epis Homog Homog Cycles
9 0.042 0.19 0.15 8.6 8.4 8.1 2.7 6.9 1.8 5.4 1
9(med) 0.04 0.19 0.15 6.8 6.7 7.1 1.1 2.8 1.1 4.9 1
4.3 3.2 14 1
11(med) 0.04 0.19 0.15 9.1 11 11 1.4 2.8 2.3 5.2 1
19(med)
Table
3: Results for the experiments with parameters given in the previous table. 'NoComp' and `NC'
refer to Steps P2 and P3 of the Proj algorithm, without any computation or initial compensation for the
homographies. The columns labelled 'S/T' give results for the Sturm/Triggs approach. We present errors
for the projected inverse depths ('Z \Gamma1 '), the epipoles ('Epis'), and the homographies ('Homog'). See text
for an explanation of the error measures used. The 'Cycles' column indicates the number of cycles used by
Proj in Step P5. All quantities shown represent the mean of the results over 100 random trials, except for
those rows labelled by '(med)', where the quantities represent the median over 100 random trials.
general motions such as a camera moving on a plane or in 3D, as we describe briefly below.
Definitions. Let the unit vector T 0 denote the direction of the T 0i , and let i j
Let
\Theta d
denote the (N I \Gamma 1) \Theta N p matrix whose (i; m)-th element equals (pm \GammaT 0 ) \Theta d i
be the N p \Theta N p projection matrix that annihilates the subspace generated by the eight
vectors
ae
\Theta h (b)
oe
One can show that this subspace is five-dimensional.
4.1 Algorithm Description
Step L0: Rescaling. We transform all images so that they center on the origin and have the same scale.
We choose the scale so
for the reference image. (At the end of the algorithm, we
transform the recovered unknowns back to the coordinate system of the original reference image.)
Step L1: Homography compensation. Assuming that the translations are zero, we recover the
homographies separately between the reference image and each subsequent image i.
Define the compensated image i by p i
. Let the image displacements d i
m and displacement
matrix D now refer to the compensated image points p i
cm .
Step L2a: Using Householder matrices [25][4], we compute a (N
annihilates the subspace generated by the six length-N p vectors f1g
is a projection.
Step L2b: Linear translation recovery. We solve the linear system
for T 0 .
Step L3: Iterative improvement of recovered translation. Starting from the previous estimate
we minimize the error
trace
\Theta d
\Theta d
with respect to T 0 . Take T
Step L4: Improved homography recovery. The same as in Step P4 of our general-motion approach.
Step L5: Iteration (optional). The same as in Step P5 of our general-motion approach.
4.2 Discussion
Step L0. As in [6] and [18][13], this step reduces the bias of our linear algorithm in Step L2.
Step L2. This step of our projective algorithm is exactly the same as in our Euclidean algorithm of [18];
the only difference is that the projective algorithm computes an estimate of KT rather than T: However,
in the projective case, the linear algorithm exploits more of the available information and gives a better
approximation to the result of minimizing the full error. Since HL annihilates only one more degree of
freedom than Q 5 ; the projective linear algorithm uses all but one of the available constraints, while the
Euclidean linear algorithm forgoes three of the available constraints. One can show that the additional
length-N p vector annihilated by HL is
z
z
For linear motion, (3) becomes
z
Assume we have compensated for the homographies in Step L1. At the ground-truth value for
to noise, and
independent of the denominator in (16). Thus, if the residual homographies ffiM i are small, our algorithm
works for any size and direction of translation [18]. This also holds for Step L3. If Step L1 does not suffice
to make the ffiM i small, the iteration in Step L5 may. Thus, we expect the algorithm to work even for very
large translations, as the Euclidean version of our approach has been shown to do [14].
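The linear core of Step L2b can be sketched as below in Python/NumPy. Each image point contributes the scalar constraint -d_y T_x + d_x T_y + (x d_y - y d_x) T_z = 0; optionally, H_L is applied across the point index to annihilate the residual-homography terms, and the unit vector best satisfying the stacked system is returned. The helper name and the optional-H_L interface are ours.

import numpy as np

def linear_epipole_estimate(points, D, HL=None):
    # points: N_p x 2; D: (N_I - 1) x 2N_p displacement matrix (x block, y block).
    N = len(points)
    x, y = points[:, 0], points[:, 1]
    dx, dy = D[:, :N], D[:, N:]
    A = np.stack([-dy, dx, x * dy - y * dx], axis=-1)    # (N_I-1) x N_p x 3
    if HL is not None:                                    # project out homography terms
        A = np.einsum('kn,inb->ikb', HL, A)
    A = A.reshape(-1, 3)
    _, _, Vt = np.linalg.svd(A, full_matrices=False)
    return Vt[-1]                                         # unit 3D epipole direction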
Step L3. For e =T 0 in or near the FOV, instead of Steps L2 and L3, one could use the recent method of
Srinivasan [29][28] to find the true global minimum of (15) directly. This is important, since [14][2] showed
that the least-squares error typically has several local minima for e near the image region. We have not yet
implemented Srinivasan's approach.
Remark. One could also recover the Z \Gamma1
m as in [18][23] and improve the homography computation as in
Step P4 of the general-motion algorithm. Since these are straightforward transcriptions of previous methods,
we do not discuss them here.
4.3 Linear-Translation Experiments
We generated sequences as in Experiments 1-4 except that the camera translated in constant steps of 0:2
along a line (with random rotations as before). We created 400 sequences, choosing random directions for
the image-plane projections of "
T and systematically varying T
y from :01 to 4 in steps of :01.
Figure
2 shows the mean angular errors in recovering T 0 for three versions of our algorithms: a pure Euclidean
approach [18][23], the projective approach Algorithm II, and Algorithm II with Step L1 replaced
by initial compensation of the rotations. We derived the curve labeled MLE by first using Levenberg-Marquardt
to compute the maximum-likelihood least-squares projective reconstruction and then computing
the calibration and motions from this by least-squares minimization.
The Euclidean estimate is worse than the projective ones when T is parallel to the image plane or to " z,
but it is comparable 4 for intermediate T. The overall median of these results is slightly better than those of
4 We verified that the Euclidean approach gives bad results for T ? "z simply because of the calibration error. With no
calibration error, it does perform well for T parallel to the image plane.
Figure 2: Angular errors (degrees) in recovering T^0 . The x axis shows the lower limit of a bin of size 0.25 in T;
each data point is an average over 25 trials from the indicated bin. Cyan circles: MLE; blue squares:
Euclidean. x and + are for our linear projective algorithm, with initial compensation for the rotations and
homographies, respectively.
the projective algorithms.
We also ran Algorithm II (with initial projective compensation) on all 55 choices of image pairs from
the Castle sequence (Figure 1). Our algorithm recovered T^0 with an average error of 1.1°, compared to 0.71°
for the MLE. The median errors for the two approaches were respectively 0.80° and 0.40°, and the maximum
errors were 6.6° and 4.4°. Though the assumed focal length was incorrect by a factor of 12.2 (Section 3.4),
our approach did nearly as well as the MLE.
Because of the large error in the focal length, the variation of initially compensating for a rotation instead
of a homography yielded relatively large errors of about 10° in the rotation. Our Euclidean algorithm usually
recovered T^0 accurately. Apart from 13 outliers (possibly due to local minima), the Euclidean algorithm
recovered T^0 with an average error of 5.1°.
Comparison to the Sturm/Triggs Approach. To compare Algorithm II to the Sturm/Triggs ap-
proach, we created synthetic sequences of 16 images and 32 points using the structure from the PUMA
sequence. Let \hat{T}_true denote the true translation direction and \hat{z} the viewing direction in the reference image.
We systematically varied the angle \theta_true between \hat{T}_true and \hat{z}
from 0° to 86° in increments of 2.9°. For each
selected \theta_true , we created 30 sequences, where we chose the projection of \hat{T}_true on the x-y plane randomly
for each sequence. For each image, we randomly chose the rotation R_i with a standard
deviation of 5° for the rotation around each axis, and we randomly and uniformly chose the translation
magnitudes up to a maximum of 1. Recall that the PUMA depths range from 13-32. We introduced varying
Figure 3: The angular errors (degrees) in recovering the 3D epipole T^0 for the Sturm/Triggs approach and Algorithm
II, plotted versus \theta_true ; each data point is the mean over 30 trials. 'Triangles' show the
result of one Sturm/Triggs iteration; 'diamonds' show the Sturm/Triggs result after convergence or after a
cut off of 300 iterations if the algorithm did not converge by this number. '+' shows the linear estimate of
Algorithm II, Step L2, and '*' shows the result after the nonlinear minimization in Step L3 of Algorithm II.
calibrations as in Experiments 6-19, and added Gaussian noise of 0:05 pixels, assuming that the maximum
magnitude of the image point coordinates corresponded to 256 pixels.
Figure
3 compares the results of Algorithm II and the Sturm/Triggs approach for the 3D epipoles.
Each data point represents the mean over the 30 trials for the indicated value of ' true =i
: For
our approach, we plot results for the error
calc
true is the true value of the 3D epipole and
calc is the value recovered. Since the Sturm/Triggs approach recovers a different 3D epipole T 0i
S=T for each
image, we plot the average error
ii
our approach, we show the errors for
the initial linear estimate Step L2 and for one iteration of the nonlinear estimate Step L3. For comparison,
we show the Sturm/Triggs estimates after one iteration and after the algorithm either converges or reaches
300 iterations.
One iteration of the Sturm/Triggs approach does much worse than our linear estimate, and the converged
Sturm/Triggs result is also much worse. Our linear estimate is almost as good as the nonlinear estimate.
Our nonlinear algorithm averaged about 2 seconds of computation and the linear algorithm took fractions
of a second, while the iterated Sturm/Triggs approach averaged about 9 seconds.
We also applied the iteration of Step L5 and allowed Algorithm II to converge. The average over all
Figure
4: The first image of the rocket-field sequence.
sequences of the number of cycles till convergence was ! 4, and the average time till convergence was 8
seconds, less than the 9 seconds for the Sturm/Triggs approach. The Sturm/Triggs approach averaged 177
iterations. However, we used a much stricter convergence criterion for the Sturm/Triggs approach, to give
it a chance to converge to an accurate result. As proven in [15], the Sturm/Triggs approach minimizes
a particular error function. We defined the algorithm to have converged when that error changed by less
iterations. We defined Algorithm II to have converged when the residual homography
recovered in Step L4 satisfied
For an additional comparison, we ran the Step L5 iteration of Algorithm II on 100 sequences similar
to those above, defining convergence by
On average, Step L5 converged in 7.5
cycles and always in less than 15.
Figure
5 shows similar results for the homography recovery, for the homography error defined in Section
3.4.
We also tested Algorithm II and the Sturm/Triggs approach on the rocket-field real-image sequence [3],
see
Figure
4. This sequence has large translations ranging up to 7:5 in magnitude (recall that the depths vary
from 17 to 67). The camera moves approximately along a line, with T i deviating from its average direction
by up to 1.5°. Steps L2 and L3 of our approach recovered T^0 with errors of 5.8° and 6.4°, respectively, and
Steps L1 and L4 recovered the homographies with average errors of 2.8° and 2.0°. The initial Sturm/Triggs
estimate had average errors of 14.7° and 15.5° for T^0 and the homographies, respectively. After 100 iterations,
the Sturm/Triggs approach gave errors of 3.3° and 7.1°.
We have found that the Sturm/Triggs approach gives poor results for linear motion when the motion is
forward (or backward). We created sequences with 16 images of 32 points for randomly chosen structures,
where the depths varied between 40 and 60, the translation direction was (0.1, 0.1, 1), the translation
magnitudes were chosen randomly up to 1, the rotations had a small standard deviation, and there was zero noise. We allowed the Sturm/Triggs
approach up to 3000 iterations to converge, and defined it to have converged when its error changed by less
iterations. We proved in [15] that the Sturm/Triggs algorithm does eventually converge
to a local minimum of the error. On 9 of the trials, the algorithm had large errors in recovering the
epipoles, with an average error of 44° for these trials. The maximum error of our linear estimate in Step L2
was 0.15°. For a similar set of sequences with rotations of up to 15°, the Sturm/Triggs approach gave
large errors on 25 of the trials, while our approach again gave nearly perfect results. We also found that
the Sturm/Triggs approach often failed to converge correctly when we used the structure from the PUMA
sequence to generate sequences.
These failures are produced by a bad initial choice of the projective depths, which causes the Sturm/Triggs
approach to converge to an incorrect local minima of its error. But, for small forward or backward motions,
there is no way to get good initial estimates for the projective depths-one cannot compute them accurately
from any small number of images, as [30] proposed to do. Also, the projective error surface is flat for
epipoles away from the forward-motion direction [17], and it tends to have local minima near the image
points [14][2][17]. If the initial choice of projective depths corresponds to an epipole far from the image
region, these factors could cause a local minimum to intercept the algorithm and prevent it from converging
to the true global minimum.
4.4 Extension to General Motion
We describe how to extend Algorithm II to any motion [26].
Step L'0. Rescaling.
Step L'1: Homography compensation.
Step L'2: Define H and HL and compute the SVD of DCH . Let NS 3 be the number of large
singular values of DCH , and let
A
be the right and left singular vectors corresponding to these
singular values. Let oe (s) denote the leading singular values of DCH , and define the (2N
oe (1)
Step L'0 is the same as Step L0, and Steps L'1-L'2 are the same as Steps P1, P2 in the general-motion
algorithm.
Figure 5: The errors in recovering the homographies for the Sturm/Triggs approach and Algorithm II,
plotted versus \theta_true , averaged over all images in
each sequence. 'Triangles' show the result of one Sturm/Triggs iteration; 'diamonds' show the Sturm/Triggs
result after convergence or after a cut off of 300 iterations if the algorithm did not converge by this number.
'+' shows the initial estimate of Algorithm II, Step L1, and '*' shows the result after Step L5 of Algorithm
II has converged.
Step L'3a. For each s with 1 s NS ; define
d
A
A
Np+m
. That is, we define the d
m so that H T
hn
d
x
d
y
Step L'3b: Linear estimates of effective translation directions. Let the unit vector T 0(s) be the
translation direction corresponding to
A . For each s, we solve
ni
\Theta
d
in the least-squares sense for T 0(s) .
Step L'3c: Refinement of effective translation directions. For each s, we refine the estimate for
T 0(s) by minimizing
d
with respect to T 0(s) .
Step L'4a: Isolate translational flow. Recall that h_m is an 8 \Theta 2 matrix such that the b-th row of
h_m equals h^{(b)}_m . For each s, we compute a length-8 vector w^{(s)} by solving
pm \GammaT 0(s)
hmw
Compute the
\Delta from H T
Step L'4b: Depth recovery. Solve the linear system
z
oe
for the Z \Gamma1
m and the 4NS constants
Step L'4c: Full translation recovery. We recover the translations via
\Theta \Phi
x
y
z
Step L'5: Improved homography recovery. The same as in Steps L4 and P4 of the algorithms
described above.
Step L'6: Iteration (optional). The same as in Steps P5 and L5 of the algorithms described above.
4.4.1 Discussion
This algorithm is the projective version of our Euclidean algorithm in [26].
Step L'3: Translation-direction recovery. Each leading singular vector corresponds to an effective
translation direction T 0(s) . We recover the T 0(s) exactly as before in Algorithm II.
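A minimal Python/NumPy sketch of this lifting step (ours, not the authors' code): each leading right singular vector is mapped back through H and weighted by its singular value to obtain an effective flow field, which can then be fed to the same linear epipole estimator sketched for Step L2b.

import numpy as np

def effective_flows(A_lead, sing_vals, H):
    # A_lead: N_S x (2N_p - 8) leading right singular vectors; H: (2N_p - 8) x 2N_p.
    # Each returned row is a length-2N_p effective displacement field d^(s).
    return np.array([s * (H.T @ a) for s, a in zip(sing_vals, A_lead)])

# Each row can be treated as a one-image displacement matrix (shape 1 x 2N_p)
# and passed to the Step L2b estimator to recover the corresponding T'^(s).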
Step L'4a,b: Isolate translational flow; depth recovery. For each leading singular vector, the hmw
is the effective infinitesimal homography. As before, one cannot determine this uniquely due to projective
covariance, and we specify it by setting to zero the components of w (s) that would add a plane to the Z \Gamma1
.
The constants a are necessary since Step L'4a recovers the homographies up to an ambiguity.
If s 0 6= s; the Z \Gamma1
m corresponding to the recovered
may differ by an additive plane from the Z \Gamma1
We need the scales (s) to fix the scale of the Z \Gamma1
between different singular vectors.
We introduce the singular value oe in (18) to emphasize the singular vectors with larger singular values,
since these have less noise sensitivity.
4.4.2 Experiments
We created 100 sequences with general motion as in Experiments 6-19, using the PUMA structure, varying
calibration, oe (refer to Table 2). We ran the extended version of
Algorithm II without the iterative refinement of translations in Step L'3c and without the iteration of Step
L'6. Without Steps L'3c and L'6, the extended algorithm is purely linear, with no iteration apart from the
SVD and linear equation solving. We compared our results to those of the Sturm/Triggs (S/T) algorithm,
which we allowed up to 300 iterations to converge. On average, S/T required 135 iterations to converge.
This noniterative version of our algorithm averaged 0.21 seconds of computation, compared to 7.0 seconds
for the iterated Sturm/Triggs (IST) approach. It gave an average error for the projected Z^{-1}_m of 1.39°,
compared to 1.63° for IST. Its average error in computing the epipolar directions was 8.8°, compared to
5.3° for IST. Its average homography error was 0.16° and that of the IST was 0.52°.
A single iteration of S/T gave an average error of 29:7 ffi for the epipoles and 3:2 ffi for the homographies.
We created a second set of 100 sequences with planar motion with the same parameters as the first.
We achieved planar motions by creating the T i as before and then setting the third singular value of the
matrix of translations to zero. The average errors for the projected Z \Gamma1
were 1:62
The average errors for the epipolar directions were 4:4 (IST). The average errors for the
homographies were 0:11
On average, IST took 140 iterations to converge. The first iteration of S/T gave an average error of 33°
for the epipolar directions and 3.7° for the homographies. We also ran IST for 11 iterations, to check its
speed of convergence. This took on average 0.61 seconds and gave average errors of 5.1°, 22.9°, and 2.4° for
the projected inverse depths, epipolar directions, and homographies, respectively.
4.4.3 Approximately Linear or Planar Motions
What happens when we apply Algorithm II or its extension with a too restrictive motion model-for
example, assuming planar motion when the camera centers do not lie exactly on a plane? To the extent that
we can neglect the denominator in (3)-and doing so causes small second order effects when the translations
are small-this causes no additional error in the structure recovery. "Applying our algorithm with a too
restrictive motion model" means using NS of the leading singular vectors when the dimensionality of the
translational motion is greater than NS , i.e., we neglect the smaller components of the translational motion.
This does not prevent us from recovering a good approximation to the translations projected into an NS -
dimensional subspace. Since each singular vector gives a separate estimate of the Z \Gamma1
m , the only effect on
the structure recovery of restricting to a subset of the singular vectors is that we lose information that could
have been used to improve the estimates.
We ran our extended Algorithm II with sequences created as in our above test of this
algorithm on 100 general-motion sequences. As before, we ran the algorithm without the iterative refinement
of translations in Step L'3c and without the iteration of Step L'6. We obtained an average error for the
projected inverse depths of 1:37 ffi , which is less than the result with than the IST error.
We projected the true 3D epipoles into the plane of the epipolar directions recovered by our algorithm. The
average error in recovering the 3D epipoles in this plane was 6:4 ffi . The average error of Step L'5 in recovering
the homographies was 0:20 ffi .
These results suggest that it can be advantageous to ignore the smaller of the leading singular values
and vectors when these have a good deal of noise contamination. In the sequences above, the x, y, and
z components of the translational motion have similar magnitudes, but the smallness of the FOV causes
the signal (i.e., the image displacements) from the z-translations to be much smaller than for the other
directions. One can detect this from the smallness of the third leading singular value of DCH , which for
these sequences is often not much bigger than the noise effects.
Once one has recovered the structure accurately using a subset of the leading singular vectors, one can
return to calculate the translational components corresponding to the neglected leading singular vectors.
Conclusions
We presented fast projective structure-from-motion algorithms which have comparable accuracy to the MLE
and appear superior to the Sturm/Triggs approach [30]. These algorithms work for any motion as long as
the camera displacements are not too big, with |T|/Z < 1/3 [16]. We showed experimentally that the
Sturm/Triggs approach often fails for linear camera motions, especially forward or backward motions. We
speculated that the recent characterizations of the projective least-squares error surface in [17][2][14] may
offer part of the explanation for this. We studied the advantages of a pure projective approach, versus
a mixed Euclidean/projective strategy, for the common situation when the calibration is fixed and partly
known. We showed that algorithms can recover the (projected) inverse depths and homographies even in
projective SFM. We clarified the nature of dominant-plane compensation, showing that it can be considered
as a small-translation approximation rather than a planar-scene approximation.
--R
"Recursive Structure and Motion from Image Sequences using Shape and Depth Spaces,"
"Optimal Structure from Motion: Local Ambiguities and Global Esti- mates,"
"A data set for quantitative motion analysis,"
Matrix Computations
"Direct Multi-Resolution Estimation of Ego-Motion and Structure from Motion,"
"In Defense of the Eight-Point Algorithm,"
"Projective structure and motion from from image sequences using subspace methods,"
"Direct Methods for Recovering Motion,"
"A Unified Approach to Moving Object Detection in 2D and 3D Scenes,"
"Linear subspace methods for recovering translational direction,"
"Direct recovery of shape from multiple views: A parallax based approach,"
"Sensitivity of the Pose Refinement Problem to Accurate Estimation of Camera Parameters,"
"Removal of translation bias when using subspace methods,"
"Three Algorithms for 2-Image and 2-Image Structure from Motion,"
"Fast and Accurate Self-Calibration"
"A Multi-frame Structure from Motion Algorithm under Perspective Projection,"
"A New Structure from Motion Ambiguity,"
Recovering Heading and Structure for Constant-Direction Motion,"
"A Critique of Structure from Motion Algorithms,"
"Direct Multi-Frame Structure from Motion for Hand-Held Cameras,"
"Structure from Motion using Points, Lines, and Intensities,"
"Fast Algorithms for Projective Multi-Frame Structure from Motion,"
"Computing the Camera Heading from Multiple Frames,"
"Multiframe Structure from Motion in Perspective,"
"A Linear Solution for Multiframe Structure from Motion,"
"Structure from Linear and Planar Motions,"
"Simplifying Motion and Structure Analysis Using Planar Parallax and Image Warping,"
"Extracting Structure from Optical Flow Using the Fast Error Search Technique,"
"Fast Partial Search Solution to the 3D SFM Problem,"
"A factorization based algorithm for multi-image projective structure and motion,"
"Shape and motion from image streams under orthography: A factorization method,"
"Factorization methods for projective structure and motion,"
--TR
--CTR
Lionel Moisan , Brenger Stival, A Probabilistic Criterion to Detect Rigid Point Matches Between Two Images and Estimate the Fundamental Matrix, International Journal of Computer Vision, v.57 n.3, p.201-218, May-June 2004
John Oliensis, Exact Two-Image Structure from Motion, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.24 n.12, p.1618-1633, December 2002
P. Anandan , Michal Irani, Factorization with Uncertainty, International Journal of Computer Vision, v.49 n.2-3, p.101-116, September-October 2002
John Oliensis, The least-squares error for structure from infinitesimal motion, International Journal of Computer Vision, v.61 n.3, p.259-299, February/March 2005
Amit K. Roy Chowdhury , R. Chellappa, Stochastic Approximation and Rate-Distortion Analysis for Robust Structure and Motion Estimation, International Journal of Computer Vision, v.55 n.1, p.27-53, October | structure from motion;linear algorithms;bundle adjustment;shape from X;sturm/triggs factorization;low level vision;dominant plane;factorization;projective geometry;projective multiframe structure from motion |
378060 | Dimensionality Reduction in Unsupervised Learning of Conditional Gaussian Networks. | AbstractThis paper introduces a novel enhancement for unsupervised learning of conditional Gaussian networks that benefits from feature selection. Our proposal is based on the assumption that, in the absence of labels reflecting the cluster membership of each case of the database, those features that exhibit low correlation with the rest of the features can be considered irrelevant for the learning process. Thus, we suggest performing this process using only the relevant features. Then, every irrelevant feature is added to the learned model to obtain an explanatory model for the original database which is our primary goal. A simple and, thus, efficient measure to assess the relevance of the features for the learning process is presented. Additionally, the form of this measure allows us to calculate a relevance threshold to automatically identify the relevant features. The experimental results reported for synthetic and real-world databases show the ability of our proposal to distinguish between relevant and irrelevant features and to accelerate learning; however, still obtaining good explanatory models for the original database. | Introduction
One of the basic problems that arises in a great variety of fields, including pattern
recognition, machine learning and statistics, is the so-called data clustering problem
[1, 2, 10, 11, 18, 22]. Despite the different interpretations and expectations it gives rise
to, the generic data clustering problem involves the assumption that, in addition to the
observed variables (also referred to as predictive attributes or, simply, features), there is
a hidden variable. This last unobserved variable would reflect the cluster membership
of every case in the database. Thus, the data clustering problem is also referred to
as an example of learning from incomplete data due to the existence of such a hidden
variable. Incomplete data represents a special case of missing data where all the missing
entries are concentrated in a single variable: the hidden cluster variable. That is, we
refer to a given database as incomplete when all of its cases are unlabelled.
From the point of view adopted in this paper, the data clustering problem may
be defined as the inference of the generalized joint probability density function for a
given database. Concretely, we focus on learning conditional Gaussian networks for
data clustering [25, 26, 27, 36, 37]. Roughly speaking, a conditional Gaussian network
is a graphical model that encodes a conditional Gaussian distribution [25, 26, 27] for the
variables of the domain. Then, when applied to data clustering, it encodes a multivariate
normal distribution for the observed variables conditioned on each state of the cluster
variable.
As we aim to automatically recover the generalized joint probability density function
from a given incomplete database by learning a conditional Gaussian network, this paper
is concerned with the understanding of data clustering as a description task rather than
a prediction task. Thus, in order to encode a description of the original database, the
learnt model must involve all the original features instead of a subset of them. When
unsupervised learning algorithms focus on prediction tasks, feature selection has proven
to be a valuable technique to increase the predictive ability of the elicited models. In this
paper, we demonstrate that, even when focusing on description, feature selection (also
known as dimensionality reduction) can be a profitable tool to improve the performance
of unsupervised learning.
The general framework that we propose to show how unsupervised learning of conditional
Gaussian networks can benefit from feature selection is straightforward and consists
of three steps: (i) identification of the relevant features for learning, (ii) unsupervised
learning of a conditional Gaussian network from the database restricted to the relevant
features, and (iii) addition of the irrelevant features to the learnt network to obtain an
explanatory model for the original database. According to this framework, feature selection
is considered a preprocessing step that should be accompanied by a postprocessing
step to fulfill our objective. This postprocessing step consists of the addition of every
irrelevant feature to the learnt model so that the final model encodes the generalized
joint probability density function for the original data.
To completely define the framework, one should decide on the automatic dimensionality
reduction scheme used to identify the relevant features for learning. This paper
introduces a simple relevance measure to assess the relevance of the features for the
learning process in order to select a subset containing the most salient ones.
Additionally, we propose a heuristic method to automatically qualify every feature as
completely relevant or irrelevant for the learning process. This is carried out by the
automatic calculation of a relevance threshold. Those features with relevance measure
values higher than the relevance threshold are considered relevant for the learning process,
whereas the rest are qualified as irrelevant.
The experimental results reported in this paper show that the framework depicted
above provides us with good explanatory models for the original database while reducing the
cost of the learning process, as only relevant features are used in this process. In addition
to its effectiveness, the simplicity of the automatic dimensionality reduction scheme that
we propose represents a valuable advantage, as it allows the framework to reduce the
dimensionality of the database where learning is performed very efficiently. Besides, our
scheme is not tied to any particular learning algorithm and, therefore, it can be adapted
to most of them.
The remainder of this paper is organized as follows. In Section 2, we introduce conditional
Gaussian networks for data clustering. Section 3 is dedicated to explaining in detail
our automatic dimensionality reduction scheme. We present a new relevance measure
as well as how to automatically discover the relevant and irrelevant features through the
calculation of a relevance threshold. This section also presents how to fit our proposal
into unsupervised learning of conditional Gaussian networks under the framework already
outlined. Some experimental results showing the ability of our proposal to identify
the relevant features and to accelerate the learning process are compiled in Section 4.
Finally, we draw conclusions in Section 5.
2 Conditional Gaussian Networks for Data Clustering
This section starts by introducing the notation used throughout this paper. Then, we give
a formal definition of conditional Gaussian networks. We also present the Bayesian
Structural EM algorithm [13], which is used for explanatory purposes as well as in our
experiments of Section 4 due to its good performance in unsupervised learning
of conditional Gaussian networks.
2.1 Notation
We follow the usual convention of denoting variables by upper-case letters and their
states by the same letters in lower-case. We use a letter or letters in boldface upper-case
to designate a set of variables, and the same boldface lower-case letter or letters to
denote an assignment of a state to each variable in a given set. The generalized joint
probability density function of X is represented as ρ(x). Additionally, ρ(x | y) denotes
the generalized conditional probability density function of X given y. If all the
variables in X are discrete, then p(x) is the joint probability mass function of X.
Thus, p(x | y) denotes the conditional probability mass function of X given y. On
the other hand, if all the variables in X are continuous, then f(x) is the joint
probability density function of X. Thus, f(x | y) denotes the conditional probability
density function of X given y.
2.2 Conditional Gaussian Networks
As we have already mentioned, when facing a data clustering problem we assume the
existence of a random variable X partitioned as X = (Y, C) into
an n-dimensional continuous variable Y and a unidimensional discrete hidden cluster
variable C. X is said to follow a conditional Gaussian distribution [25, 26, 27] if the
distribution of Y, conditioned on each state of C, is a multivariate normal distribution.
That is,

f(y | c) ~ N(y; μ(c), Σ(c)) for every c such that p(c) > 0,        (1)

where μ(c) is the n-dimensional mean vector,
and Σ(c), the n x n variance matrix, is positive definite.
We define a conditional Gaussian network (CGN) for X as a graphical model that
encodes a conditional Gaussian distribution for X [25, 26, 27, 36, 37]. Essentially, CGNs
belong to a class of mixed graphical models introduced for the first time by Lauritzen
and Wermuth [27], and further developed in [25, 26]. This class groups models in which
both discrete and continuous variables can be present and for which the conditional
distribution of the continuous variables given the discrete variables is restricted to be
multivariate Gaussian. More recently, CGNs have been successfully applied to data
clustering [36, 37].
Concretely, a CGN is defined by a directed acyclic graph s (model structure) determining
the conditional (in)dependencies among the variables of Y, a set of local
probability density functions and a multinomial distribution for the variable C. The
model structure yields a factorization of the generalized joint probability density
function for X as follows:

ρ(x) = ρ(y, c) = p(c) f(y | c) = p(c) ∏_{i=1}^{n} f(y_i | pa(s)_i, c),        (2)

where pa(s)_i denotes the configuration of the parents of Y_i, Pa(s)_i, consistent with x.
The local probability density functions and the multinomial distribution are those in
the previous equation, and we assume that they depend on a finite set of parameters θ_s.
Therefore, Equation 2 can be rewritten as follows:
ρ(x | θ_s, s) = p(c | θ_C, s) ∏_{i=1}^{n} f(y_i | pa(s)_i, c, θ_i^c, s),        (3)

where θ_i^c denotes the parameters for the local probability density function of Y_i
when C = c.

Figure 1: Structure, local probability density functions and multinomial distribution for
a CGN with three continuous variables and one binary cluster variable.
If s^h denotes the hypothesis that the conditional (in)dependence assertions implied
by s hold in the true generalized joint probability density function of X, then we obtain
from Equation 3 that

ρ(x | θ_s, s^h) = p(c | θ_C, s^h) ∏_{i=1}^{n} f(y_i | pa(s)_i, c, θ_i^c, s^h).        (4)

In order to encode a conditional Gaussian distribution for X, each local probability
density function of a CGN should be a linear-regression model. Thus, when C = c, each
f(y_i | pa(s)_i, c, θ_i^c, s^h) is a univariate
normal distribution with mean m_i^c + Σ_{Y_j ∈ Pa(s)_i} b_{ji}^c (y_j - m_j^c) and standard deviation
(v_i^c)^{1/2} (v_i^c > 0):

f(y_i | pa(s)_i, c, θ_i^c, s^h) = N(y_i; m_i^c + Σ_{Y_j ∈ Pa(s)_i} b_{ji}^c (y_j - m_j^c), v_i^c).        (5)

Given this form, a missing arc from Y_j to Y_i implies that b_{ji}^c = 0 in
the linear-regression model. The local parameters are θ_i^c = (m_i^c, b_i^c, v_i^c), where
b_i^c = (b_{1i}^c, ..., b_{i-1 i}^c)^t is a column vector.
loop
1. Run the EM algorithm to compute the MAP parameters θ̂_{s_l} for s_l given o
2. Perform search over model structures, evaluating each model structure by
   Score(s : s_l) = E[log p(h, o, s^h) | o, (s_l)^h, θ̂_{s_l}]
3. Let s_{l+1} be the model structure with the highest score among these encountered in the search
4. if Score(s_l : s_l) ≥ Score(s_{l+1} : s_l) then return (s_l, θ̂_{s_l})

Figure 2: A schematic of the BS-EM algorithm.
The interpretation of the components of the local parameters θ_i^c is as
follows: given C = c, m_i^c is the unconditional mean of Y_i, v_i^c is the conditional variance
of Y_i given Pa(s)_i, and b_{ji}^c is a linear coefficient reflecting the strength
of the relationship between Y_j and Y_i. See Figure 1 for an example of a CGN with three
continuous variables and one binary cluster variable.
Note that the model structure is independent of the value of the cluster variable C;
thus, the model structure is the same for all the values of C. However, the parameters
of the local probability density functions do depend on the value of C and they may
differ for the distinct values of the variable C.
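To make the factorization of Equations 2-5 concrete, the following small Python sketch (our own illustration; the structure, parameter values and helper names are hypothetical and not taken from the paper) evaluates log ρ(y, c) for a toy CGN whose local densities are the linear-regression models of Equation 5.

```python
import numpy as np
from scipy.stats import norm

# A toy CGN over Y = (Y1, Y2, Y3) and a binary cluster variable C.
# parents[i] lists the indices in Pa(s)_i; for each cluster c and feature i we
# store the unconditional mean m, the conditional variance v, and one linear
# coefficient b_ji per parent, following Equation 5 (values are hypothetical).
parents = {0: [], 1: [0], 2: [0]}
params = {
    c: {i: {"m": 4.0 * c + i, "v": 1.0, "b": {j: 0.5 for j in parents[i]}}
        for i in parents}
    for c in (0, 1)}
p_c = {0: 0.5, 1: 0.5}          # multinomial distribution of C

def log_local_density(i, y, c):
    """log f(y_i | pa(s)_i, c): the linear-regression model of Equation 5."""
    theta = params[c][i]
    mean = theta["m"] + sum(theta["b"][j] * (y[j] - params[c][j]["m"])
                            for j in parents[i])
    return norm.logpdf(y[i], loc=mean, scale=np.sqrt(theta["v"]))

def log_joint(y, c):
    """log rho(y, c), following the factorization of Equation 2."""
    return np.log(p_c[c]) + sum(log_local_density(i, y, c) for i in parents)

print(log_joint(np.array([0.1, 0.4, -0.2]), c=0))
```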
2.3 Learning Conditional Gaussian Networks from Incomplete
Data
One of the methods for learning CGNs from incomplete data is the well-known Bayesian
Structural EM (BS-EM) algorithm developed by Friedman in [13]. Due to its good
performance, this algorithm has received special attention in the literature and has
motivated several variants [32, 34, 35, 41]. We use the BS-EM algorithm for
explanatory purposes as well as in our experiments presented in Section 4.
When applying the BS-EM algorithm to a data clustering problem, we assume that
we have a database d of N cases, where every case is represented by an
assignment to the n observed variables of the n + 1 variables involved in the problem
domain. So, there are (n + 1)N random variables that describe the database. Let O
denote the set of observed variables, that is, the nN variables that have assigned values.
Similarly, let H denote the set of hidden or unobserved variables, that is, the N variables
that reflect the unknown cluster membership of each case of d.
For learning CGNs from incomplete data, the BS-EM algorithm performs a search
over the space of CGNs based on the well-known EM algorithm [7, 29] and direct optimization
of the Bayesian score. As shown in Figure 2, the BS-EM algorithm is comprised
of two steps: an optimization of the CGN parameters and a structural search for model
selection. Concretely, the BS-EM algorithm alternates between a step that finds the
maximum a posteriori (MAP) parameters for the current CGN structure, usually by
means of the EM algorithm, and a step that searches over CGN structures. At each
iteration, the BS-EM algorithm attempts to maximize the expected Bayesian score instead
of the true Bayesian score.
As we are interested in solving data clustering problems of considerable size, the
direct application of the BS-EM algorithm as it appears in Figure 2 may be an unrealistic
and inefficient solution. In our opinion, the reason for this possible inefficiency is that the
computation of Score(s : s_l) implies a huge computational expense, as it takes account
of every possible completion of the database. It is common to use a relaxed version
of the BS-EM algorithm that just considers the most likely completion of
the database to compute Score(s : s_l) instead of considering every possible completion.
Thus, this relaxed version of the BS-EM algorithm is comprised of the iteration of a
parametric optimization for the current model, and a structural search once the database
has been completed with the most likely completion by using the best estimate of the
generalized joint probability density function of the data so far (current model). That
is, the posterior probability distribution of the cluster variable C for each case of the
database, p(c | y, θ̂_{s_l}, (s_l)^h), is calculated. Then, the case is assigned to the cluster where
the maximum of the posterior probability distribution of C is reached. We use this
relaxed version in our experiments of Section 4.
To completely specify the BS-EM algorithm, we have to decide on the structural
search procedure (step 2 in Figure 2). The usual approach is to perform a greedy hill-climbing
search over CGN structures considering all possible additions, removals and
reversals of a single arc at each point in the search. This structural search procedure
is desirable as it exploits the decomposition properties of CGNs and the factorization
properties of the Bayesian score for complete data. However, any structural search
procedure that exploits these properties can be used.
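A minimal control-flow sketch of this relaxed BS-EM variant is given below. The helpers run_em, cluster_posterior, structure_search and expected_score are assumed to be supplied by the caller; the sketch fixes only the alternation between parameter estimation, hard completion and structural search, not the exact score of [13].

```python
from typing import Any, Callable, Sequence, Tuple

def relaxed_bs_em(data: Sequence[tuple],
                  initial_structure: Any,
                  run_em: Callable,              # (structure, completed_data) -> MAP parameters
                  cluster_posterior: Callable,   # (structure, params, case) -> sequence of P(c | y)
                  structure_search: Callable,    # (completed_data, structure) -> candidate structure
                  expected_score: Callable,      # (structure, completed_data) -> score
                  max_iter: int = 50) -> Tuple[Any, Any]:
    """Control flow of the relaxed BS-EM variant described in the text.
    Cases are assumed to be tuples of feature values; all helpers are assumed,
    not defined here."""
    structure, params, completed = initial_structure, None, list(data)
    for _ in range(max_iter):
        # Parametric optimization: MAP parameters for the current structure.
        params = run_em(structure, completed)
        # Most likely completion: label each case with the cluster that
        # maximizes the posterior distribution of C under the current model.
        completed = [case + (max(enumerate(cluster_posterior(structure, params, case)),
                                 key=lambda t: t[1])[0],)
                     for case in data]
        # Structural search (e.g., greedy hill-climbing) on the completed data.
        candidate = structure_search(completed, structure)
        # Stop when the best candidate does not improve the expected score.
        if expected_score(candidate, completed) <= expected_score(structure, completed):
            break
        structure = candidate
    return structure, params
```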
The log marginal likelihood of the expected complete data, log p(d | s^h), is usually
chosen as the score to guide the structural search. We make use of it in our experiments.
According to [15], under the assumptions that (i) the database restricted to the cluster
variable C, d_C, is a multinomial sample, (ii) the database d is complete, and (iii) the
parameters of the multinomial distribution of C are independent and follow a Dirichlet
distribution, we have that

p(d | s^h) = p(d_C | s^h) ∏_{c ∈ Val(C)} f^c(d_{Y,c} | s^h),        (6)

where d_{Y,c} is the database d restricted to the continuous variables Y and to the cases where
C = c, and Val(C) is the set of values that the cluster variable C can take. The term
p(d_C | s^h) corresponds to the marginal likelihood of a trivial Bayesian network having
only a single node C. It can be calculated in closed form under reasonable assumptions
according to [5]. Moreover, each term of the form f^c(d_{Y,c} | s^h), for all c ∈ Val(C),
represents the marginal likelihood of a domain containing only continuous variables
under the assumption that the continuous data is sampled from a multivariate normal
distribution. Then, these terms can be evaluated in factorable closed form under some
reasonable assumptions according to [15, 16, 19].
3 Automatic Dimensionality Reduction in Unsupervised
Learning of Conditional Gaussian Networks
This section is devoted to the detailed presentation of a new automatic dimensionality
reduction scheme applied to unsupervised learning of CGNs. The section starts with an
introductory revision on the general problem of feature selection, and a brief discussion
on some of the problems that appear when adapting supervised feature selection to the
unsupervised paradigm.
3.1 From Supervised to Unsupervised Feature Selection
In many data analysis applications the size of the data can be large. The largeness can
be due to an excessive number of features, a huge number of instances, or both. For
learning algorithms to work efficiently, and sometimes even effectively, one may need to
reduce the data size. Feature selection has proven to be a valuable technique to achieve
such a reduction of the dimensionality of the data by selecting a subset of features on
which to focus the attention in the subsequent learning process.
In its general form, feature selection is considered a problem of searching for an
optimal subset of the original features according to a certain criterion [3, 23, 28]. The
criterion specifies the details of measuring the goodness of feature subsets as well as
the relevance of each feature. The choice of a criterion is influenced by the purpose
of feature selection. However, what is shared by the different purposes is the desire
to improve the performance of the subsequent learning algorithm, usually in terms of
speed of learning, predictive ability of the learnt models, and/or comprehensibility of
the learnt models.
Roughly speaking, feature selection involves an algorithm to explore the space of
potential feature subsets, and an evaluation function to measure the quality of these
feature subsets. Since the space of all feature subsets of n features has size 2^n, feature
selection mechanisms typically perform a non-exhaustive search. One of the most popular
techniques is the use of a simple hill-climbing search known as sequential selection,
which may be either forward or backward [3, 23, 28]. In the former, the search starts
with an empty set of selected features and, at each step, it adds the best feature among
the unselected ones according to the evaluation function (see the sketch after this paragraph). The process stops when no further
improvement can be made. Similarly, backward sequential selection begins with the full
set of features and, at each step, it removes the worst feature based on the evaluation
function until no improvement is found. As addressed by Doak [9], feature selection
mechanisms based on sequential selection can require a great deal of processing time
for databases with a large number of features. Also, more complex and effective search
algorithms can be used to explore the space of potential feature subsets. The main
advantage of these algorithms over sequential selection is that they avoid getting stuck
in local maxima by means of randomness. However, these approaches usually involve
a huge computational effort. One of the recent works in the field is reported in [20].
In this paper, the authors propose exploring the space of feature subsets according to
an evolutionary, population-based, randomized search algorithm which represents an
instance of the Estimation of Distribution Algorithm (EDA) approach [24].
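As an illustration of sequential selection, a generic forward variant might be sketched as follows; the evaluation function evaluate is a stand-in for whichever filter or wrapper criterion is chosen and is not prescribed here.

```python
from typing import Callable, Iterable, List, Set

def forward_sequential_selection(features: Iterable[str],
                                 evaluate: Callable[[Set[str]], float]) -> List[str]:
    """Greedy forward selection: add the single best feature until no addition
    improves the evaluation function (a generic sketch, not tied to any
    particular criterion)."""
    remaining, selected = set(features), []
    best_score = evaluate(set(selected))
    while remaining:
        candidate, candidate_score = None, best_score
        for f in remaining:
            score = evaluate(set(selected) | {f})
            if score > candidate_score:
                candidate, candidate_score = f, score
        if candidate is None:        # no single addition improves the score
            break
        selected.append(candidate)
        remaining.remove(candidate)
        best_score = candidate_score
    return selected
```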
In [23], the authors distinguish two approaches to the evaluation function for feature
selection: wrapper and filter. The wrapper approach implies a search for an optimal
feature subset tailored to the performance function of the subsequent learning algorithm.
That is, it considers feedback from the performance function of the particular subsequent
learning algorithm as part of the function to evaluate feature subsets. On the other
hand, the filter approach relies on intrinsic properties of the data that are presumed to
affect the performance of the learning algorithm but are not a direct function of its
performance. Thus, the filter approach tries to assess the merits of the different feature
subsets from the data, ignoring the subsequent learning algorithm.
When applied to supervised learning, the main objective of feature selection is the
improvement of the classification accuracy or class label predictive accuracy of the models
elicited by the subsequent learning algorithm, considering only the relevant features
for the task. Independently of the approach used, both filter and wrapper approaches
require the class labels to be present in the data in order to carry out feature selection.
Filter approaches evaluate feature subsets usually by assessing the correlation of every
feature with the class label using different measures [3, 28]. On the other hand, wrapper
approaches rely on the performance of the learning algorithm itself by measuring
the classification accuracy on a validation set to evaluate the goodness of the different
feature subsets [3, 23, 28]. There is some evidence from supervised feature selection
research that wrapper approaches outperform filter approaches [21].
Although feature selection is a central problem in data analysis, as suggested by
the growing amount of research in this area, the vast majority of the research has been
carried out under the supervised learning paradigm (supervised feature selection), paying
little attention to unsupervised learning (unsupervised feature selection). Only a few
works exist addressing the latter problem. In [6], the authors present a method to rank
features according to an unsupervised entropy measure. Their algorithm works as a filter
approach plus a backward sequential selection search. Devaney and Ram [8] propose
a wrapper approach combined with either a forward or a backward sequential selection
search to perform conceptual clustering. In [39], Talavera introduces a filter approach
combined with a search in one step, and a wrapper approach combined with either
a forward or a backward sequential selection search, as feature selection mechanisms in
hierarchical clustering of symbolic data. The filter approach uses the feature dependence
measure defined by Fisher [11]. Whereas the performance criterion considered in [39]
is the multiple predictive accuracy, measured by the average accuracy of predicting the
values of each feature present in the testing data, [40] applies the mechanism comprised
of a filter approach and a search in one step presented in [39] to feature selection in
conceptual clustering of symbolic data, considering the class label predictive accuracy as
performance criterion.
In our opinion, there are two main problems in translating supervised feature selection
into unsupervised feature selection. Firstly, the absence of class labels reflecting
the cluster membership of every case in the database, which is inherent to the unsupervised
paradigm, makes it impossible to use the same evaluation functions as in supervised
feature selection. Secondly, there is no standard accepted performance task for unsupervised
learning. Due to this lack of a unified performance criterion, the meaning of
optimal feature subset may vary from task to task. A natural solution to both problems
is proposed in [39] by interpreting the performance task of unsupervised learning as the
multiple predictive accuracy. This seems a reasonable approach because it extends the
standard accepted performance task for supervised learning to unsupervised learning.
Whereas the former learning comprises the prediction of only one feature, the class,
from the knowledge of many, the latter aims at the prediction of many features from the
knowledge of many [12]. On the other hand, [6, 8, 40] evaluate their unsupervised feature
selection mechanisms by measuring the class label predictive accuracy of the learnt
models over the cases of a testing set after having performed learning in a training set
where the class labels were masked out. The speed of learning and the comprehensibility
of the learnt models are also studied in [8, 39], although they are considered less
important performance criteria.
3.2 How Learning Conditional Gaussian Networks for Data Clustering Benefits from Feature Selection
Our motivation to perform unsupervised feature selection differs from the motivation of
the previously referred papers due to our distinct point of view on the data clustering
problem. When the learnt models for data clustering are primarily evaluated regarding
their multiple or class label predictive accuracy, as occurs in [6, 8, 39, 40], feature selection
has proven to be a valuable technique to reduce the dimensionality of the database
where learning is performed. This usually pursues an improvement of the performance of
the learnt models considering only the relevant features for the task. However, when
the main goal of data clustering is description rather than prediction, as it happens in this paper,
the learnt models must involve all the features that the original database
has in order to encode a description of this database.
It is well known that unsupervised learning of CGNs to solve data clustering problems
is a difficult and time consuming task, even more so when focusing on description, as
all the original features are usually considered in the learning process. With the aim of
overcoming these handicaps, we propose a framework where learning CGNs for data clustering
benefits from feature selection. The framework is straightforward and consists of three
steps: (i) identification of the relevant features for learning, (ii) unsupervised learning
of a CGN from the database restricted to the relevant features, and (iii) addition of the
irrelevant features to the learnt CGN to obtain an explanatory model for the original
database. Thus, feature selection is considered a preprocessing step that should be accompanied
by a postprocessing step to achieve our objective. The postprocessing step
consists of the addition of every irrelevant feature to the elicited model as conditionally
independent of the rest given the cluster variable.
To make the framework applicable to unsupervised learning of CGNs, we should
define relevance. However, the meaning of relevance depends on the particular purpose
of dimensionality reduction due to the lack of a unified performance criterion for data
clustering. In our concrete case, the objective of reducing the dimensionality of the
databases when learning CGNs for data clustering is to decrease the cost of the learning
process while still obtaining good explanatory models for the original data. The
achievement of such a goal can be assessed by comparing, in terms of explanatory power
and runtime of the learning process, a CGN learnt from the given original database and
a CGN elicited when using dimensionality reduction in the learning process.
Such an assessment of the achievement of our objective leads us to make the following
assumption on the consideration of a feature as either relevant or irrelevant for the
learning process: in the absence of labels reflecting the cluster membership of each case
of the database, those features that exhibit low correlation with the rest of the features
can be considered irrelevant for the learning process. Implicitly, this assumption defines
relevance according to our purpose in performing dimensionality reduction. It is important
to note that the assumption is independent of any clustering of the data, so it can be
readily applied without requiring a previous clustering of the database.
The justification of the previous assumption is straightforward. Features with low correlation
with the rest are likely to remain conditionally independent of the rest of the features
given the cluster variable when learning a CGN from the original database. Thus, a
CGN elicited from the original database restricted to the features highly correlated with
the rest is likely to encode the same set of conditional dependence assertions as a CGN
learnt from the original database. The parameters for the local probability density functions
of the features that appear in both CGNs should be similar as well. Furthermore,
if the low correlated features are added to the CGN elicited from the restricted database
as conditionally independent of the rest given the cluster variable, then this final CGN
is likely to encode the same set of conditional dependence and independence assertions
as the CGN learnt from the original data. Thus, the explanatory power of both CGNs
should be almost the same, as the models are likely to be very similar.
Some other works that have successfully made use of a similar assumption are [11,
39, 40]. Although the three works present the assumption in its general form, they only
validate it for conceptual clustering of symbolic data. Our paper is the first, to our
knowledge, that verifies it for continuous domains.
3.2.1 Relevance Measure
In order to assess the relevance of Y_i, i = 1, ..., n, we propose evaluating
the following simple and, thus, efficient relevance measure:

r(Y_i) = (1 / (n - 1)) Σ_{j=1, j≠i}^{n} -N log(1 - r_{ij|rest}^2),        (7)

where n is the number of features in the database, N is the number of cases in the
database, and r_{ij|rest} is the sample partial correlation of Y_i and Y_j adjusted for the remaining
variables. This last quantity can be expressed in terms of the maximum likelihood
estimates of the elements of the inverse variance matrix as r_{ij|rest} = -ŵ_{ij} (ŵ_{ii} ŵ_{jj})^{-1/2}.
Then, the relevance measure value for each feature Y_i, i = 1, ..., n, can be interpreted
as the average likelihood ratio test statistic for excluding an edge between Y_i and any
other feature in a graphical Gaussian model [38]. This means that those features likely
to remain conditionally independent of the rest given the cluster variable as learning
progresses receive low relevance measure values. Thus, this measure shows a reasonable
behavior according to our definition of relevance.
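Under our reading of Equation 7, the relevance measure can be computed directly from the inverse of the maximum likelihood variance matrix, as in the following sketch (function and variable names are ours):

```python
import numpy as np

def relevance_measure(data: np.ndarray) -> np.ndarray:
    """r(Y_i): average, over j != i, of the edge exclusion likelihood ratio
    test statistic -N log(1 - r_{ij|rest}^2), with the sample partial
    correlations taken from the inverse of the ML variance matrix."""
    N, n = data.shape
    W = np.linalg.inv(np.cov(data, rowvar=False, bias=True))  # ML estimate (divisor N)
    d = np.sqrt(np.diag(W))
    partial = -W / np.outer(d, d)            # r_{ij|rest} = -w_ij / sqrt(w_ii w_jj)
    np.fill_diagonal(partial, 0.0)
    stats = -N * np.log(1.0 - partial ** 2)  # likelihood ratio test statistics
    return stats.sum(axis=1) / (n - 1)       # average over the other n - 1 features

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
X[:, 1] += 0.8 * X[:, 0]                     # a correlated pair should score higher
print(relevance_measure(X))
```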
3.2.2 Relevance Threshold
After having calculated the relevance measure value for every feature of the database,
a decreasing relevance ranking of the features can be obtained. Now, we would like to
know how many of them are needed to perform learning appropriately, that is, we would
like to identify, in the relevance ranking, the relevant features for the learning process.
If we knew that only k features were needed, we could simply choose the first k features
in our relevance ranking, namely, those k features with the highest relevance measure
values. However, this kind of knowledge is not at all usual. We propose a novel
and automatic solution to this problem.
The relevance measure value for each feature Y_i, i = 1, ..., n, can be interpreted as
the average value of the likelihood ratio test statistic for excluding a single edge between
Y_i and any other feature in a graphical Gaussian model. Thus, we propose the following
heuristic: the relevance threshold is calculated as the rejection region boundary for an
edge exclusion test in a graphical Gaussian model for the likelihood ratio test statistic
(see [38] for details). This heuristic agrees with our purpose in performing dimensionality
reduction, as it qualifies as irrelevant those features likely to remain conditionally independent
of the rest given the cluster variable as learning progresses. As shown in [38],
the distribution function of the likelihood ratio test statistic can be expressed in terms of
Φ(x), the distribution function of a χ²_1 random variable (the exact expression is given in [38]).
Thus, for a 5 % test, the rejection region boundary (which is considered our relevance threshold)
is obtained by solving the equation that sets this distribution function equal to 0.95.
By a simple manipulation, the resolution of this equation turns into finding the
root of an equation. The Newton-Raphson method, used in our experiments, is only an
example of the suitable methods for solving it. Only those features that exhibit
relevance measure values higher than the relevance threshold are qualified as relevant
for the learning process. The rest of the features are treated as irrelevant.

Evaluate the relevance measure r(Y_i) for each feature Y_i, i = 1, ..., n
Calculate the relevance threshold
Let Y^Rel be the feature subset containing only the relevant features
loop
1. Run the EM algorithm to compute the MAP parameters θ̂_{s^Rel_l} for s^Rel_l given o^Rel
2. Perform search over model structures, evaluating each model structure by the expected score Score(s^Rel : s^Rel_l)
3. Let s^Rel_{l+1} be the model structure with the highest score among these encountered in the search
4. exit loop when Score(s^Rel_l : s^Rel_l) ≥ Score(s^Rel_{l+1} : s^Rel_l)
Let s_final be the final model obtained after adding the irrelevant features to s^Rel_l
Calculate the MAP parameters θ̂_{s_final} for s_final
Return (s_final, θ̂_{s_final})

Figure 3: A schematic of how to fit our automatic dimensionality reduction scheme into
the BS-EM algorithm under the framework presented.
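The "Calculate the relevance threshold" step of Figure 3 can be sketched as a small root-finding problem. The sketch below replaces the corrected distribution function of [38] with the plain asymptotic χ²_1 approximation, so it only illustrates the Newton-Raphson step; the value it returns (about 3.84 for a 5 % test) is an approximation of the threshold actually used in the paper.

```python
import math

def chi2_1_cdf(x: float) -> float:
    """Distribution function of a chi-squared variable with one degree of
    freedom: Phi(x) = erf(sqrt(x / 2))."""
    return math.erf(math.sqrt(x / 2.0))

def relevance_threshold(alpha: float = 0.05, x0: float = 2.0,
                        tol: float = 1e-10) -> float:
    """Newton-Raphson solution of Phi(x) = 1 - alpha, using the asymptotic
    chi^2_1 approximation instead of the small-sample correction of [38]."""
    target = 1.0 - alpha
    x = x0
    for _ in range(100):
        f = chi2_1_cdf(x) - target
        # density of chi^2_1 (derivative of the distribution function)
        df = math.exp(-x / 2.0) / math.sqrt(2.0 * math.pi * x)
        x_new = x - f / df
        if abs(x_new - x) < tol:
            return x_new
        x = max(x_new, 1e-8)          # keep the iterate in the positive domain
    return x

print(relevance_threshold())          # approximately 3.84 for a 5 % test
```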
3.2.3 Fitting Automatic Dimensionality Reduction into Learning
In this subsection, we present how to fit our automatic dimensionality reduction scheme
into the BS-EM algorithm under the general framework previously introduced. However,
it should be noticed that our scheme is not coupled to any particular learning algorithm
and it could be adapted to most of them.
Figure 3 shows that, after the preprocessing step that consists of our automatic
dimensionality reduction scheme, the BS-EM algorithm is applied as usual but restricting
the original database to the relevant features, Y^Rel, and the hidden cluster
variable C. That is, the database where learning is performed, d^Rel, consists of N cases,
where every case is represented by an assignment to the relevant
features. So, there are (r + 1)N random variables that describe the database,
where r is the number of relevant features. We denote the set of observed
variables restricted to the relevant features and the set of hidden variables by O^Rel
and H, respectively (|O^Rel| = rN and |H| = N). Obviously, in Figure 3, s^Rel_l represents
the model structure when only the relevant features are considered in the learning process,
and (s^Rel_l)^h denotes the hypothesis that the conditional (in)dependence assertions
implied by s^Rel_l hold in the true joint probability density function of Y^Rel.
Learning ends with the postprocessing step that comprises the addition of every
irrelevant feature to the model returned by the BS-EM algorithm as conditionally independent
of the rest given the cluster variable. This results in an explanatory model for
the original database. The local parameters for those nodes of the final model associated
with the irrelevant features can be easily estimated after completing the original database
d with the last completion of the restricted database d^Rel.

Figure 4: Example of a TANB model structure with seven predictive attributes.
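Putting the three steps of the framework together, a driver along the following lines could be used; learn_cgn and add_independent_nodes stand for the (relaxed) BS-EM learner and the postprocessing step, respectively, and are assumed rather than defined here.

```python
import numpy as np
from typing import Any, Callable, Tuple

def dimensionality_reduction_framework(
        data: np.ndarray,
        relevance_measure: Callable[[np.ndarray], np.ndarray],
        relevance_threshold: Callable[[], float],
        learn_cgn: Callable[[np.ndarray], Any],
        add_independent_nodes: Callable[[Any, np.ndarray, np.ndarray], Any]
        ) -> Tuple[Any, np.ndarray]:
    """(i) identify the relevant features, (ii) learn a CGN from the restricted
    database, (iii) add the irrelevant features back as conditionally
    independent of the rest given the cluster variable."""
    relevance = relevance_measure(data)
    threshold = relevance_threshold()
    relevant = np.flatnonzero(relevance > threshold)
    irrelevant = np.flatnonzero(relevance <= threshold)
    model = learn_cgn(data[:, relevant])     # e.g., the relaxed BS-EM algorithm
    explanatory_model = add_independent_nodes(model, data[:, irrelevant], irrelevant)
    return explanatory_model, relevant
```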
4 Experimental Evaluation
This section is dedicated to showing the ability of our proposal to perform an automatic
dimensionality reduction that accelerates unsupervised learning of CGNs without degrading
the explanatory power of the final models. In order to reach such a conclusion,
we perform 2 sorts of experiments on synthetic and real-world databases. The first evaluates
the relevance measure introduced in Section 3.2.1 as a means to assess the relevance
of the features for the learning process. The second evaluates the ability of the relevance
threshold, calculated as it appears in Section 3.2.2, to automatically distinguish between
relevant and irrelevant features for learning.
As we have addressed, we use the BS-EM algorithm as our unsupervised learning
algorithm. In the current experiments, we limit the BS-EM algorithm to learn Tree
Augmented Naive Bayes (TANB) models [14, 30, 36]. This is a sensible and usual
decision to reduce the otherwise large search space of CGNs. Moreover, this allows us to
solve efficiently data clustering problems of considerable size, given the well-known
difficulty involved in learning densely connected CGNs from large databases, and the
painfully slow probabilistic inference when working with these.
TANB models constitute a class of compromise CGNs defined by the following condition:
predictive attributes may have, at most, one other predictive attribute as a
parent. Figure 4 shows an example of a TANB model structure. TANB models are
CGNs where an interesting trade-off between efficiency and effectiveness is achieved,
that is, a balance between the cost of the learning process and the quality of the learnt
CGNs [36].
4.1 Databases Involved
There are 2 synthetic and 2 real-world databases involved in our experimental evaluation.
The knowledge of the CGNs used to generate the synthetic databases allows us to assess
accurately the achievement of our objectives. Besides, the real-world databases provide
us with a more realistic evaluation framework.
To obtain the 2 synthetic databases, we constructed 2 TANB models of different
complexity to be sampled. The first TANB model involved 25 predictive continuous
attributes and one 3-valued cluster variable. The first 15 of the 25 predictive attributes
were relevant and the rest irrelevant. The 14 arcs between the relevant attributes were
randomly chosen. The unconditional mean of every relevant attribute was fixed to 0 for
the first value of the cluster variable, 4 for the second and 8 for the third. The linear
coefficients were randomly generated in the interval [-1, 1], and the conditional variances
were fixed to 1 (see Equation 5). The multinomial distribution for the cluster variable C
was uniform. Every irrelevant attribute followed a univariate normal distribution with
mean 0 and variance 1 for each of the 3 values of the cluster variable.
The second TANB model involved 30 predictive continuous attributes and one 3-valued
cluster variable. The first 15 of the predictive attributes were relevant and the rest
irrelevant. The 14 arcs between the relevant attributes were randomly chosen. The
unconditional mean of every relevant attribute was fixed to 0 for the first value of
the cluster variable, 4 for the second and 8 for the third. The linear coefficients were
randomly generated in the interval [-1, 1], and the conditional variances were fixed
to 2 (see Equation 5). The multinomial distribution for the cluster variable C was
uniform. Every irrelevant attribute followed a univariate normal distribution with mean
0 and variance 5 for each of the 3 values of the cluster variable. This second model
was considered more complex than the first due to the higher degree of overlapping
between the probability density functions of each of the clusters and the higher number
of irrelevant attributes.
From each of these 2 TANB models, we sampled 4000 cases for the learning databases
and 1000 cases for the testing databases. In the forthcoming, the learning databases
sampled from these 2 TANB models will be referred to as synthetic1 and synthetic2,
respectively. Obviously, we discarded all the entries corresponding to the cluster variable
for the 2 learning databases and the 2 testing databases.
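For readers who wish to reproduce a database of this flavor, the following sketch samples unlabelled cases from a TANB-like generator in the spirit of synthetic1; the paper's actual models used randomly chosen tree structures and coefficients, so the chain structure and parameter values below are only illustrative assumptions.

```python
import numpy as np

def sample_tanb_like(n_cases: int, n_relevant: int = 15, n_irrelevant: int = 10,
                     cluster_means=(0.0, 4.0, 8.0), seed: int = 0) -> np.ndarray:
    """Sample unlabelled cases from a TANB-like generator: a chain over the
    relevant features with random linear coefficients in [-1, 1], unit
    conditional variances, and standard normal irrelevant features."""
    rng = np.random.default_rng(seed)
    b = rng.uniform(-1.0, 1.0, size=n_relevant)   # hypothetical linear coefficients
    cases = np.empty((n_cases, n_relevant + n_irrelevant))
    clusters = rng.integers(0, len(cluster_means), size=n_cases)  # uniform p(c)
    for k, c in enumerate(clusters):
        m = cluster_means[c]
        y = np.empty(n_relevant)
        y[0] = rng.normal(m, 1.0)
        for i in range(1, n_relevant):            # each feature has its predecessor as parent
            y[i] = rng.normal(m + b[i] * (y[i - 1] - m), 1.0)
        cases[k, :n_relevant] = y
        cases[k, n_relevant:] = rng.normal(0.0, 1.0, size=n_irrelevant)
    return cases                                   # cluster labels are discarded

learning_data = sample_tanb_like(4000)
```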
Another source of data for our evaluation consisted of 2 well-known real-world
databases from the UCI repository of Machine Learning databases [33]:
waveform, which is an artificial database consisting of 40 predictive features. The
last 19 predictive attributes are noise attributes which turn out to be irrelevant
for describing the underlying 3 clusters. We used the data set generator from the
UCI repository to obtain 4000 cases for learning and 1000 cases for testing.
pima, which is a real database containing 768 cases and 8 predictive features. There
are 2 clusters. We used the first 700 cases for learning and the last 68 cases for
testing.
The first database was chosen due to our interest in working with databases of considerable
size (thousands of cases and tens of features). In addition to this, it represented an
opportunity to evaluate the effectiveness of our approach, as the true irrelevant features
were known beforehand. The second database, considerably smaller in both number
of cases and number of features, was chosen to get feedback on the scalability of our
dimensionality reduction scheme. Obviously, we deleted all the cluster entries for the 2
learning databases and the 2 testing databases.
4.2 Performance Criteria
There exist 2 essential purposes when focusing on the explanatory power or generalizability
of the learnt models. The first purpose is to summarize the given databases into the
learnt models. The second purpose is to elicit models which are able to predict unseen
instances [28]. Thus, the explanatory power of the learnt CGNs should be assessed
by evaluating the achievement of both purposes. The log marginal likelihood, sc final,
and the multiple predictive accuracy, L(test), of the learnt CGNs seem to be sensible
performance measures for the first and the second purpose, respectively. The multiple
predictive accuracy is measured as the logarithmic scoring rule of Good [17]:

L(test) = (1 / |d_test|) Σ_{y ∈ d_test} log f(y | θ̂_s, s^h),        (10)

where d_test is a set of test cases and |d_test| is the number of test cases. The higher the
value of this criterion, the higher the multiple predictive accuracy of the learnt CGNs.
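Equation 10 is straightforward to compute once the learnt model exposes its generalized density; the sketch below assumes a log-joint function log ρ(y, c), such as the one in the earlier CGN sketch, and combines the cluster components with a log-sum-exp.

```python
import numpy as np
from typing import Callable, Iterable, Sequence

def log_density_from_cgn(y: np.ndarray,
                         log_joint: Callable[[np.ndarray, int], float],
                         clusters: Iterable[int]) -> float:
    """log f(y) = log sum_c rho(y, c), computed stably with log-sum-exp."""
    vals = np.array([log_joint(y, c) for c in clusters])
    m = vals.max()
    return float(m + np.log(np.exp(vals - m).sum()))

def multiple_predictive_accuracy(test_cases: Sequence[np.ndarray],
                                 log_density: Callable[[np.ndarray], float]) -> float:
    """L(test): average log-density assigned by the learnt CGN to unseen cases
    (the logarithmic scoring rule of Good)."""
    return sum(log_density(y) for y in test_cases) / len(test_cases)
```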
Note that L(test) is not the primary performance measure but 1 of the 2 measures used to
assess the explanatory power of the learnt CGNs. When focusing on description, L(test)
is extremely necessary to detect models that, suffering from overfitting, have high sc final
values although they are not able to generalize the learning data to unseen instances.
It should be noted that Equation 10 represents a kind of probabilistic approach to the
standard multiple predictive accuracy, understanding the latter as the average accuracy
of predicting the value of each feature present in the testing data. When the data
clustering problem is considered as the inference of a generalized joint probability density
function from the learning data via unsupervised learning of a CGN, the probabilistic
approach presented in Equation 10 is more appropriate than the standard multiple
predictive accuracy. This can be illustrated with a simple example. Let us imagine 2
different CGNs that exhibit the same standard multiple predictive accuracy but different
multiple predictive accuracy measured as the logarithmic scoring rule of Good. This
would reflect that the generalized joint probability density functions encoded by the
2 CGNs are different. Moreover, this would imply that 1 of the 2 CGNs generalizes
the learning data to unseen instances better (i.e., the likelihood of the unseen instances
is higher) than the other, although their standard multiple predictive accuracy is the
same. Thus, the standard multiple predictive accuracy would not be an appropriate
performance criterion in this context, as it would be unable to distinguish between these
models. Some other works that have made use of the logarithmic scoring rule of Good
to assess the multiple predictive accuracy are [31, 34, 36, 37, 41].
The runtime of the overall learning process, runtime, is also considered as valuable
information. Every runtime reported includes the runtimes of the preprocessing step
(dimensionality reduction), the learning algorithm and the postprocessing step (addition of the
irrelevant features).
All the results reported are averages over 10 independent runs for the synthetic1,
synthetic2 and waveform databases, and over 50 independent runs for the pima database
due to its smaller size. The experiments were run on a Pentium 366 MHz computer.
Figure 5: Relevance measure values for the features of the databases used. The dashed
lines correspond to the relevance thresholds.
4.3 Results: Relevance Ranking
Figure 5 plots the relevance measure values for the features of each of the 4 databases
considered. Additionally, it shows the relevance threshold (dashed line) for each database.
In the case of the synthetic databases, the 10 true irrelevant features of the synthetic1
database and the 15 of the synthetic2 database clearly appear with the lowest relevance
measure values.
In the case of the waveform database, it may be interesting to compare the graph of
Figure 5 with other graphs reported in [4, 40, 42] for the same database. Caution should
be used, as a detailed comparison is not advisable due to the fact that relevance is defined
in different ways depending on the particular purpose of each of these works. Moreover,
the work by Talavera [40] is limited to conceptual clustering of symbolic data; thus, the
original waveform database was previously discretized. However, it is noticeable that
the 19 true irrelevant features appear plotted with low relevance values in the 4 graphs.
Although the shape of the graphs restricted to the 21 relevant features varies for the 3
works reported ([4, 40, 42]), these agree with our graph and consider the first and last
few of these relevant features less important than the rest of the 21. The shape of our
graph is slightly closer to those that appear in [4, 42] than to the one plotted in [40].
Then, we can conclude that the relevance measure proposed exhibits a desirable
behavior for the databases where the true irrelevant features are known, as it clearly
assigns low relevance values to them. The following subsection evaluates whether these values
are low enough to automatically distinguish between relevant and irrelevant features
through the calculation of a relevance threshold.
Figure 6 shows the log marginal likelihood (sc final) and multiple predictive accuracy
(L(test)) of the final CGNs for the 4 databases considered as functions of the number
of features selected as relevant for learning. In addition to this, Figure 7 reports on the
runtime needed to learn the final CGNs as a function of the number of features selected
as relevant for learning. The selection of k features as relevant means the selection of
the first k features of the decreasing relevance ranking obtained for the features of each
concrete database according to their relevance measure values. Thus, in this first part of
the experimental evaluation we do not perform an automatic dimensionality reduction.
Instead, we aim to study performance as a function of the number of features involved
in learning. This allows us to evaluate the ability of our relevance measure to assess the
relevance of the features for the learning process.

Figure 6: Log marginal likelihood (sc final) and multiple predictive accuracy (L(test)) of
the final CGNs for the databases used as functions of the number of features selected
as relevant from a decreasing relevance ranking.

Figure 7: Runtime needed to learn the final CGNs for the databases used as a function
of the number of features selected as relevant from a decreasing relevance ranking.
In general terms, Figure 6 confirms that our relevance measure is able to induce an
effective decreasing relevance ranking of the features of each database considered. That
is, the addition of the features that have low relevance measure values (the last features of
the rankings) does not imply a significant increase in the quality of the final models;
in some cases, it even hurts the explanatory power. Thus, this figure confirms that the
assumption that low correlated features are irrelevant for the learning process works
very well on the continuous domains considered. On the other hand, the addition of
these irrelevant features tends to increase the cost of the learning process measured as
runtime (see Figure 7).
Particularly interesting are the results for the synthetic databases, where the original
models are known. The selection of true irrelevant features to take part in learning does
not produce better models but increases the runtime of the learning process. Also, it is
known that the last 19 of the 40 features of the waveform database are true irrelevant
features. According to the relevance measure values for the features of the waveform
database (see Figure 5), all the 19 true irrelevant features would appear in the last 21
positions of the decreasing relevance ranking. Furthermore, it can be appreciated from
Figure 6 that the addition of these 19 irrelevant features does not significantly increase
the explanatory power of the final CGNs. The results obtained for the pima database,
where there is no knowledge of the existence of true irrelevant features, share the fact
that using all the features in the learning process degrades the quality of the final models
as well as making the learning process slower. Thus, the explanatory power of the final
CGNs appears to be non-monotonic with respect to the addition of features as relevant
for learning. Hence the need for automatic tools for discovering irrelevant features that
may degrade the effectiveness and enlarge the runtime of learning.

Table 1: Comparison of the performance achieved when learning CGNs from the original
databases and when our automatic dimensionality reduction scheme is applied. For each
database (synthetic1, synthetic2, waveform and pima), the table reports the number of
original features, the number of relevant features, and the sc final, L(test) and runtime
values obtained both with the original dimensionality and with dimensionality reduction.
4.4 Results: Automatic Dimensionality Reduction
Figure 5 shows the relevance threshold (dashed line), calculated as shown in Section 3.2.2,
for each of the databases considered. Only those features that exhibit relevance measure
values higher than the relevance threshold are qualified as relevant. The rest of the
features are considered irrelevant for learning.
It is interesting to notice that, for the 2 synthetic databases, all the true irrelevant
features are identified independently of the complexity of the sampled model. It should
be remembered that the synthetic2 database was sampled from a model more complex
than the one used to generate the synthetic1 database. The results obtained for the
waveform database are also especially appealing, as the 19 true irrelevant features are
correctly identified. Moreover, our scheme considers 8 features of the remaining 21
features also as irrelevant. This appears to be a sensible decision, as these 8 features
correspond to the first 4 and the last 4 of the 21 relevant features. Remember that
[4, 40, 42] agree on this point: the first and last few of the 21 relevant features are less
important than the rest of the relevant features.
Table 1 compares, for the 4 databases considered, the performance achieved when
no dimensionality reduction is carried out and the performance achieved when our automatic
dimensionality reduction scheme is applied to learn CGNs. The column relevant
indicates the number of relevant features automatically identified by our scheme for each
database (see Figure 5). It clearly appears from the table that our scheme is able to
automatically set up a relevance threshold that induces a saving in runtime while still
obtaining good explanatory models. The application of our scheme as a preprocessing
step for the BS-EM algorithm (Figure 3) provides us with a saving of runtime over the
original BS-EM algorithm that reaches 22 % for the synthetic1 database, with a corresponding
saving for the synthetic2 database (see Table 1). Moreover, the explanatory power of the CGNs elicited from
the original synthetic databases and of the CGNs obtained when using the automatic dimensionality
reduction scheme are exactly the same.
For the waveform database, our automatic dimensionality reduction scheme proposes
a reduction of the number of features of 68 %: only 13 out of the 40 original features are
considered relevant. This reduction induces a gain in terms of runtime of 58 %, whereas
our scheme does not significantly hurt the quality of the learnt models. On the other
hand, the CGNs learnt with the help of our automatic dimensionality reduction scheme
from the pima database exhibit, on average, a more desirable behavior than the CGNs
elicited from the original pima database: higher log marginal likelihood and multiple
predictive accuracy, whereas the runtime of the learning process is shortened.
5 Conclusions
The main contribution of this paper is twofold. First, the proposal of a novel automatic
scheme to perform unsupervised dimensionality reduction, comprised of (i) a simple and
efficient measure to assess the relevance of every feature for the learning process, and
(ii) a heuristic to calculate a relevance threshold to automatically distinguish between
relevant and irrelevant features. Second, the presentation of a framework where unsupervised
learning of CGNs benefits from our proposed scheme in order to obtain models that describe
the original databases. This framework proposes performing learning taking into
account only the relevant features identified by the automatic dimensionality reduction
scheme presented. Then, every irrelevant feature is incorporated into the learnt model
in order to obtain an explanatory CGN for the original database.
Our experimental results for synthetic and real-world domains have suggested great
advantages derived from the use of our automatic dimensionality reduction scheme in
unsupervised learning of CGNs: a huge decrease in the runtime of the learning process,
and final models that appear to be as good as and, sometimes, even
better than the models obtained using all the features in the learning process. Additionally,
the experimental results have proven that the assumption that we made, once
relevance was defined according to our purpose in performing dimensionality reduction,
works fairly well in the continuous domains considered.
This paper has primarily focused on the gain in efficiency, without degrading the
explanatory power of the final models, derived from the use of the referred scheme as a
preprocessing for the learning process. However, it is worth noticing that the identification
of the relevant and irrelevant features for the learning process allows us to reach a
better comprehensibility and readability of the problem domains and the elicited models.
Few works have addressed the problem of unsupervised feature selection as a preprocessing
step [6, 8, 39, 40]. However, all of them differ from our work. Whereas we
focus on the description of the original database, [6, 8, 40] are interested in the class
label predictive accuracy and [39] in the multiple predictive accuracy. This makes a fair
comparison between these different approaches impossible. Moreover, our automatic
dimensionality reduction scheme offers a series of advantages over the other existing
mechanisms. In addition to its simplicity and efficiency, our scheme is not coupled to
any particular learning algorithm and it could be adapted to most of them. On the
other hand, the existing unsupervised feature selection mechanisms based on wrapper
approaches are tailored to the performance criterion of the particular subsequent learning
algorithm (see [8, 39]) and, thus, usually require a great deal of processing time for
large databases. Furthermore, [6, 40] propose feature selection mechanisms based on filter
approaches that only provide the user with a ranking of the features, leaving open the
problem of determining how many features should be used to perform a proper learning.
Our scheme is able to automatically distinguish between relevant and irrelevant features
in the relevance ranking. Then, one line of future research could be the extension of our
current contribution to categorical data in order to overcome the problem of determining
the number of features to be used by the subsequent learning algorithm.
We are aware that the contribution presented in this paper is unable to deal properly
with domains where redundant features exist (i.e., features whose values can be
exactly determined from the rest of the features). The reason is that the relevance
measure introduced in Section 3.2.1 scores each feature separately instead of scoring groups of
features. Thus, redundant features would be considered relevant although they would
not provide the learning process with additional information over the true relevant features.
Detecting these features is necessary because they have an effect on the runtime
of the learning process. One of the lines of research that we are currently exploring is
concerned with the extension of the general framework depicted in this paper to the
case where redundant features exist. Our current work is focused on the derivation of a
new relevance measure to assess the gain in relevance of each feature in relation to the
features considered relevant so far.
Acknowledgments
J.M. Peña wishes to thank Dr. Steve Ellacott for his interest in this work and his useful
comments. He also made it possible to visit the School of Computing and Mathematical
Sciences of the University of Brighton, Brighton, United Kingdom. The authors would
also like to thank the two anonymous reviewers whose useful comments to a previous
version of this paper have helped us to improve our manuscript.
This work was supported by the Spanish Ministry of Education and Culture (Min-
isterio de Educación y Cultura) under grant AP97 44673053.
--R
Analysis for Applications
Pattern Classi
Clustering Algorithms
Finding Groups in Data
Estimation of Distribution Algorithms.
Feature Selection for Knowledge Discovery and Data Mining
The EM Algorithm and Extensions
UCI repository of Machine Learning databases
--TR
--CTR
Martin H. C. Law , Mario A. T. Figueiredo , Anil K. Jain, Simultaneous Feature Selection and Clustering Using Mixture Models, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.26 n.9, p.1154-1166, September 2004
Lance Parsons , Ehtesham Haque , Huan Liu, Subspace clustering for high dimensional data: a review, ACM SIGKDD Explorations Newsletter, v.6 n.1, p.90-105, June 2004
J. M. Peña , J. A. Lozano , P. Larrañaga, Globally Multimodal Problem Optimization Via an Estimation of Distribution Algorithm Based on Unsupervised Learning of Bayesian Networks, Evolutionary Computation, v.13 n.1, p.43-66, January 2005 | data clustering;edge exclusion tests;conditional Gaussian networks;feature selection |
378472 | An efficient algorithm for finding the CSG representation of a simple polygon. | We consider the problem of converting boundary representations of polyhedral objects into constructive-solid-geometry (CSG) representations. The CSG representations for a polyhedron P are based on the half-spaces supporting the faces of P. For certain kinds of polyhedra this problem is equivalent to the corresponding problem for simple polygons in the plane. We give a new proof that the interior of each simple polygon can be represented by a monotone boolean formula based on the half-planes supporting the sides of the polygon and using each such half-plane only once. Our main contribution is an efficient and practical O(n log n) algorithm for doing this boundary-to-CSG conversion for a simple polygon of n sides. We also prove that such nice formulæ do not always exist for general polyhedra in three dimensions. | desirable that such representations be compact and support efficient simulation of real-world
operations on the objects. Over the years two different styles of representation have
emerged; these are used by nearly all geometric modeling systems currently in existence.
The first style of representation describes an object by the collection of surface elements
forming its boundary: this is a boundary representation. In effect, boundary representations
reduce the problem of representing a solid object to one of representing surface elements.
This is a somewhat simpler problem, since it is set in one dimension less. The second
style of representation describes a solid object as being constructed by regularized boolean
operations on some simple primitive solids, such as boxes, spheres, cylinders, etc. Such
a description is referred to as a constructive solid geometry representation, or CSG rep-
resentation, for short. Each style of representation has its advantages and disadvantages,
depending on the operations we wish to perform on the objects. The reader is referred to
one of the standard texts in solid modeling [13, 16], or the review article [23] for further
details on these representations and their relative merits.
If one looks at modelers in either camp, for example the romulus [16], geomod [25],
and medusa [17] modelers of the boundary persuasion, or the padl-1 [27], padl-2 [2],
and gmsolid [1] modelers of the CSG persuasion, one nearly always finds provisions for
converting to the other representation. This is an important and indispensable step that
poses some challenging computational problems 1 . In this paper we will deal with certain
cases of the boundary-to-CSG conversion problem and present some efficient computational
techniques for doing the conversion. Our approach is based on that of Peterson [21].
Peterson considered the problem of obtaining a CSG representation for simple polyhedral
solids, such as prisms or pyramids (not necessarily convex), based on the half-spaces
supporting the faces of the solid. Such solids are, in effect, two-dimensional objects (think
of the base of the prism or pyramid) in which the third dimension has been added in a very
simple manner. Thus Peterson considered the problem of finding CSG representations for
simple polygons in the plane. (A related problem is that of finding convex decompositions
of simple polygons [4, 18, 20, 26, 28].) By a complicated argument, Peterson proved that
every simple polygon in the plane admits of a representation by a boolean formula based on
the half-planes supporting its sides. This formula is especially nice in that each of the supporting
half-planes appears in the formula exactly once, and hence the formula is monotone
(no complementation is needed). For this reason we focus our attention on such formulae,
which we call Peterson-style formulae.
A monotone formula for a polygon or polyhedron is desirable because it makes it possible
to predict how small perturbations of the defining half-spaces affect the overall volume. This
is an important property in various applications of solid modeling, such as machine tooling.
When machining a peg that has to fit in a hole, for example, it is important to know that
any errors in the position of the defining half-spaces will result in making the peg only
smaller. If a half-space and its negation both enter a formula, then a perturbation of the
defining plane can make the volume smaller in one part of the object and simultaneously
To quote from Requicha [23]: "...the relative paucity of known conversion algorithms poses significant
constraints on the geometric modeling systems that we can build today."
larger in another part.
In this paper we first give a short new proof that every polygon has a Peterson-style
formula (Section 3). Peterson did not explicitly consider algorithms for deriving this CSG
representation from the polygon. A naive implementation based on his proof would require
quadratic time for the conversion, where n is the size (number of sides or vertices) of the polygon.
We provide in this paper an efficient Θ(n log n) algorithm for doing this boundary-to-CSG
conversion (Section 4). We regard this algorithm as the major contribution of our paper;
the algorithm uses many interesting techniques from the growing field of computational
geometry [5, 22]. Nevertheless, it is very simple to code-its subtlety lies in the analysis of
the performance and not in the implementation. Finally (Section 5), we show that Peterson-
style formulae are not always possible for general polyhedra in three dimensions and discuss
a number of related issues.
We believe that the work presented in this paper illustrates how several of the concepts
and techniques of computational geometry can be used to solve problems that are of clear
importance in solid modeling and computer graphics. The solution that we obtain is both
mathematically interesting and practical to implement. We expect to see more such applications
of computational geometry to other areas in the future and hope that this paper will
motivate some researchers in the graphics area to study computational geometry techniques
more closely.
Formulation and history of the problem
Let P be a simple polygon in the plane; in this context, simple means non-self-intersecting.
By the Jordan curve theorem, such a polygon subdivides the plane into two regions, its
interior and its exterior. In general, we identify the polygon with its interior. Let us orient
all the edges of P so that the interior of P lies locally to the right of each edge, and give
each such oriented edge a name. We will call these names literals. To each literal we also
give a second meaning. A literal m also represents the half-plane bounded by the infinite
line supporting the edge m and extending to the right of that line. We will speak of such a
half-plane as supporting the polygon (even though P might
not all lie in the half-plane). See Figure 1 for an illustration
of these concepts.
Figure 1: A simple polygon P and the half-plane supporting side m
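As a concrete illustration of this convention (ours, not part of the original paper), the following
Python sketch tests whether a point lies in the half-plane to the right of an oriented edge; the
function name and the choice of a cross-product test are our own assumptions.

def right_of(edge_start, edge_end, point):
    """True if `point` lies in the closed half-plane to the right of the
    edge oriented from `edge_start` to `edge_end` (2D points as (x, y))."""
    ax, ay = edge_start
    bx, by = edge_end
    px, py = point
    # Cross product of (b - a) with (p - a): negative means p is to the right,
    # zero means p is on the supporting line.
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax) <= 0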
Notice that, for each point x of the plane, if we know
whether x lies inside or outside each of the half-planes supporting
, then we know in fact if x is inside P . This fol-
lows, because each of the regions into which the plane is
subdivided by the infinite extensions of the sides of P lies
either wholly inside P , or wholly outside it. As a result,
there must exist a boolean formula whose atoms are the
literals of P and which expresses the interior of P . For
example, if P is convex, then this formula is simply the
"and" of all the literals.
Since "and"s and "or"s are somewhat cumbersome to write, we will switch at this point
to algebraic notation and use multiplication conventions for "and" and addition conventions
for "or". Consider the two simple polygons shown in Figure 2. Formulae for the two polygons
are uv(w(x + y) + z) for polygon (a) and uvw(x + y + z) for polygon (b). The associated
boolean expression trees are also shown in Figure 2. Notice that these are Peterson-style
formulae: they are monotone and use each literal exactly once. The reader is invited at this
point to make sure that these formulae are indeed correct.
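One way to check such a formula mechanically is to evaluate it at sample points and compare
against an independent point-in-polygon test. The sketch below (ours, in Python) represents a
formula as nested tuples whose leaves are oriented edges, and evaluates it with the right_of test
given earlier; this representation is only an illustrative assumption.

def eval_formula(formula, point):
    """Evaluate a monotone boolean formula over half-planes at `point`.
    A formula is either a leaf (start, end), meaning the half-plane to the
    right of that oriented edge, or a tuple ('and' | 'or', sub1, sub2, ...)."""
    if formula[0] in ('and', 'or'):
        values = (eval_formula(sub, point) for sub in formula[1:])
        return all(values) if formula[0] == 'and' else any(values)
    start, end = formula          # leaf: an oriented edge of the polygon
    return right_of(start, end, point)

# For a convex polygon, the formula is simply ('and', e1, e2, ..., ek),
# where each ei is one oriented edge.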
Figure 2: Formulae for two polygons
A more complex formula for a simple polygon was given by Guibas, Ramshaw, and
Stolfi [9] in their kinetic-framework paper. That style of formula for the two polygons of
Figure 2 is uv̄ ⊕ vw̄ ⊕ wx̄ ⊕ x̄y ⊕ ȳz ⊕ zū. Here ⊕ denotes logical "xor" and the overbar
denotes complementation. As explained in [9], that type of formula is purely local, in that it
depends only on the convex/concave property of successive angles of the polygon. The rule
should be obvious from the example: as we go around, we complement the second literal
corresponding to a vertex if we are at a convex angle, and the first literal if we are at a
concave angle. Thus the formula is the same for both of the example polygons.
Although a formula of this style is trivial to write down, it is not as desirable in solid
modeling as a Peterson-style formula, because of the use of complementation and the "xor"
operator. The Peterson formula is more difficult to derive, because it captures in a sense how
the polygon nests within itself and thus is more global in character. It can be viewed naively
as an inclusion-exclusion style formula that reflects this global structure of the polygon. We
caution the reader, however, that this view of the Peterson formula is too naive and led us
to a couple of flawed approaches to this problem.
In general there are many boolean formulae that express a simple polygon in terms of
its literals. Proving the equivalence of two boolean formulae for the same polygon is a
non-trivial exercise. The reason is that of the 2^n primitive "and" terms one can form on n
literals (with complementation allowed), only Θ(n²) are non-zero, in the sense that they
denote non-empty regions of the plane. Thus numerous identities hold and must be used
in proving formula equivalence.
The decomposition of a simple polygon into convex pieces [4, 18, 20, 26, 28] gives another
kind of boolean formula for the polygon, one in which the literals are not half-planes, but
convex polygons. Depending on the type of decomposition desired, the convex polygons
may or may not overlap; in the overlapping case, the formula may or may not contain
negations. If we expand the literals in a convex decomposition into "and"s of half-planes,
the result need not be a Peterson-style formula: negations, repeated literals, and half-planes
that do not support the polygon are all possible.
If we leave the boolean domain and allow algebraic formulae for describing the characteristic
function of a simple polygon, then such formulae that are purely local (in the same
sense as the "xor" formula above) are given in a paper of Franklin [6]. Franklin describes
algebraic local formulae for polyhedra as well. We do not discuss this further here as it goes
beyond the CSG representations we are concerned with.
3 The existence of monotone formulae
In this section we will prove that the interior of every simple polygon P in the plane can be
expressed by a Peterson-style formula, that is, a monotone boolean formula in which each
literal corresponding to a side of P appears exactly once.
As it turns out, it is more natural to work with simple bi-infinite polygonal chains (or
chains, for short) than with simple polygons. An example of a simple bi-infinite chain c is
shown in Figure 3. Such a chain c is terminated by two semi-infinite rays and in between
contains an arbitrary number of finite sides. Because it is simple and bi-infinite, it subdivides
the plane into two regions. We will in general orient c in a consistent manner, so we can
speak of the region of the plane lying to the left of c, or to the right of c, respectively. By
abuse of language, we will refer to these regions as half-spaces.
Figure 3: A simple bi-infinite chain
The interior of a simple polygon P can always be viewed as the intersection of two such
chain half-spaces. Let ' and r denote respectively the leftmost and rightmost vertex of P .
As in
Figure
4, extend the sides of P incident to ' infinitely far to the left, and the sides
incident to r infinitely far to the right. It is clear that we thus obtain two simple bi-infinite
chains and that the interior of P is the intersection of the half-space below the upper chain
with the half-space above the lower chain. Notice also that the literals used by the upper
and lower chains for these two half-spaces form a partition of the literals of P . Thus it
suffices to prove that a chain half-space admits of a monotone formula using each of its
literals exactly once.
Figure 4: The interior of a simple polygon P
If a chain consists of a single line, then the claim is trivial: the formula is the single
literal that defines the line.
If a chain c has more than one edge, we prove the claim by showing that there always
exists a vertex v of c such that if we extend the edges incident to v infinitely far to the other
side of v, these extensions do not cross c anywhere. In particular, the extensions create two
new simple bi-infinite chains c 1 and c 2 that, as before, partition the literals used by c. See
Figure
5 for an example. It is easy to see that the half-space to the right (say) of c is then
either the intersection or the union of the half-spaces to the right of c 1 and c 2 . It will be
the intersection if the angle of c at v in the selected half-space is convex (as is the situation
in
Figure
5), and the union if this angle is concave.
The existence of the desired vertex v is relatively easy to establish. Of the two half-spaces
defined by c there is one that is bounded by the two semi-infinite rays in a "convex"
fashion. What we mean by this is that when we look at this half-space from a great distance
above the xy-plane (so we can discern only the semi-infinite rays bounding it) it appears as
a convex angle ( ). For example, in Figure 5, the right half-space R of c is the convex
one. Now consider the convex hull h(R), the intersection of all half-planes containing R.
This hull is an unbounded polygonal region whose vertices are vertices of c. Clearly at least
one vertex of c lies on h(R), and any such vertex is a good vertex at which to break c; that
is, it can serve as the vertex v of the previous argument. The reason is clear from Figure 6:
at any such vertex the extensions of the sides incident upon it cannot cross c again.
It is worth remarking here that the determination of the splitting vertex v in the manner
above is not at all influenced by whether we are trying to obtain a boolean formula for the
right half-space of c or the left half-space of c. The choice of which half-space to take the
convex hull of is determined solely by the behavior of the semi-infinite rays of c. Indeed, if we
were to choose the wrong ("concave") half-space, its convex hull would be the whole plane
and would contain no vertices. We can summarize the situation by saying that we always
Figure 5: The splitting vertex v for a chain c
Figure 6: The convex hull h(R)
split at a vertex of the convex hull of the polygonal chain c; this definition automatically
selects the correct half-space.
By recursively applying this decomposition procedure until each subchain becomes a
single bi-infinite straight line we can derive the following theorem.
Theorem 3.1 Every half-space bounded by a simple bi-infinite polygonal chain has a monotone
boolean formula using each of the literals of the chain exactly once. The same holds
for the interior of any finite simple polygon.
If we are given a polygonal chain c, such as the one in Figure 3, then certain aspects
of the boolean formula of (say) the right half-space R of c can be immediately deduced by
inspection. For example, it follows from the arguments above that there exists a boolean
formula for R that not only uses each literal exactly once, but in fact contains these literals
in the order in which they appear along c; if we were to omit the boolean operators and
parentheses in the formula, we would just get a string of all the literals in c in order.
Furthermore, the boolean operators between these literals are easy to deduce. As the
previous discussion makes clear, between two literals that define a convex angle in R the
corresponding operator has to be an "and", and between two literals that define a concave
angle the corresponding operator has to be an "or". Thus, with parentheses omitted, the
boolean formula for the chain c in Figure 3 must consist of its literals in order, with the operator
at each vertex determined by whether that angle is convex or concave.
Figure 7: Our methods cannot obtain all valid formulae for this polygon
The preceding paragraph shows that the crux of the
difficulty in computing a boolean formula is to obtain the
parenthesization, or equivalently, the sequence of the appropriate
splitting vertices. We call this the recursive chain-
splitting problem for a simple bi-infinite chain. The solution
of this problem is the topic of the next section. For
the chain of Figure 3 a valid solution is
We conclude by noticing that a procedure for solving
this problem may be non-deterministic, since in general we
will have a choice of several splitting vertices. We can in
fact simultaneously split at any subset of them. Still, not
all valid Peterson-style formulae for a simple polygon are
obtained in this fashion. Our formulae all have the property
that the literals appear in the formula in the same
order as in the polygon. Figure 7 shows an example of a
Peterson-style formula where that is not true: a valid formula for the polygon shown is
4 The conversion algorithm
We have seen in Section 3 that we can find a monotone boolean formula for a simple polygon
if we can solve the following recursive chain-splitting problem:
Given a simple bi-infinite polygonal chain with at least two edges, find a vertex
z of its convex hull. Split the chain in two at z and extend to infinity the two
edges incident to z, forming two new chains. Because z is on the convex hull,
both chains are simple. Recursively solve the same problem for each chain that
has at least two edges.
We present an O(n log n) algorithm to solve this chain-splitting problem, where n is the
number of vertices of the polygon P . The algorithm uses only simple data structures and
is straightforward to implement.
Before we describe our algorithm, let us consider a naive alternative to it. Many algorithms
have been published that find the convex hull of a simple polygon in linear
time [7, 15, 11, 14, 24]. With slight modifications, any of these algorithms can be used
to find a vertex on the hull of a simple bi-infinite polygonal chain. If we use such an algorithm
to solve the chain-splitting problem, the running time is O(n) plus the time needed
to solve the two subproblems recursively. The worst-case running time t(n) is given by the
recurrence
t(n) = O(n) + max_{0<k<n} { t(k) + t(n−k) },
which has the solution t(n) = Θ(n²).
Figure 8: Two paths with worst-case and best-case splitting behavior
This quadratic behavior occurs in the worst case, shown in Figure
8a, because each recursive step spends linear time splitting a
single edge off the end of the path. In the best case, on the other
hand, each split divides the current path roughly in half, and the
algorithm runs in O(n log n) time. This asymptotic behavior can be
obtained for the path shown in Figure 8b, if the splitting vertices are
chosen wisely.
The best case of this naive algorithm is like a standard divide-and-
conquer approach: at each step the algorithm splits the current path
roughly in half. In general, however, it is difficult to guarantee an
even division, since all vertices on the convex hull might be extremely
close to the two ends of the path. Thus, to avoid quadratic behavior,
we must instead split each path using less than linear time. Other
researchers have solved similar problems by making the splitting cost
depend only on the size of the smaller fragment [8, 10]. If the running
time t(n) obeys the recurrence
t(n) = max_{0<k<n} { t(k) + t(n−k) + O(min(k, n−k)) },
then t(n) = O(n log n). Our method uses a similar idea: the splitting cost is O(log n) plus
a term that is linear in the size of one of the two fragments. The fragment is not necessarily
the smaller of the two, but we can bound its size to ensure an O(n log n) running time
overall. The details of this argument appear in Section 4.5.
We present our algorithm in several steps. We first give a few definitions, then give an
overview of our approach. We follow the informal overview with a pseudo-code description
of the algorithm. Section 4.3 gives more detail about one of the pseudo-code operations,
and Section 4.4 describes the data structure used by the algorithm. Section 4.5 concludes
the presentation of the algorithm by analyzing its running time.
4.1 Definitions
As shown in Section 3, we can find a boolean formula for P by splitting the polygon at its
leftmost and rightmost vertices to get two paths, then working on the two paths separately.
We work with one current path at a time, either upper or lower. If u and v are vertices of the
current path, we use the notation (u; v) to refer to the subpath between u and v, inclusive. The
convex hull of a set of points A is denoted by h(A); we use h(u; v) as shorthand for h((u; v)). A
path (u; v) has |(u; v)| edges; similarly, |h(u; v)| is the number of edges on h(u; v).
We can use the path (u; v) to specify a bi-infinite chain by extending its first and
last edges. Let e u be the edge of (u; v) incident to u, and let * e u be the ray obtained
by extending e u beyond u. Let e v and * e v be defined similarly. Then (u; v) specifies the
bi-infinite polygonal chain obtained by replacing e u by * e u and e v by * e v . In general, for
arbitrary u and v, this bi-infinite chain need not be simple. Our algorithm, however, will
guarantee the simplicity of each bi-infinite chain it considers. We assume in what follows
that * e u and * e v are not parallel, which guarantees that the convex hull of the bi-infinite
chain has at least one vertex. A slight modification to the algorithm is needed if the rays
are parallel.
4.2 The algorithm
This section presents the algorithm that recursively splits a polygonal chain. We first outline
the algorithm and then present it in a pseudo-code format. Subsequent sections give the
details of the operations sketched in this section.
We now outline the algorithm. Given a polygonal path (u; v) with at least two edges,
we partition it at a vertex x to get two pieces (u; x) and (x; v) with roughly the same
number of edges. The vertex x is not necessarily a vertex of h(u; v); this partitioning is
merely preparatory to splitting (u; v) at a hull vertex. In O(j(u; v)j) time we compute
the convex hulls of (u; x) and (x; v) in such a way that for any vertex z of (u; v), we
can easily find h(x; z). Our data structure lets us account for the cost of finding h(x; z) as
part of the cost of building h(u; x) and h(x; v). The details of this accounting appear in
Section 4.5.
The next step of the algorithm locates a vertex z of the convex hull of the bi-infinite
chain (u; v)[ * e u [ * e v . We will split (u; v) at z. The vertex z can be on the path (u; x) or
on the path (x; v). Without loss of generality let us assume that z is a vertex of (u; x);
because u is not a vertex of the convex hull, z 6= u. We recursively split (u; z), partitioning
it at its midpoint, building convex hulls, and so on. However, and this is the key observation,
we do not have to do as much work for (z; v) if z 6= x. We already have the hull h(x; v),
and we can easily find h(z; x) from our data structure for h(u; x). Thus we can recursively
split (z; v) without recomputing convex hulls. Intuitively speaking, we do a full recursion
(including convex hull computation) only on pieces whose length is less than half the length
of the piece for which we last computed convex hulls.
The key to our algorithm's efficiency is avoiding the recomputation of convex hulls. The
naive algorithm builds O(n) hulls whose average size can be as much as n=2; our algorithm
also builds O(n) hulls, but their average size is only O(log n). Our algorithm locates n
splitting vertices in O(log n) time apiece, which contributes another O(n log n) term to the
running time. These two terms dominate the time cost of the algorithm, as Section 4.5
shows.
We present the algorithm more formally in the pseudo-code below. The pseudo-code represents
a convex hull h(x; v) using a data structure called a path hull, denoted by PH (x; v);
the path hull stores the vertices of h(x; v) in a linear array. The algorithm uses the path hull
PH (x; v) to produce PH (x; z) efficiently, for any splitting vertex z in (x; v). The algorithm
consists of two mutually recursive subroutines, f() and p(), whose names stand for full and
partial. The routine f(u; v) partitions (u; v) at x to get two equal parts, builds a path
hull structure for each, and calls p(u; x; v). The subroutine p(u; x; v) uses PH (x; u) and
PH (x; v) to find the splitting vertex z; Section 4.3 gives the details of this operation. The
routine then splits (u; v) at z and recurses on each fragment; it ensures that the required
path hulls have been built whenever p() is called. We start the algorithm by invoking f()
on the entire path.

f(u; v)
begin
1.   if (u; v) is a single edge then return;
     else
     begin
2.       Let x be the middle vertex of (u; v);
3.       Build PH (x; u) and PH (x; v);
4.       p(u; x; v);
     end
end

p(u; x; v)    /* x is a vertex of (u; v), not equal to u or v.
                 Path hulls PH (x; v) and PH (x; u) have been computed. */
begin
5.   Find a vertex z of h((u; v) ∪ *e_u ∪ *e_v). Output z as part of the sequence of
     splitting vertices.
6.   if z = x then begin f(u; x); f(x; v); end
     else
     begin
7.       if z ∈ (u; x) then build PH (x; z) from PH (x; u);
         else build PH (x; z) from PH (x; v);
         if z is a vertex of (u; x) then
8.           begin f(u; z); p(z; x; v); end
         else
9.           begin p(u; x; z); f(z; v); end
     end
end

The chain-splitting algorithm
4.3 Finding a splitting vertex
This section shows how to use the path hull data structure to find the splitting vertex z.
Our method exploits the fact that PH (x; v) represents h(x; v) as a linear array of convex
hull vertices: we perform binary search on the array to find the splitting vertex.
Given a path (u; v), we want to find a vertex of the convex hull of the bi-infinite chain
that (u; v) specifies. Each such vertex belongs to the finite convex hull h(u; v); we solve
our problem by finding a vertex of h(u; v) that is guaranteed to belong to the infinite hull.
The edges of the infinite hull h((u; v) ∪ *e_u ∪ *e_v) have slopes in a range bounded by the
slopes of * e u and * e v . Vertices of the hull have tangent slopes in the same range. We simply
find a vertex of h(u; v) with a tangent slope in the range. Let d u and d v be the direction
vectors of the rays * e u and * e v . Because * e u and * e v are not parallel, d u and d v define an
angular range of less than 180 degrees; define d to be the negative of the bisector of this
angular range. An extreme vertex of h(u; v) in direction d is guaranteed to be a vertex of
the infinite hull. 2 See Figure 9 for an example.
Figure 9: We find an extremal vertex in the direction d
We use binary search on each of the two path hulls PH (u; x) and PH (x; v) to find an
extreme vertex in direction d. We compare the two vertices and pick the more extreme of
the two. If we break ties consistently in the binary searches and in the comparison of the
two extreme vertices (say, by preferring the left vertex of tied pairs), the vertex we find is
guaranteed to be a vertex of the infinite hull.
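The whole recursive construction can be prototyped directly from this rule. The sketch below (our
Python, not the paper's algorithm) finds each splitting vertex by a linear scan for the extreme
vertex in direction d instead of the binary search on path hulls, so it runs in O(n²) time in the
worst case; the formula representation, the helper names, and the general-position and
non-parallel-ray assumptions are ours.

import math

def naive_chain_formula(path):
    """Formula for the half-space to the right of the bi-infinite chain obtained
    from `path` (a list of 2D vertices) by extending its first and last edges.
    The path must be oriented so the region of interest lies to its right, and
    the two extended rays are assumed not to be parallel.  Leaves are edge
    indices i, meaning the half-plane to the right of path[i] -> path[i+1]."""

    def sub(a, b):   return (a[0] - b[0], a[1] - b[1])
    def dot(a, b):   return a[0] * b[0] + a[1] * b[1]
    def cross(a, b): return a[0] * b[1] - a[1] * b[0]
    def unit(a):
        n = math.hypot(a[0], a[1])
        return (a[0] / n, a[1] / n)

    def helper(lo, hi):
        # The current sub-chain uses edges lo .. hi-1, with rays extending
        # beyond path[lo] and path[hi].
        if hi - lo == 1:
            return lo                                  # a single edge: one literal
        du = unit(sub(path[lo], path[lo + 1]))         # direction of the ray past path[lo]
        dv = unit(sub(path[hi], path[hi - 1]))         # direction of the ray past path[hi]
        d = (-(du[0] + dv[0]), -(du[1] + dv[1]))       # negative bisector of the two rays
        # The extreme interior vertex in direction d is a valid splitting vertex.
        z = max(range(lo + 1, hi), key=lambda i: dot(path[i], d))
        turn = cross(sub(path[z], path[z - 1]), sub(path[z + 1], path[z]))
        op = 'and' if turn < 0 else 'or'               # right turn = convex angle on the right
        return (op, helper(lo, z), helper(z, hi))

    return helper(0, len(path) - 1)

For a simple polygon split at its leftmost and rightmost vertices, combining the two chain formulae
with a single 'and' gives a Peterson-style formula for the interior.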
4.4 Implementing path hulls
In this section we describe the path hull data structure used in the previous two sections.
The path hull PH (x; v) represents the convex hull of (x; v). It is not symmetric in its arguments:
it implicitly represents h(x; v′) for all vertices v′ in (x; v), but does not represent
h(v′; v) for any vertex v′ not equal to x. The structure PH (x; v) has three essential properties:
1. PH (x; v) represents h(x; v) by a linear array of vertices. Let v̂ be the vertex of h(x; v)
closest to v on (x; v). Then the array lists the vertices of h(x; v) in clockwise order,
starting and ending with v̂.
2. Given PH (x; v), we can transform it into PH (x; v′) for any vertex v′ in (x; v),
destroying PH (x; v) in the process. Let the vertices of (v; x) be numbered
v = v_1, v_2, . . . , v_k = x; we can successively transform PH (x; v) into PH (x; v_i) for each v_i
in sequence from v to x, in total time proportional to |(x; v)|.
3. PH (x; v) can be built from (x; v) in O(|(x; v)|) time.
We get these properties by adapting Melkman's algorithm for finding the convex hull of
a polygonal path [15]. We satisfy requirement 2 by "recording" the actions of Melkman's
algorithm as it constructs h(x; v), then "playing the tape backwards."
To avoid computing square roots, in practice we do not compute the bisector of the angle defined by du
and dv . Instead, we find the normals to du and dv that point away from the infinite hull, then add the two
to get a direction d strictly between these normals.
Many linear-time algorithms have been proposed to find the convex hull of a simple
polygon [7, 15, 11, 14, 24]. Some of these algorithms need to find a vertex on the hull
to get started; we use Melkman's algorithm because it does not have this requirement. It
constructs the hull of a polygonal path incrementally: it processes path vertices in order,
and at each step it builds the hull of the vertices seen so far.
The algorithm keeps the vertices of the current convex hull in a double-ended queue, or
deque. The deque lists the hull vertices in clockwise order, with the most recently added
hull vertex at both ends of the deque. Let the vertices in the deque be v_b, v_{b+1}, . . . , v_{t−1}, v_t,
where v_b = v_t. The algorithm operates on the deque with push and pop operations that
specify the end of the queue, bottom or top, on which they operate. The algorithm appears
below; it assumes that no three of the points it tests are collinear, though this restriction
is easy to lift.
Get the first three vertices of the path with the function NextVertex() and put
them into the deque in the correct order.
while v ← NextVertex() returns a new vertex do
    if v is outside the angle v_{b+1} v_b v_{t−1} then
    begin
        while v is left of the directed line v_b v_{b+1} do pop(v_b, bottom);
        while v is left of the directed line v_{t−1} v_t do pop(v_t, top);
        push(v, bottom); push(v, top);
    end
Melkman's convex hull algorithm
Figure 10: Vertex v is discarded if it lies in the shaded sector
We now sketch a proof of correctness; Melkman gives a full
proof [15]. We first consider the case in which the vertex v is
discarded. This happens when v is inside the angle v_{b+1} v_b v_{t−1}. (See Figure 10.) We
know that v_{b+1} is connected to v_{t−1} by a
polygonal path, and that v is connected to v b by a polygonal path.
The two paths do not intersect, so v must lie inside the current
hull. When v is not discarded, it lies outside the current hull, and
the algorithm pops hull vertices until it gets to the endpoints of
the tangents from v to the current hull. The algorithm is linear:
if it operates on a path with n vertices, it does at most 2n pushes
and pops.
We can use the algorithm to build an array representation of
the hull. The algorithm does at most n pushes at either end of the
deque, so we can implement the deque as the middle part of an
array of size 2n. Pushes and pops increment and decrement the array indices of the ends of
the queue; pushes write in a new element, pops read one out. The resulting deque contains
the vertices of the convex hull in a contiguous chunk of an array.
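For reference, here is a sketch of the same algorithm in Python (ours, not code from the paper). It
uses collections.deque instead of the fixed array of size 2n described above, keeps the hull in
counterclockwise rather than clockwise order, and assumes the input path is simple, has at least
three vertices, and contains no three collinear points.

from collections import deque

def turn(o, a, b):
    """Positive if o -> a -> b is a left (counterclockwise) turn."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def melkman_hull(path):
    """Convex hull of a simple polygonal path, listed counterclockwise."""
    a, b, c = path[0], path[1], path[2]
    d = deque([c, a, b, c] if turn(a, b, c) > 0 else [c, b, a, c])
    for v in path[3:]:
        # Skip v while it lies inside the cone at the last hull vertex
        # (and hence, because the path is simple, inside the current hull).
        if turn(d[-2], d[-1], v) > 0 and turn(d[0], d[1], v) > 0:
            continue
        while turn(d[-2], d[-1], v) <= 0:      # restore convexity at the top end
            d.pop()
        d.append(v)
        while turn(d[0], d[1], v) <= 0:        # restore convexity at the bottom end
            d.popleft()
        d.appendleft(v)
    return list(d)[:-1]                        # drop the duplicated endpoint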
As we have described it so far, the array data structure has path hull properties 1 and 3;
how can we obtain property 2? When the algorithm builds h(x; v) starting from x and
working toward v, at intermediate steps it produces h(x; v 0 ) for every vertex v 0 in (x; v).
We need to be able to reconstruct these intermediate results. To do this, we add code to
the algorithm to create a transcript of all the operations performed, recording what vertices
are pushed and popped at each step. The structure PH (x; v) stores not only the deque
that represents h(x; v), but also the transcript of the operations needed to create the deque
from scratch. To reconstruct PH (x; v 0 ) from PH (x; v), we read the transcript in reverse
order, performing the inverse of each recorded operation (pushing what was popped, and
vice versa), until the deque represents h(x; v 0 ). We throw away the part of the transcript
we have just read, so that PH (x; v 0 ) stores only the transcript of the operations needed to
create it from scratch. Because we discard every step we have read over, we look at each step of the
transcript at most once during the playback. Therefore, reconstructing the intermediate
results takes time proportional to the original cost of finding PH (x; v). This completes the
proof that the path hull data structure has all three of its required properties.
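The transcript mechanism itself is easy to sketch. The class below (our Python, with illustrative
names) records every push and pop on a growing hull and can roll the structure back by playing
the tape in reverse; a real path hull would drive push and pop from Melkman's algorithm, use the
fixed array of size 2n described above instead of a Python list, and remember the tape length
associated with each path vertex.

class PathHullSketch:
    """Bookkeeping for a deque of hull vertices plus a transcript of the
    push/pop operations that built it, so earlier hulls can be restored."""

    def __init__(self):
        self.hull = []       # current hull vertices, bottom end first
        self.tape = []       # transcript entries: (op, end, vertex)

    def push(self, vertex, end):
        self.tape.append(('push', end, vertex))
        if end == 'bottom':
            self.hull.insert(0, vertex)
        else:
            self.hull.append(vertex)

    def pop(self, end):
        vertex = self.hull.pop(0) if end == 'bottom' else self.hull.pop()
        self.tape.append(('pop', end, vertex))
        return vertex

    def roll_back(self, tape_length):
        """Undo recorded operations until only `tape_length` of them remain."""
        while len(self.tape) > tape_length:
            op, end, vertex = self.tape.pop()
            if op == 'push':                  # the inverse of a push is a pop
                if end == 'bottom':
                    self.hull.pop(0)
                else:
                    self.hull.pop()
            else:                             # the inverse of a pop is a push
                if end == 'bottom':
                    self.hull.insert(0, vertex)
                else:
                    self.hull.append(vertex)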
4.5 Analyzing the running time
In this section we analyze the running time of the chain-splitting algorithm given on page 10.
The analysis uses a "credit" scheme, in which each call to f() or p() is given some number
of credits to pay for the time used in its body and its recursive calls. We give O(n log n)
credits to the first call to f (), then show that all calls have enough credits to pay for their
own work and that of their recursive calls.
We begin the analysis by proving that f() and p() are called O(n) times: Every call
to p(u; x; v) splits (u; v) into two non-trivial subpaths, and every call to f(u; v) for which
(u; v) has more than one edge passes (u; v) on to p(). The initial path can be split only
O(n) times, so the recursion must have O(n) calls altogether.
How much work is done by a call to f(u; v), exclusive of recursive calls? We assume
that the vertices of are stored in an array. Therefore line 2 of f() takes only constant
time. Line 3 is the only step of f() that takes non-constant time; as shown in Section 4.4,
line 3 takes O(|(u; v)|) time. We define the value of a credit by saying that a call f(u; v)
needs |(u; v)| credits, one credit per edge of (u; v), to pay for the work it does, exclusive
of its call to p(). The constant-time steps in f() take O(n) time altogether and hence are
dominated by the rest of the running time.
A call to p(u; x; v) does accountable work in lines 5 and 7. The cost of line 5 is dominated
by two binary searches, which take O(log n) time. Line 5 therefore takes O(n log n) time over
the whole course of the algorithm. Section 4.4 shows that the cost of building PH (x; z) at
line 7 can be accounted as part of the construction cost of the path hull from which PH (x; z)
is derived. Thus we can ignore the work done at line 7 of p(); its cost is dominated by that
of line 3 of f ().
To complete our analysis of the running time, we must bound the cost of all executions
of line 3 of f(). In a single call to f(u; v), line 3 uses |(u; v)| credits. The sum of all credits
used by line 3 is proportional to the time spent executing that line. We give n⌈log₂ n⌉
credits to the first call to f(), then show that this is enough to pay for all executions of
line 3. We use the following two invariants in the proof:
1. A call to f(u; v) is given at least m⌈log₂ m⌉ credits, where m = |(u; v)|, to pay for
itself and its recursive calls.
2. A call to p(u; x; v) is given at least (l + r)⌈log₂ max(l, r)⌉ credits, where l = |(u; x)|
and r = |(x; v)|, to pay for its recursive calls.
Lemma 4.1 If a call to f() or p() is given credits in accordance with invariants 1 and 2,
it can pay for all executions of line 3 it does explicitly or in its recursive calls.
Proof: Let m, l, and r be as defined above. The proof is by induction on m.
A call to f(u; v) with m = 1 gets no credits and needs none, since it does not
reach line 3. There are no calls to p() with m = 1.
A call to f(u; v) with m > 1 gets at least m⌈log₂ m⌉ credits and spends m
of them executing line 3. It has m⌈log₂ m⌉ − m = m⌈log₂(m/2)⌉ to pass on to its call to p(u; x; v).
The larger of l and r is ⌈m/2⌉, and ⌈log₂⌈m/2⌉⌉ = ⌈log₂(m/2)⌉, so the call to
p(u; x; v) gets at least m⌈log₂(m/2)⌉ ≥ (l + r)⌈log₂ max(l, r)⌉ credits, as required
by invariant 2.
A call to p(u; x; v) splits (u; v) into two paths (u; z) and (z; v) with a
and b edges, respectively. The call to p(u; x; v) divides its credits between its
recursive calls evenly according to subpath size. If z = x, then the two calls
to f() get at least l⌈log₂ max(l, r)⌉ ≥ l⌈log₂ l⌉ and r⌈log₂ max(l, r)⌉ ≥ r⌈log₂ r⌉
credits, satisfying invariant 1. If z ≠ x, then without loss of generality assume
that z belongs to (u; x) and line 8 is executed; the other case is symmetric. The
call to f(u; z) gets at least a⌈log₂ max(l, r)⌉ ≥ a⌈log₂ a⌉ credits, as required. The call
to p(z; x; v) gets at least b⌈log₂ max(l, r)⌉ ≥ b⌈log₂ max(b − r, r)⌉ credits, as required by
invariant 2. This completes the proof.
Altogether the calls to f() and p() take O(n log n) time, plus the time spent building
path hulls at line 3. The preceding lemma shows that all the executions of line 3 take only
O(n log n) time, and hence the entire algorithm runs in O(n log n) time.
4.6 Implementation
The algorithm described in this section has been implemented. The implementation is
more general than the algorithm we have so far described: it correctly handles the cases of
collinear vertices on convex hulls and parallel rays on bi-infinite chains. These improvements
are not difficult. Handling collinear vertices requires two changes: the program detects and
merges consecutive collinear polygon edges, reporting them to the user, and the while loop
tests in Melkman's algorithm are changed from ``v is left of '' to "v is on or to the left
of the line supporting." When a chain has parallel infinite rays, its convex hull may be
bounded by two infinite lines. Even though the hull has no vertices, the chain nevertheless
has valid splitting vertices on the hull. The program finds a splitting vertex by searching
for an extreme vertex in a direction perpendicular to the infinite rays, using a special case
to avoid selecting u or v.
As input the program takes a list of polygon vertices in order (either clockwise or
counterclockwise), specified as x-y coordinate pairs. As output the program produces a
Figure 11: The example input, displayed as a polygon
list of the splitting vertices in the order they are computed; it also produces a correctly
parenthesized boolean formula for the input polygon. When the program is applied to the
polygon shown in Figure 11, it produces the following (slightly abbreviated) output:
main: Calling f() on 8.17
p: splitting at vertex 16, 15, 9, 10, 13, 11,
main: Calling f() on 17.25, 0.8
p: splitting at vertex 18, 19, 20, 0, 25, 24,
Boolean formula is:
In this formula, the number i refers to the edge joining vertex i to vertex (i + 1) mod n;
here n is 26.
The program produces graphical as well as textual output. Marc H. Brown created
the graphical displays using an algorithm animation system he is developing at the DEC
Systems Research Center. (Brown's thesis [3] provides more information on algorithm
animation.) The animation shows multiple color views of the state of the computation,
which are updated as the program runs. One view shows the input polygon and highlights
chains, convex hulls, and splitting vertices as the algorithm operates on them. Another view
pictorially displays the formula for each subchain as a boolean combination of half-planes.
Other views show the incremental construction of the boolean formula and its parse tree.
Each view emphasizes a different aspect of the algorithm; together they illuminate many of
its important features.
The title page of this report shows a black-and-white snapshot of the animated algorithm
running on a small example. The snapshot features three views of the algorithm.
The "Geometry" view shows the polygon; in the snapshot, this view highlights the initial
splitting vertices. The "Formula" view shows the development of the boolean formula over
time. The "CSG Parse Tree" view shows the parse tree corresponding to the formula; each
node of the tree displays the region that corresponds to the subtree rooted at the node.
5 Formulae for polyhedra
We have shown that the interior of a simple polygon can be represented by a Peterson-style
formula: a monotone boolean formula that uses each literal once. We would like to find
such a formula for a polyhedron P in 3-space. Here, the literals are half-spaces bounded by
the planes supporting the faces.
In this section we prove that not all polyhedra have a Peterson-style formula. Figure 12
illustrates a simplicial polyhedron (each face is a triangle) with eight vertices and twelve
faces. Six of the faces are labeled; the six unlabeled faces lie on the convex hull of P .
The edge between C and C 0 is a convex angle. The half-spaces defined by faces A and B
intersect the faces A 0 and B 0 . Similarly, the half-spaces defined by A 0 and B 0 intersect faces
A and B. After we establish a couple of lemmas, we will prove that P has no Peterson-style
formula by assuming that it has one and deriving a contradiction.
We begin by observing that any collection of planes divides space into several convex
regions. (In the mathematical literature, this division is usually called an arrangement [5].)
If a polyhedron P has a CSG representation in terms of half-spaces, then we can specify a
subset of the planes bounding these half-spaces and derive a representation for the portion
of P inside any convex region determined by the subset.
More precisely, let f be a boolean formula whose literals are the half-spaces of P ; we
can think of f as an expression tree. If the tree for f has nodes a and b, then we denote the
lowest common ancestor of a and b in f by lca f (a; b). Let H_1, . . . , H_m be a subset of the
half-spaces defined by faces of P. Each point in space can be assigned a string ff ∈ {0, 1}^m
such that the i-th character of ff is 1 if and only if the point is in half-space H i . All the
points assigned the string ff are said to be in the region R ff . We use f j ff to denote the
formula obtained by setting each H_i to the corresponding character of ff and simplifying the result
using the algebraic rules 0 · a = 0, 1 · a = a, 0 + a = a, and 1 + a = 1. The expression
tree for f j ff inherits several important properties from the expression tree for f :
Lemma 5.1 Let f be a formula that uses the half-spaces H_1, . . . , H_m (and possibly
others) and let ff be a string in {0, 1}^m. Then the derived formula f j ff has the following
three properties:
1. If f is monotone, then so is f j ff . Similarly, if f is Peterson-style, so is f j ff .
2. If the expression tree for f j ff has nodes a, b, and c, with c the lowest common ancestor of
a and b, then c = lca f (a; b) in the tree for f .
Figure 12: Two views of a simplicial polyhedron with no Peterson-style formula
3. If the expression tree for f j ff contains a node c at depth k, then the tree for f contains
the node c at depth k.
Proof: All three properties are maintained by the rules that form the expression
tree for f j ff by simplifying the expression tree for f .
The next lemma shows the interaction between the region R ff and boolean formulae f j ff .
Lemma 5.2 If a polyhedron P has a formula f that uses half-spaces H_1, . . . , H_m (and possibly
others) then, for any string ff ∈ {0, 1}^m, the portion of P inside the region R ff is described
by the formula f j ff .
Proof: The statement above simply says that formulae f and f j ff agree inside
the region R ff . This follows from the definition of f j ff and the fact that the
simplification rules do not change the value of the formula.
Two corollaries of Lemma 5.2 give us constraints on the formula of a polyhedron based
on its edges and faces. In these corollaries and the discussion that follows, we will add an
argument to a formula f j ff to emphasize which half-spaces are not fixed by the string ff.
Corollary 5.3 Let P be a polyhedron with Peterson-style formula f . If faces A and B of P
meet at an edge of P , the operator in f that is the lowest common ancestor of A and B,
lca f (A; B), is an "and" if and only if A and B meet in a convex angle.
Proof: Let H_1, . . . , H_m be the half-spaces of P except for the two defined
by A and B. Choose an interior point of the edge formed by faces A and B,
and let ff be its string. The two-variable formula f j ff (AB) must describe the
polyhedron in the vicinity of the edge, so by Lemma 5.1(2), lca f (A; B) is an
"and" if and only if A and B meet in a convex angle.
Corollary 5.4 Let P be a polyhedron with a Peterson-style formula f using half-spaces
and A and B. If the half-space defined by face B intersects face A at some
point with string ff, then f j ff (AB) = A.
Proof: The two-variable formula f j ff (AB) must describe the face A both inside
and outside the half-space of B, so B cannot appear in the formula.
Now we are ready to look at the polyhedron P in Figure 12. Suppose P has a Peterson-
style formula f . Then it has a formula f j 111111 (ABCA describes the region inside
the unlabeled faces. We look at the constraints on this formula and derive a contradiction.
Consider the three faces A, B, and C. By Corollary 5.3 we know that lca f (B;
and lca f 5.4 applied to faces A and C implies that the formula
describing these three faces is
where the string ff 1 appropriately fixes all the half-spaces except A, B, and C. Similarly,
the formula describing A 0 , B 0 , and C 0 is
Now consider the region inside all unlabeled half-spaces and outside C and C 0 . The
portion of P within this region can be described by a Karnaugh map [12]:
The '?' appears because four planes cut space into only fifteen regions; since we want a
monotone formula, monotonicity forces us to make it a '1'. Examining all Peterson-style
formulae on A, B, A 0 , and B 0 reveals that the only formula with this map is
In order to combine the formulae 1, 2, and 3 into a single formula on six variables, we
must determine which operators are repeated in the three formulae. We knew from formulae
1 and 2 that the operators lca f (A; B) and lca f both "and"s-now we know
that they are distinct "and"s because lca f (lca f (A; B); lca f
The "or"s of the first two formulae are distinct because they are descendents of distinct
"and"s. Finally, by Lemma 5.1(3), the "or" of formula 3 is different from the other "or"s
because it is not nested as deeply as the "and"s.
Thus, all five operators of the formula on the six labeled half-spaces appear in the
3. Using the nesting depth of the operators, we know that the formula
looks like (2(2 Filling in the half-space names gives the formula for
the portion of P inside the unlabeled faces:
Notice, however, that in this formula the lca of C and C 0 is an "or". Thus lca f (C; C
"or". But this contradicts Corollary 5.3, so the formula above cannot represent the portion
of P inside the convex hull of P . This contradiction proves that P has no Peterson-style
formula.
6 Closing observations
Though the previous section proves that not all polyhedra have Peterson-style formulae, it
may still be the case that the interior of a polyhedron with n faces can be represented by a
formula using O(n) literals. The trivial upper bound on the size of a formula is O(n 3 ). In
fact, the interiors of any set of cells formed by a collection of n planes can be described by
a formula that represents each convex cell as the "and" of its bounding planes and "or"s
the cell representations together. The size of the formula is at worst the total number of
sides of the cells formed by n planes, which is known to be O(n 3 ) [5].
Recent work by Paterson and Yao improves the upper bound on the size of a formula to
O(n²). Their method, like the trivial one, slices the polyhedron into convex polyhedra
by cutting along the planes supporting the faces. By cutting along the planes in a particular
order, and by cutting a subpolyhedron with a plane only when the defining face lies on the
subpolyhedron boundary, the algorithm produces convex pieces whose total number of sides
is O(n 2 ). The formula corresponding to this decomposition has size O(n 2 ).
Neither of the formulae just described is monotone. Every polyhedron does have a
monotone formula, however: For the formula of each cell in the trivial decomposition, we
can "and" all of the half-spaces that contain the cell and then "or" these cell representations.
This formula uses no negations-to see that it defines the polyhedron, notice first that every
point in the polyhedron is in some cell. Second, to move from a cell in the polyhedron to
any point outside the polyhedron you must cross out of a half-space when you first leave
the polyhedron. Thus the representation of an interior cell contains no points outside the
polygon. This implies that the formula evaluates to false at exterior points and shows that
there is a monotone formula with size O(n 4 ).
It would be interesting to characterize the polyhedra that can be represented by Peterson-
style formulae. Peterson [21] showed that the representations of polygons give such formulae
for extrusions and pyramids. We would like to extend this class.
We can also consider looking for formulae for polygons with curved edges. If each edge of
a polygon is a piece of a bi-infinite curve that does not intersect itself, then an edge defines
a half-space with a curved boundary-perhaps we can represent the polygon as a formula
on these half-spaces.
Suppose we restrict the curves so that any two intersect in at most one point (the
pseudoline condition). Then it can be proved that there is always a polygon vertex v such
that the extensions to infinity (through v) of the two edges adjacent to v never intersect
the boundary of the polygon. Since this vertex v can be used as a splitting vertex, such
polygons have Peterson-style formulae.
If the curves are allowed to intersect in more than one point, then it may be impossible
for a formula to represent a polygon without representing other areas of the plane, too. For
example, if two ovals intersect in four points, then the region inside the first oval and outside
the second is disconnected. No boolean formula can represent one connected component
without including the other (unless we are allowed to introduce auxiliary curves).
Acknowledgments
We thank D. P. Peterson and Jorge Stolfi for their helpful discussions and polyhedra. We
also thank Cynthia Hibbard and Mart'in Abadi for their careful reading of the manuscript.
--R
Interactive modeling for design and analysis of solids.
Algorithm Animation.
Computational geometry and convexity.
Algorithms in Combinatorial Geometry
Polygon properties calculated from the vertex neighborhoods.
Finding the convex hull of a simple polygon.
time algorithms for visibility and shortest path problems inside triangulated simple polygons.
A kinetic framework for computational geometry.
Sorting Jordan sequences in linear time.
On finding the convex hull of a simple polygon.
Digital Logic and Computer Design.
An Introduction to Solid Modeling.
A linear algorithm for finding the convex hull of a simple polygon.
Geometric Modeling.
Solid modelling and parametric design in the Medusa system.
Art Gallery Theorems and Algorithms.
Binary partitions with applications to hidden-surface removal and solid modelling
Analysis of set patterns.
Halfspace representation of extrusions
Computational Geometry.
Representations for rigid solids: Theory
Convex hulls of piecewise-smooth Jordan curves
Rational B-splines for curve and surface representation
Convex decomposition of simple polygons.
The PADL-1.0/2 system for defining and displaying solid objects
Graphical input to a Boolean solid modeller.
--TR
Geometric modeling
Computational geometry: an introduction
Algorithms in combinatorial geometry
On-line construction of the convex hull of a simple polyline
Convex hulls of piecewise-smooth Jordan curves
An introduction to solid modeling
Art gallery theorems and algorithms
Polygon properties calculated from the vertex neighborhoods
Sorting Jordan sequences in linear time
Representations for Rigid Solids: Theory, Methods, and Systems
Convex Decomposition of Simple Polygons
Digital Logic and Computer Design
--CTR
Michael S. Paterson , F. Frances Yao, Optimal binary space partitions for orthogonal objects, Proceedings of the first annual ACM-SIAM symposium on Discrete algorithms, p.100-106, January 22-24, 1990, San Francisco, California, United States
Marc H. Brown , John Hershberger, Color and Sound in Algorithm Animation, Computer, v.25 n.12, p.52-63, December 1992
M. S. Paterson , F. F. Yao, Binary partitions with applications to hidden surface removal and solid modelling, Proceedings of the fifth annual symposium on Computational geometry, p.23-32, June 05-07, 1989, Saarbruchen, West Germany
J. Friedman , J. Hershberger , J. Snoeyink, Compliant motion in a simple polygon, Proceedings of the fifth annual symposium on Computational geometry, p.175-186, June 05-07, 1989, Saarbruchen, West Germany
Tamal K. Dey, Triangulation and CSG representation of polyhedra with arbitrary genus, Proceedings of the seventh annual symposium on Computational geometry, p.364-371, June 10-12, 1991, North Conway, New Hampshire, United States
J. Snoeyink , J. Hershberger, Sweeping arrangements of curves, Proceedings of the fifth annual symposium on Computational geometry, p.354-363, June 05-07, 1989, Saarbruchen, West Germany
Michael T. Goodrich, Applying parallel processing techniques to classification problems in constructive solid geometry, Proceedings of the first annual ACM-SIAM symposium on Discrete algorithms, p.118-128, January 22-24, 1990, San Francisco, California, United States | boundary-to-CSG conversion algorithms;solid modeling;simple polygons;constructive solid geometry |
378636 | Nice point sets can have nasty Delaunay triangulations. | We consider the complexity of Delaunay triangulations of sets of points in $\Real^3$ under certain practical geometric constraints. The \emph{spread} of a set of points is the ratio between the longest and shortest pairwise distances. We show that in the worst case, the Delaunay triangulation of $n$ points in~$\Real^3$ with spread $\Delta$ has complexity $\Omega(\min\set{\Delta^3, n\Delta, n^2})$ and $O(\min\set{\Delta^4, n^2})$. For the case $\Delta = \Theta(\sqrt{n})$, our lower bound construction consists of a uniform sample of a smooth convex surface with bounded curvature. We also construct a family of smooth connected surfaces such that the Delaunay triangulation of any good point sample has near-quadratic complexity. |
Delaunay triangulations and Voronoi diagrams are used as a fundamental tool in several geometric
application areas, including finite-element mesh generation [17, 25, 37, 40], deformable surface
modeling [16], and surface reconstruction [1, 3, 4, 5, 12, 35]. Many algorithms in these application
domains begin by constructing the Delaunay triangulation of a set of n points in IR 3 . Delaunay triangulations
can have
complexity
in the worst case, and as a result, all these algorithms have
worst-case running
time
(n 2 ). However, this behavior is almost never observed in practice except
for highly-contrived inputs. For all practical purposes, three-dimensional Delaunay triangulations
appear to have linear complexity.
One way to explain this frustrating discrepancy between theoretical and practical behavior
would be to identify geometric constraints that are satised by real-world input and to analyze
Delaunay triangulations under those constraints. These constraints would be similar to the realistic
input models such as fatness or simple cover complexity [8, 46], which many authors have used
to develop geometric algorithms with good practical performance. Unlike these works, however,
our (immediate) goal is not to develop new algorithms, but rather to formally explain the good
practical performance of existing code.
Portions of this work were done while the author was visiting INRIA, Sophia-Antipolis, with the support of a
UIUC/CNRS/INRIA travel grant. This research was also partially supported by a Sloan Fellowship and by NSF
CAREER grant CCR-0093348. An extended abstract of this paper was presented at the 17th Annual ACM Symposium
on Computational Geometry [30]. See http://www.cs.uiuc.edu/~jee/pubs/spread.html for the most recent
version of this paper.
Dwyer [23, 24] showed that if a set of points is generated uniformly at random from the unit ball,
its Delaunay triangulation has linear expected complexity. Golin and Na [33] recently derived
a similar result for random points on the surface of a three-dimensional convex polytope with constant
complexity. Although these results are encouraging, they are unsatisfying as an explanation
of practical behavior. Real-world point data generated by laser range finders, digital cameras,
tomographic scanners, and similar input devices is often highly structured.
This paper considers the complexity of Delaunay triangulations under two types of practical
geometric constraints. First, in Section 2, we consider the worst-case Delaunay complexity as
a function of both the number of points and the spread, the ratio between its diameter and
the distance between its closest pair. For any n and Δ, we construct a set of n points with
spread Δ whose Delaunay triangulation has complexity Ω(min{Δ^3, nΔ, n^2}). For the case
Δ = Θ(√n), our lower bound construction consists of a grid-like sample of a right circular
cylinder with constant height and radius. We also show that the worst-case complexity of a
Delaunay triangulation with spread Δ is O(min{Δ^4, n^2}). We conjecture that our lower bounds
are tight, and sketch a possible technique to improve our upper bounds.
An important application of Delaunay triangulations that has received a lot of attention recently
is surface reconstruction: Given a set of points from a smooth surface Σ, reconstruct an
approximation of Σ. Several algorithms provably reconstruct surfaces if the input points satisfy
certain sampling conditions [4, 5, 12, 35]. In Section 3, we consider the complexity of Delaunay
triangulations of good samples of smooth surfaces. Not surprisingly, oversampling almost any surface
can produce a point set whose Delaunay triangulation has quadratic complexity. We show
that even surface data with no oversampling can have quadratic Delaunay triangulations and that
there are smooth surfaces where every good sample has near-quadratic Delaunay complexity. We
also derive similar results for randomly distributed points on non-convex smooth surfaces. An
important tool in our proofs is the definition of sample measure, which measures the intrinsic
difficulty of sampling a smooth surface for reconstruction.
Throughout the paper, we analyze the complexity of three-dimensional Delaunay triangulations
by counting the number of edges. Two points are joined by an edge in the Delaunay triangulation
of a set S if and only if they lie on a sphere with no points of S in its interior. Euler's formula
implies that any three-dimensional triangulation with n vertices and e edges has at most 2e - 2n
triangles and e - n tetrahedra, since the link of every vertex is a planar graph.
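These counts are easy to check empirically. The following sketch (my own illustration, not from the paper; it assumes NumPy and SciPy are available) builds the Delaunay triangulation of a random 3D point set with Qhull and verifies the two bounds quoted above.

    from itertools import combinations
    import numpy as np
    from scipy.spatial import Delaunay

    # Count vertices, edges, triangles, and tetrahedra of a 3D Delaunay
    # triangulation and check f <= 2e - 2n and t <= e - n.
    rng = np.random.default_rng(0)
    points = rng.random((200, 3))
    dt = Delaunay(points)

    edges, triangles = set(), set()
    for tet in dt.simplices:                      # each simplex is 4 vertex indices
        edges.update(combinations(sorted(tet), 2))
        triangles.update(combinations(sorted(tet), 3))

    n, e, f, t = len(points), len(edges), len(triangles), len(dt.simplices)
    print(n, e, f, t)
    assert f <= 2 * e - 2 * n and t <= e - n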
We define the spread of a set of points (also called the distance ratio [18]) as the ratio between the
longest and shortest pairwise distances. In this section, we derive upper and lower bounds on the
worst-case complexity of the Delaunay triangulation of a point set in IR^3, as a function of both the
number of points and the spread. The spread is minimized at Θ(n^{1/3}) when the points are packed
into a tight lattice, in which case the Delaunay triangulation has only linear complexity. On the
other hand, all known examples of point sets with quadratic-complexity Delaunay triangulations,
such as points on the moment curve or a pair of skew lines, have spread Θ(n). Thus, it is natural to
ask how the complexity of the Delaunay triangulation changes as the spread varies between these
two extremes.
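As a concrete illustration (my own, not from the paper), the spread of a finite point set can be computed directly from pairwise distances; the lattice and circle examples below show the two extreme regimes just mentioned.

    import numpy as np
    from scipy.spatial.distance import pdist

    def spread(points):
        d = pdist(points)                 # all pairwise Euclidean distances
        return d.max() / d.min()

    g = np.linspace(0.0, 1.0, 10)
    lattice = np.array(np.meshgrid(g, g, g)).reshape(3, -1).T   # 1000 grid points
    print(spread(lattice))                # tight packing: Theta(n^(1/3)), here ~15.6

    theta = np.linspace(0.0, 2 * np.pi, 1000, endpoint=False)
    circle = np.c_[np.cos(theta), np.sin(theta), np.zeros_like(theta)]
    print(spread(circle))                 # points on a curve: Theta(n), here ~318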
The spread of a set of points is loosely related to its dimensionality. If a set uniformly covers a
bounded region of space, a surface of bounded curvature, or a curve of bounded curvature, its spread
is respectively Θ(n^{1/3}), Θ(n^{1/2}), or Θ(n). The case of surface data is particularly interesting in light
of numerous algorithms that reconstruct surfaces using a subcomplex of the Delaunay triangulation.
We will discuss surface reconstruction in more detail in the next section. Indyk et al. [36] observed
that molecular data usually has sublinear spread, by examining a database of over 100,000 small
drug molecules.
Several algorithmic and combinatorial bounds are known that depend favorably on spread,
especially for dense point sets; a d-dimensional point set is dense if its spread is O(n^{1/d}). Edelsbrunner
et al. [28] showed that a dense point set in the plane has at most O(n^{7/6}) halving lines^1,
and a dense point set in IR^3 has at most O(n^{7/3}) halving planes. The best upper bounds known
for arbitrary point sets are O(n^{4/3}) [19] and O(n^{5/2}) [41], respectively. Valtr [42, 43, 44] proved
several other combinatorial bounds for dense planar sets that improve the corresponding worst-case
bounds. Verbarg [45] describes an efficient algorithm to find approximate center points in
dense point sets. Cardoze and Schulman [14], Indyk et al. [36], and Gavrilov et al. [31] describe
algorithms for approximate geometric pattern matching whose running times depend favorably on
the spread of the input set. Clarkson [18] describes data structures for nearest neighbor queries in
arbitrary metric spaces which are efficient if the spread of the input is small.
Although our results are described in terms of spread, they also apply to other more robust
quality measures. For example, we could define the average spread of a point set as the average
(in some sense), over all points p, of the ratio between the distances from p to its farthest and
nearest neighbors. In each of our constructions, the distance ratios of different points differ by at
most a small constant factor, so our results apply to average spread as well.
2.1 Lower Bounds
The crucial special case of our lower bound construction is Δ = Θ(√n). For any positive integer x,
let [x] denote the set {1, 2, ..., x}. Our construction consists of n evenly spaced points on a helical
space curve:

    S_√n = { ( 2πk/n, cos(2πk/√n), sin(2πk/√n) ) | k ∈ [n] }.

See Figure 1. S_√n is a grid-like uniform ε-sample (see Section 3) of a right circular cylinder, where
ε = Θ(1/√n). By adding additional points on two hemispherical caps at the ends of the cylinder,
we can extend S_√n into a uniform ε-sample of a smooth convex surface with bounded curvature
and constant local feature size. The closest pair of points in S_√n is at distance Θ(1/√n) and the
diameter of S_√n is Θ(1), so the spread of S_√n is Θ(√n). We will show that the Delaunay
triangulation of S_√n has complexity Ω(n^{3/2}).
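The construction is easy to reproduce numerically. The sketch below (my own; it uses the parameterization reconstructed above and assumes SciPy's Qhull wrapper) builds S_√n and counts its Delaunay edges; the count grows like n^{3/2} rather than linearly.

    from itertools import combinations
    import numpy as np
    from scipy.spatial import Delaunay

    def helix_points(n, seed=0):
        k = np.arange(1, n + 1)
        t = 2 * np.pi * k / np.sqrt(n)            # about sqrt(n) points per turn
        pts = np.c_[t / np.sqrt(n), np.cos(t), np.sin(t)]
        # tiny jitter avoids exactly cospherical subsets that upset Qhull
        return pts + np.random.default_rng(seed).normal(scale=1e-9, size=pts.shape)

    n = 2500
    dt = Delaunay(helix_points(n))
    edges = {e for tet in dt.simplices for e in combinations(sorted(tet), 2)}
    print(len(edges), n ** 1.5)                   # edge count is a constant multiple of n^(3/2)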
Let h_δ(t) denote the helix (δt, cos t, sin t), where the parameter δ > 0 is called the pitch. The
combinatorial structure of the Delaunay triangulation depends entirely on the signs of certain
insphere determinants. Using elementary trigonometric identities and matrix operations, we can
^1 Edelsbrunner et al. [28] only prove the upper bound O(n^{5/4}/log n); the improved bound follows immediately
from their techniques and Dey's more recent O(n^{4/3}) worst-case bound [19].
Figure 1. A set of n points whose Delaunay triangulation has complexity Ω(n^{3/2}).
simplify the insphere determinant for five points on this helix.
We obtain the surprising observation that changing the pitch does not change the combinatorial
structure of the Delaunay triangulation of any set of points on the helix. (More generally, scaling
any set of points on any circular cylinder along the cylinder's axis leaves the Delaunay triangulation
invariant.) Thus, for purposes of analysis, it suffices to consider the case δ = 1, where h(t) =
(t, cos t, sin t).
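This pitch-invariance is easy to spot-check numerically (a sketch of my own, not part of the paper's argument): the insphere determinant of five helix points, evaluated for two different pitches at the same parameter values, always has the same sign.

    import numpy as np

    def insphere_det(params, pitch):
        pts = np.array([[pitch * t, np.cos(t), np.sin(t)] for t in params])
        m = np.array([np.r_[p, p @ p, 1.0] for p in pts])   # lifted points (x, y, z, |p|^2, 1)
        return np.linalg.det(m)

    rng = np.random.default_rng(1)
    for _ in range(1000):
        t = rng.uniform(-np.pi, np.pi, 5)
        d1, d2 = insphere_det(t, 1.0), insphere_det(t, 0.37)
        if min(abs(d1), abs(d2)) > 1e-9:          # skip nearly degenerate samples
            assert np.sign(d1) == np.sign(d2)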
Our first important observation is that any set of points on a single turn of any helix has a
neighborly Delaunay triangulation, meaning that every pair of points is connected by a Delaunay
edge. For any real value t, define the bitangent sphere σ(t) to be the unique sphere passing
through h(t) and h(−t) and tangent to the helix at those two points.
Lemma 2.1. For any 0 < t < π, the sphere σ(t) intersects the helix h only at its two points of
tangency.
Proof: Symmetry considerations imply that the bitangent sphere must be centered on the y-axis,
so it is described by the equation x^2 + (y − a)^2 + z^2 = r^2 for some constants a and r. Let γ
denote the intersection curve of σ(t) and the cylinder y^2 + z^2 = 1. Every intersection point between σ(t)
and the helix must lie on γ. If we project the helix and the intersection curve to the xy-plane,
we obtain the sinusoid y = cos x and a portion of the parabola p(x) = (x^2 + a^2 + 1 − r^2)/(2a).
These two curves meet tangentially at the points (t, cos t) and (−t, cos t).
The mean value theorem implies that p(x) = cos x at most four times in the range −π < x < π.
(Otherwise, the curves y = p''(x) and y = −cos x would intersect more than twice in that
range.) Since the curves meet with even multiplicity at two points, those are the only intersection
points in the range −π < x < π. Since p(x) is concave, we have p(x) < cos x for |x| ≥ π, so there
are no intersections with |x| ≥ π. Thus, the curves meet only at their two points of tangency.
Corollary 2.2. Any set S of n points on the helix h(t) in the range −π < t < π has a neighborly
Delaunay triangulation.
Figure 2. The intersection curve of the cylinder and a bitangent sphere projects to a parabola on the xy-plane.
Proof: Let p and q be arbitrary points in S, and let B be the unique ball tangent to the helix at
p and q. By Lemma 2.1, B does not otherwise intersect the helix and therefore contains no point
in S. Thus, p and q are neighbors in the Delaunay triangulation of S.
We can now easily complete the analysis of our helical point set S_√n. Lemma 2.1 implies that
every point in S_√n is connected by a Delaunay edge to every other point less than a full turn
around the helix h_{1/√n}(t). Each full turn of the helix contains at least ⌊√n⌋ points. Thus, except
for points on the first and last turn, every point has at least 2⌊√n⌋ Delaunay neighbors, so the
total number of Delaunay edges is more than 2⌊√n⌋(n − 2⌊√n⌋) > 2n^{3/2} − 4n. This crude lower
bound does not count Delaunay edges in the first and last turns of the helix, nor edges that join
points more than one turn apart. It is not difficult to show that there are at most O(n) uncounted
Delaunay edges if √n is an integer [29] and at most O(n^{3/2}) uncounted edges in general.
Theorem 2.3. For any n, there is a set of n points in IR^3 with spread Θ(√n) whose Delaunay
triangulation has complexity Ω(n^{3/2}). Moreover, this point set is a uniform sample of a smooth
convex surface with constant local feature size.
We generalize our helix construction to other values of the spread as follows.
Theorem 2.4. For any n and Δ = Ω(n^{1/3}), there is a set of n points in IR^3 with spread Θ(Δ)
whose Delaunay triangulation has complexity Ω(min{Δ^3, nΔ, n^2}).
Proof: There are three cases to consider, depending on whether the spread is at least n, between
√n and n, or at most √n. The first case is trivial.
For the case √n ≤ Δ ≤ n, we take a set of evenly spaced points on a helix with pitch Δ/n:

    S_Δ = { ( 2πk/n, cos(2πk/Δ), sin(2πk/Δ) ) | k ∈ [n] }.

Every point in S_Δ is connected by a Delaunay edge to every other point less than a full turn away on
the helix, and each turn of the helix contains Θ(Δ) points, so the total complexity of the Delaunay
triangulation is Ω(nΔ).
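A quick numerical check of this intermediate case (my own sketch, using the parameterization reconstructed above): generate S_Δ for a spread target between √n and n and verify that the measured spread is proportional to Δ.

    import numpy as np
    from scipy.spatial.distance import pdist

    def helix_family(n, delta):
        k = np.arange(1, n + 1)
        t = 2 * np.pi * k / delta                 # about delta points per turn
        return np.c_[(delta / n) * t, np.cos(t), np.sin(t)]   # pitch delta/n

    n, delta = 4000, 200                          # sqrt(n) <= delta <= n regime
    d = pdist(helix_family(n, delta))
    print(d.max() / d.min())                      # roughly proportional to delta (~210 here)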
The final case n^{1/3} ≤ Δ ≤ √n is somewhat more complicated. Our point set S consists of several
copies of our helix construction, with the helices positioned at the points of a square lattice, so the
construction loosely resembles a mattress. Specifically, each helix carries evenly spaced points, with
Θ(√r) points per turn, where r and w are parameters to be determined shortly. This set contains
n points. The diameter of S is Θ(w) and the closest pair distance is Θ(1/√r), so its spread is
Δ = Θ(w√r). Thus, given n and Δ, we choose w and r accordingly.
To complete our analysis, we need to show that Delaunay circumspheres from one helix do not
interfere significantly with nearby helices. Let σ_δ(t) denote the sphere tangent to the helix h_δ at
h_δ(t) and h_δ(−t), for some 0 < δ ≤ 1 and 0 < t ≤ π/2. We claim that the radius of this sphere
is less than 3. Since σ_δ(t) is centered on the y-axis (see Lemma 2.1), we can compute its radius
by computing its intersection with the y-axis. The intersection points satisfy a determinant
equation that, for any δ > 0 and t > 0, simplifies to a closed-form expression for the radius of σ_δ(t).
We easily verify that this radius is an increasing function of both δ and t in the range of interest. Thus,
to prove our claim, it suffices to observe that the radius of σ_1(π/2) is less than 3.
Since adjacent helices are separated by distance 6, every point in S is connected by a Delaunay
edge to every point at most half a turn away in the same helix. Each turn of each helix contains
Θ(√r) points, so the Delaunay triangulation of S has complexity Ω(n√r).
2.2 Upper Bounds
Let B be a ball of radius R in IR^3, and let b_1, ..., b_k be balls of radius at least r, where r ≤ R.
Our upper bound proof uses geometric properties of the 'Swiss cheese' C = B \ ⋃_i b_i; see
Figure 3(a). In our upper bound proofs, B will be a ball that contains a subset of the points,
and each b_i will be an empty circumsphere of some Delaunay edge.
Lemma 2.5. The surface area of C is O(R^3/r).
Proof: The outer surface ∂C ∩ ∂B clearly has area O(R^2), so it suffices to bound the
surface area of the 'holes'. For each i, let H_i be the boundary of the ith hole, and let
H = ⋃_i H_i \ ∂B. For any point x ∈ H, let s_x denote the open line segment of length r
extending from x towards the center of the ball b_i with x on its boundary. (If x lies on the surface
of more than one b_i, choose one arbitrarily.) Let S = ⋃_{x∈H} s_x be the union of all such segments,
and for each i, let S_i = ⋃_{x∈H_i} s_x. Each S_i is a fragment of a spherical shell of thickness r inside
the ball b_i. See Figure 3(b).
Figure 3. (a) Swiss cheese (in IR^2). (b) Shell fragments used to bound its surface area.
We can bound the volume of each shell fragment S_i as follows: vol(S_i) ≥ r · area(H_i)/3,
where r_i ≥ r is the radius of b_i. The triangle inequality implies that s_x and s_y are disjoint for any
two points x, y ∈ H, so the shell fragments S_i are pairwise disjoint. Finally, since S fits inside a
ball of radius R + r ≤ 2R, its volume is O(R^3). Thus, we have
area(H) = Σ_i area(H_i) ≤ (3/r) Σ_i vol(S_i) ≤ (3/r) vol(S) = O(R^3/r).
At this point, we would like to argue that any unit ball whose center is on the boundary
of C contains a constant amount of surface area of C, so that we can apply a packing argument.
Unfortunately, C might contain isolated components and thin handles with arbitrarily small surface
area (like the small triangular component in Figure 3(a)). Thus, we consider balls centered slightly
away from the boundary of C.
Lemma 2.6. Let U be any unit ball whose center is in C and at distance 2/3 from ∂C. Then U
contains Ω(1) surface area of C.
Proof: Without loss of generality, assume that U is centered at the origin and that (0, 0, 2/3) is
the closest point of ∂C to the origin. Let U′ be the open ball of radius 2/3 centered at the origin,
let V be the open unit ball centered at (0, 0, 5/3), and let W be the cone whose apex is the origin
and whose base is the circle ∂U ∩ ∂V. See Figure 4. U′ lies entirely inside C, and since r ≥ 1, we
easily observe that V lies entirely outside C. Thus, the surface area of ∂C ∩ W ⊆ ∂C ∩ U is at least
the area of the spherical cap ∂U′ ∩ W, which is exactly 4π/27.
Theorem 2.7. Let S be a set of points in IR 3 whose closest pair is at distance 2, and let r be any
real number. Any point in S has O(r^2) Delaunay neighbors at distance at most r.
Proof: Let o be an arbitrary point in S, and let B be a ball of radius r centered at o. Call a
Delaunay neighbor of o a friend if it lies inside B, and call a friend q interesting if there is another
Figure 4. Proof of Lemma 2.6.
point p ∈ S (not necessarily a Delaunay neighbor of o) such that |op| < |oq| and ∠poq < 1/r. A
simple packing argument implies that o has at most O(r^2) boring friends.
Let q be an interesting friend of o, and let p be a point that makes q interesting, as described
above. Since q is a Delaunay neighbor of o, there is a ball d q that has o and q on its boundary
and no points of S in its interior. In particular, p is outside d q , so the radius of d q is greater
than the radius of the circle passing through o, p, and q. Let c be the center of this circle. Since
∠poq < 1/r, we must have ∠pcq < 2/r, and since |pq| ≥ 2, we must have |cq| > r. See Figure 5.
We have just shown that the radius of d q is at least r.
Figure 5. The radius of any interesting Delaunay ball is at least r.
For every interesting friend q, let b_q be the ball concentric with d_q with radius 2/3 less than
the radius of d_q, and let U_q be the unit-radius ball centered at q. We now have a set of unit balls,
one for each interesting friend of o, whose centers lie at distance exactly 2/3 from the boundary
of the Swiss cheese C = B \ ⋃_q b_q. By Lemma 2.5, C has surface area O(r^2), and by Lemma 2.6,
each unit ball U_q contains Ω(1) surface area of C. Since the unit balls are disjoint, it follows that
o has at most O(r^2) interesting friends.
Theorem 2.8. Let S be a set of points in IR^3 whose closest pair is at distance 2 and whose diameter
is 2Δ, and let r be any real number. There are O(Δ^3/r) points in S with a Delaunay neighbor at
distance at least r.
Proof: Call a point far-reaching if it has a Delaunay neighbor at distance at least r, and let Q
be the set of far-reaching points. Let B be a ball of radius 2Δ containing S. For each q ∈ Q, let
f_q be a maximal empty ball containing q and its furthest Delaunay neighbor, and let b_q be the
concentric ball with radius 2/3 smaller than f_q. By construction, each ball b_q has radius at least
r/2 − 2/3. Finally, for any far-reaching point q, let U_q be the unit-radius ball centered at q. By
Lemma 2.5, the Swiss cheese C = B \ ⋃_q b_q has surface area O(Δ^3/r), and by Lemma 2.6, each
unit ball U_q contains Ω(1) surface area of C. Since these unit balls are disjoint, there are at most
O(Δ^3/r) of them.
Corollary 2.9. Let S be a set of points in IR^3 with spread Δ. The Delaunay triangulation of S has
complexity O(Δ^4).
Proof: For all r, let F(r) be the number of far-reaching points in S, i.e., those with Delaunay edges
of length at least r. From Theorem 2.8, we have F(r) = O(Δ^3/r). By Theorem 2.7, if the farthest
Delaunay neighbor of a point p is at distance between r and 2r, then p has O(r^2) Delaunay neighbors.
Thus, the total number of Delaunay edges is at most
Σ_{i=0}^{lg Δ} F(2^i) · O(2^{2i}) = Σ_{i=0}^{lg Δ} O(Δ^3 · 2^i) = O(Δ^4).
2.3 Conjectured Upper Bounds
We conjecture that the lower bounds in Theorem 2.4 are tight, but Corollary 2.9 is the best upper
bound known. Nearly matching upper bounds could be derived from the following conjecture,
using a divide and conquer argument suggested by Edgar Ramos (personal communication).
Let S be a well-separated set of points with closest pair distance 1, lying in two balls of radius Δ
that are separated by distance at least cΔ for some constant c > 1. Call an edge in the Delaunay
triangulation of S a crossing edge if it has one endpoint in each ball.
Conjecture 2.10. Some point in S is an endpoint of O(Δ) crossing edges.
Lemma 2.11. Conjecture 2.10 implies that S has O(min{Δ^3, nΔ, n^2}) crossing edges.
Proof: Theorem 2.8 implies that only O(Δ^2) points can be endpoints of crossing edges. Thus, we
can assume without loss of generality that n = O(Δ^2). We compute the total number of crossing
edges by iteratively removing the point with the fewest crossing edges and retriangulating the
resulting hole, say by incremental flipping [27]. Conjecture 2.10 implies that we delete only O(Δ)
crossing edges with each point, so altogether we delete O(nΔ) crossing edges. Not all of
these edges are in the original Delaunay triangulation, but that only helps us.
Theorem 2.12. Conjecture 2.10 implies that the Delaunay triangulation of n points in IR^3 with
spread Δ has complexity O(min{Δ^3 log Δ, nΔ, n^2}).
Proof: Assume Conjecture 2.10 is true, and let S be an arbitrary set of n points with diameter Δ,
where the closest pair of points is at unit distance. S is contained in an axis-parallel cube C of
width Δ. We construct a well-separated pair decomposition of S [13], based on a simple octtree
decomposition of C. The octtree has O(log Δ) levels. At each level i, there are 8^i cells, each a cube
of width Δ/2^i. Our well-separated pair decomposition includes, for each level i, the points in any
pair of level-i cells separated by a distance between cΔ/2^i and 2cΔ/2^i. A simple packing argument
implies that any cell in the octtree is paired with O(1) other cells, all at the same level, and so
any point appears in O(log Δ) subset pairs. Every Delaunay edge of S is a crossing edge for some
well-separated pair of cells.
Lemma 2.11 implies that the points in any well-separated pair of level-i cells have O(Δ^3/8^i)
crossing Delaunay edges. Since there are O(8^i) such pairs, the total number of crossing edges
between level-i cells is O(Δ^3). Thus, there are O(Δ^3 log Δ) Delaunay edges altogether.
Lemma 2.11 also implies that for any well-separated pair of level-i cells, the average number of
crossing edges per point is O(Δ/2^i). Since every point belongs to a constant number of subset pairs
at each level, the total number of crossing edges at level i is O(nΔ/2^i). Thus, the total number of
Delaunay edges is O(nΔ).
This upper bound is still a logarithmic factor away from our lower bound construction when
Δ = O(√n). However, our argument is quite conservative; all crossing edges for a well-separated
pair of subsets are counted, even though some or all of these edges may be blocked by other points
in S. A more careful analysis would probably eliminate the final logarithmic factor.
3 Nice Surface Data
Let Σ be a smooth surface without boundary in IR^3. The medial axis of Σ is the closure of the set
of points in IR^3 that have more than one nearest neighbor on Σ. The local feature size of a surface
point x, denoted lfs(x), is the distance from x to the medial axis of Σ. Let S be a set of sample
points on Σ. Following Amenta and Bern [1], we say that S is an ε-sample of Σ if the distance
from any point x ∈ Σ to the nearest sample point is at most ε · lfs(x).
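Operationally, the ε-sample condition can be checked against a dense set of test points when lfs is known; the sketch below (my own, with the unit sphere as an example where lfs ≡ 1) is one way to do that.

    import numpy as np
    from scipy.spatial import cKDTree

    def is_epsilon_sample(sample, surface_points, lfs, eps):
        # distance from each test surface point to its nearest sample point
        dist, _ = cKDTree(sample).query(surface_points)
        return bool(np.all(dist <= eps * lfs(surface_points)))

    def sphere_points(m, seed):
        v = np.random.default_rng(seed).normal(size=(m, 3))
        return v / np.linalg.norm(v, axis=1, keepdims=True)

    sample, test = sphere_points(5000, 0), sphere_points(100000, 1)
    print(is_epsilon_sample(sample, test, lambda x: np.ones(len(x)), eps=0.1))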
The first step in several surface reconstruction algorithms is to construct the Delaunay triangulation
or Voronoi diagram of the sample points. Edelsbrunner and Mücke [26] and Bajaj et al.
[7, 9] describe algorithms based on alpha shapes, which are subcomplexes of the Delaunay
triangulation; see also [34]. Extending earlier work on planar curve reconstruction [2, 32], Amenta
and Bern [1, 3] developed an algorithm to extract a certain manifold subcomplex of the Delaunay
triangulation, called the crust. Amenta et al. [4] simplified the crust algorithm and proved that if S
is an ε-sample of a smooth surface Σ, for some sufficiently small ε, then the crust is homeomorphic
to Σ. Boissonnat and Cazals [12] and Hiyoshi and Sugihara [35] proposed algorithms to produce
a smooth surface using natural coordinates, which are defined and computed using the Voronoi
diagram of the sample points. Further examples can be found in [5, 6, 11, 16, 21].
We have already seen that even very regular ε-samples of smooth surfaces can have super-linear
Delaunay complexity. In this section, we show that ε-samples of smooth surfaces can have Delaunay
triangulations of quadratic complexity, implying that all these surface reconstruction algorithms
take at least quadratic time in the worst case.
3.1 Sample Measure
We will analyze our lower bound constructions in terms of the sample measure of a smooth
surface Σ, which we define as follows:

    μ(Σ) = ∫_Σ dx / lfs(x)^2.

Intuitively, the sample measure of a surface describes the intrinsic difficulty of sampling that surface
for reconstruction.^2 The next lemma formalizes this intuition.
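For intuition, the sample measure can be estimated by Monte Carlo integration over the surface (a sketch of my own; the sphere is used only because its local feature size is constant, so the exact answer 4π is known).

    import numpy as np

    def sample_measure_sphere(radius, m=200000, seed=0):
        v = np.random.default_rng(seed).normal(size=(m, 3))
        x = radius * v / np.linalg.norm(v, axis=1, keepdims=True)  # uniform on the sphere
        lfs = np.full(len(x), radius)            # a general surface would evaluate lfs(x) here
        area = 4.0 * np.pi * radius ** 2
        return area * np.mean(1.0 / lfs ** 2)    # area * E[1/lfs^2] = integral of dx/lfs(x)^2

    print(sample_measure_sphere(3.0), 4.0 * np.pi)   # both ~12.566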
Lemma 3.1. For any C^2 surface Σ and any ε < 1/5, every ε-sample of Σ contains Ω(μ(Σ)/ε^2)
points.
^2 Ruppert and Seidel [39] use precisely this function, but with a different definition of local feature size, to
measure the minimum number of triangles with bounded aspect ratio required to mesh a planar straight-line graph.
Proof: Let S be an arbitrary ε-sample of Σ for some ε < 1/5, and let n = |S|.
Amenta and Bern [1, Lemma 1] observe that the local feature size function is 1-Lipschitz, that
is, |lfs(x) − lfs(y)| < |xy| for any surface points x, y ∈ Σ. Thus, for any point x ∈ Σ, we have
|xp| ≤ (ε/(1−ε)) lfs(p), where p ∈ S is the sample point closest to x.
It follows that we can cover Σ with spheres of radius (ε/(1−ε)) lfs(p) around each sample point p.
Call the intersection of Σ and the sphere around p the neighborhood of p, denoted N(p). Similar
Lipschitz arguments imply that lfs(x) ≥ ((1−2ε)/(1−ε)) lfs(p) for any point x ∈ N(p).
For any x ∈ Σ, let n_x denote the normal vector to Σ at x. Using the fact that local feature size
is at most the minimum radius of principal curvature, Amenta and Bern [1, Lemma 2] prove that
the angle ∠(n_x, n_y) is small whenever |xy| is small compared to lfs(x), for any x, y ∈ Σ. Thus,
N(p) is monotone with respect to n_p, so we can compute its area by projecting it onto a plane
normal to n_p. Since the projection fits inside a circle of radius (ε/(1−ε)) lfs(p), we have

    area(N(p)) ≤ π ((ε/(1−ε)) lfs(p))^2 / min_{x∈N(p)} cos ∠(n_x, n_p).

We can now bound the sample measure of each neighborhood as follows:^3

    μ(N(p)) ≤ area(N(p)) / (min_{x∈N(p)} lfs(x))^2 ≤ 50 ε^2.

Finally, since Σ is covered by n such neighborhoods, μ(Σ) ≤ Σ_{p∈S} μ(N(p)) ≤ 50 n ε^2, so
n = Ω(μ(Σ)/ε^2).
We say that an ε-sample is parsimonious if it contains O(μ(Σ)/ε^2) points, that is, only a
constant factor more than the minimum possible number required by Lemma 3.1.
3.2 Oversampling Is Bad
The easiest method to produce a surface sample with high Delaunay complexity is oversampling,
where some region of the surface contains many more points than necessary. In fact, the only
surface where oversampling cannot produce a quadratic-complexity Delaunay triangulation is the
sphere, even if we only consider parsimonious samples.
The idea behind our construction is to find two skew (i.e., not coplanar) lines tangent to the surface,
place points on these lines in small neighborhoods of the tangent points to create a complete
bipartite Delaunay graph, and then perturb the points onto the surface. The neighborhoods must
be sufficiently small that the perturbation does not change the Delaunay structure. Also, the
original tangent lines must be positioned so that the resulting Delaunay circumspheres are small,
Figure 6. Parsimoniously oversampling a non-spherical surface.
so that we can uniformly sample the rest of the surface without destroying the local quadratic
structure. See Figure 6.
To quantify our construction, we first establish a technical lemma about perturbations. Let
P = {p_1, ..., p_m} and Q = {q_1, ..., q_m} be sets of m ≥ 4 evenly spaced points on two skew
lines ℓ_P and ℓ_Q. Every segment p_i q_j is an edge in the Delaunay triangulation of P ∪ Q. An
r-perturbation of P ∪ Q is a set P̃ ∪ Q̃ = {p̃_1, ..., p̃_m} ∪ {q̃_1, ..., q̃_m} such that |p_i p̃_i| ≤ r
and |q_j q̃_j| ≤ r for all i and j. Continuity arguments imply that if r is sufficiently small, the
Delaunay triangulation of P̃ ∪ Q̃ also has quadratic complexity.
Let δ denote the distance between successive points in both P and Q. For each point p_i ∈ P
and q_j ∈ Q, let β_ij denote the empty ball whose boundary is tangent to ℓ_P at p_i and tangent to
ℓ_Q at q_j, and let R denote the largest radius of any bitangent sphere β_ij.
Lemma 3.2. Let P̃ ∪ Q̃ be an arbitrary r-perturbation of P ∪ Q, where r < δ^2/(9R). Every pair of
points p̃_i, q̃_j lies on an empty sphere with radius at most 2R and thus are neighbors in the
Delaunay triangulation of P̃ ∪ Q̃.
Proof: Since m ≥ 4, we observe that R > δ. The distance between any point p_i and any bitangent
sphere β_kj with k ≠ i is greater than 2δ^2/(9R). Let β′_ij be the ball concentric with β_ij
with radius δ^2/(9R) larger than β_ij. This ball contains p̃_i and q̃_j but excludes every other point in
P̃ ∪ Q̃, and thus contains an empty circumsphere of p̃_i and q̃_j with radius at most 2R.
Theorem 3.3. For any non-spherical C^2 surface Σ and any ε > 0, there is a parsimonious ε-sample
of Σ whose Delaunay triangulation has complexity Ω(n^2), where n is the number of sample points.
Proof: Let S be any parsimonious ε-sample of Σ, and let σ be a sphere, centered at a
point x ∈ Σ, with radius ρ < lfs(x)/36m^2, such that every point in S has distance at least 6ρ from
σ, and the intersection curve γ = σ ∩ Σ is not planar (i.e., not a circle). Such a sphere always exists
unless Σ is itself a sphere; for example, we could take x to be any point whose principal curvatures
are different.
Let ℓ_p and ℓ_q be skew lines tangent to γ at points p and q, respectively; these lines must exist
since γ does not lie in a plane. Let P = {p_1, ..., p_m} and Q = {q_1, ..., q_m} be sets of evenly
spaced points on ℓ_p and ℓ_q, respectively, in sufficiently small neighborhoods of p and q that every
bitangent sphere β_ij (see above) has radius less than 2ρ. Such neighborhoods exist by continuity
arguments. Let δ denote the distance between successive points in P and Q and assume without
loss of generality that mδ < ρ/2. Finally, define P̃ = {p̃_1, ..., p̃_m} and Q̃ = {q̃_1, ..., q̃_m}, where
p̃_i and q̃_j are the surface points closest to p_i and q_j, respectively.
^3 We can obtain slightly better constants using the fact that every surface point lies in the neighborhood of its
closest sample point; see [1, Lemma 3].
Without loss of generality, P ∪ Q lies in a ball σ̃ of radius 2ρ centered at x. Lipschitz
arguments imply that Σ ∩ σ̃ lies between two balls of radius lfs(p) tangent to Σ at p (or
at q). Thus, any point in P (or in Q) is at distance less than δ^2/(9R) from Σ, where R < 2ρ is the
largest bitangent radius, since ρ < lfs(x)/36m^2. Lemma 3.2 now implies that for all i, j, there is a
ball of radius less than 4ρ that contains p̃_i and q̃_j but excludes every other point in P̃ ∪ Q̃. Since
every other point in S has distance at least 6ρ away from σ, this ball also excludes every point in S.
We conclude that S ∪ P̃ ∪ Q̃ is a parsimonious ε-sample of Σ consisting of n + 2m points whose
Delaunay triangulation has complexity Ω(m^2); choosing m = Θ(n) gives the claimed bound.
The reconstruction algorithm of Amenta et al. [4] extracts a surface from a subset of the
Delaunay triangles of the sample points. Their algorithm estimates the surface normal at each
sample point p using the Voronoi diagram of the samples. The cocone at p is the complement of
a very wide double cone whose apex is p and whose axis is the estimated normal vector at p. The
algorithm extracts the Delaunay triangles whose dual Voronoi edges intersect the cocones of all
three of their vertices, and then extracts a manifold surface from those cocone triangles. Usually only
a small subset of the Delaunay triangles pass this filtering phase, but our construction shows that
there are Ω(n^2) such triangles in the worst case.
3.3 Uniform Sampling Can Still Be Bad
Unfortunately, oversampling is not the only way to obtain quadratic Delaunay triangulations. Let
S be a set of sample points on the surface Σ. We say that S is a uniform ε-sample of Σ if the
distance from any point x ∈ Σ to its second-closest sample point is between (ε/c) lfs(x) and ε lfs(x),
for some constant c > 2.^4 We easily verify that a uniform ε-sample is in fact an ε-sample. A packing
argument similar to the proof of Lemma 3.1 implies that uniform ε-samples are parsimonious.
Lemma 3.4. For any n and ε > 1/√n, there is a two-component surface Σ and an n-point uniform
ε-sample S of Σ, such that the Delaunay triangulation of S has complexity Ω(ε^2 n^2).
Proof: The surface Σ is the boundary of two sausages Σ_x and Σ_y, each of which is the Minkowski
sum of a unit sphere and a line segment. Specifically, Σ_x and Σ_y are translates of the boundary of
U ⊕ s, where U is the unit ball centered at the origin and s is a line segment of length w = 4nε^2;
the segment for Σ_x is parallel to the x-axis and the segment for Σ_y is parallel to the y-axis, with
the two sausages separated by a vertical distance d. The local feature size of every point on Σ is 1,
so any uniform ε-sample of Σ has Θ(w/ε^2) = Θ(n) points.
^4 Equivalently, following Dey et al. [22], we could define an ε-sample to be uniform if |pq|/lfs(p) ≥ ε/c for any
sample points p and q, for some constant c > 1.
Define the seams λ_x and λ_y as the maximal line segments in each sausage closest to the xy-plane.
Our uniform ε-sample S contains 2w/ε points evenly spaced along each seam: points p_i on λ_x and
points q_j on λ_y, for all integers −w/ε ≤ i, j ≤ w/ε. The Delaunay triangulation of these points has
complexity Θ(w^2/ε^2). For each pair (i, j), let β_ij be the ball whose boundary passes through p_i
and q_j and is tangent to both seams. The intersection Σ_x ∩ β_ij is a small oval, tangent to λ_x at
p_i and symmetric about the plane through p_i perpendicular to λ_x. See Figure 7(a).
Figure 7. (a) Two sausages and a sphere tangent to both seams. (b) Computing the width of the intersection oval.
We claim that this oval lies in a sufficiently thin strip around the seam of Σ_x that we can
avoid it with the other sample points in S. We compute the width of the oval by considering the
intersection of Σ_x and β_ij with the plane through p_i perpendicular to λ_x; see Figure 7(b).
Straightforward calculations imply that the width of the oval, measured along the surface of Σ_x,
is less than 2ε.
We conclude that Σ_x ∩ β_ij lies entirely within a strip of width less than 2ε centered along the
seam λ_x. A symmetric argument gives the analogous result for Σ_y ∩ β_ij. We can uniformly sample
Σ so that no sample point lies within either strip except the points we have already placed along
the seams. Each segment p_i q_j is an edge in the Delaunay triangulation of the sample, and there
are Θ(w^2/ε^2) = Θ(ε^2 n^2) such edges.
Theorem 3.5. For any n and any ε > 1/√n, there is a connected surface Σ and an n-point uniform
ε-sample S of Σ, such that the Delaunay triangulation of S has complexity Ω(ε^2 n^2).
Proof: Intuitively, we produce the surface by pushing two sausages into a spherical balloon.
These sausages create a pair of conical wedges inside the balloon whose seams lie along two skew
lines. The local feature size is small near the seams and drops quickly elsewhere, so a large fraction
of the points in any uniform sample must lie near the seams. We construct a particular sample
with points exactly along the seams that form a quadratic-complexity triangulation, similarly to
our earlier sausage construction. Our construction relies on several parameters: the radius R of the
spherical balloon, the width w and height h of the wedges, and the distance d between the seams.
Figure 8. A smooth surface with a bad uniform ε-sample, and a closeup of one of its wedges.
Each wedge is the Minkowski sum of a unit sphere, a right circular cone with height h centered
along the z-axis, and a line segment of length w parallel to one of the other coordinate axes. A
wedge can be decomposed into cylindrical, spherical, conical, and planar facets. The cylindrical and
spherical facets constitute the blade of the wedge, and the seam of the blade is the line segment
of length w that bisects the cylindrical facet. The local feature size of any point on the blade is
exactly 1, and the local feature size of any other wedge point is 1 plus its distance from the blade.
Straightforward calculations imply that the sample measure of each wedge is O(w
A rst approximation e of the surface is obtained by removing two wedges from a ball of
radius R centered at the origin. One wedge points into the ball from below; its seam is parallel to
the x-axis and is centered at the point (0; h). The other wedge points into the ball from
above; its seam is parallel to the y-axis and is centered at (0; 0; R-h). Let
the distance between the wedges. Our construction has 1 w d h, so R < 3h.
To obtain the nal smooth surface , we round o the sharp edges by rolling a ball of radius
h=4 inside e
along the wedge/balloon intersection curves. We call the resulting warped toroidal
patches the sleeves. The local feature size of any point on the sleeves or on the balloon is at least
h=4. The surface area of the sleeves is O(R 2 so the sleeves have constant sample measure.
The local feature size of other surface points changes only far from the blades and by only a small
constant factor. Thus, 1). To complete the construction, we set
Figure 8.
Finally, we construct a uniform "-sample S with (w=") sample points evenly spaced along each
seam and every other point at least " away from the seams. Setting h > 5d (and thus R > 10d)
ensures that the Delaunay spheres ij between seam points do not touch except on the blades.
By the argument in Lemma 3.4, there
are
(w
points.
3.4 Some Surfaces Are Just Evil
In this section, we describe a family of surfaces for which any parsimonious "-sample has a Delaunay
triangulation of near-quadratic complexity. First we give a nearly trivial construction of a bad
surface with several components, and then we join these components into a single connected surface
using a method similar to Theorem 3.5.
Lemma 3.6. For any n and any ε > 1/√n, there is a smooth surface Σ such that the Delaunay
triangulation of any parsimonious ε-sample of Σ has complexity Ω(ε^4 n^2), where n is the size of
the sample.
Proof: Let P be a set containing the following k points:
We easily verify that every pair of points p i and q j lie on the boundary of a ball ij with every
other point in P at least unit distance outside. (See Lemma 3.2.)
is the unit-radius sphere centered at p. Clearly,
for every point x 2 , so be an arbitrary parsimonious "-sample of , let
be the sample points on its unit
sphere.
Choose an arbitrary pair P. By construction, ij contains only points in S p i
and S q j
its center until (without loss of generality) it has no points of S p i
in its interior.
Choose some point
on the boundary of the shrunken ball, and then shrink the ball
further about p 0 until it contains no point of S q i
. The resulting ball has p 0 and some q 0 2 S q i
on
its boundary, and no points of S in its interior. Thus, p 0 and q 0 are neighbors in the Delaunay
triangulation of S. There are at
least
To create a connected surface where every good sample has a complicated Delaunay triangulation,
we add 'teeth' to our earlier balloon and wedge construction. Unfortunately, in the process, we
lose a polylogarithmic factor in the Delaunay complexity.
Theorem 3.7. For any n and any " <
1=n, there is a smooth connected surface such that
the Delaunay triangulation of any parsimonious "-sample of has
complexity
where n is the size of the sample.
Proof: As in Theorem 3.5, our surface contains two wedges, but now each wedge has a row of small
conical teeth. Our construction relies on the same parameters R; w;h of our earlier construction.
We now have additional parameter t, which is simultaneously the height of the teeth, the distance
between the teeth, and half the thickness of the 'blade' of the wedge.
Our construction starts with the (toothless) surface described in the proof of Theorem 3.5, but
using a ball of radius t instead of a unit ball to dene the wedges. We add w=t evenly-spaced teeth
along the blade of each wedge, where each tooth is the Minkowski sum of a unit ball with a right
circular cone of radius t. Each tooth is tangent to both planar facets of its wedge. To create the
nal smooth surface , we roll a ball of radius t=3 over the blade/tooth intersection curves. The
complete surface has sample measure ((w=t)(1+log t)+log h 1). Finally, we set the parameters
Let S be a parsimonious "-sample of , and let any pair of teeth,
one on each wedge, there is a sphere tangent to the ends of the teeth that has
distance
(1) from
the rest of the surface. We can expand this sphere so that it passes through one point on each
tooth and excludes the rest of the points. Thus, the Delaunay triangulation of S has complexity
3.5 Randomness Doesn't Help Much
Golin and Na [33] proved that if S is a random set of n points on the surface of a convex polytope
with a constant number of facets, then the expected complexity of the Delaunay triangulation of S
is O(n). Unfortunately, this result does not extend to nonconvex objects, even when the random
distribution of the points is proportional to the sample measure.
Theorem 3.8. For any n, there is a smooth connected surface Σ, such that the Delaunay triangulation
of n independent uniformly-distributed random points in Σ has complexity Ω(n^2/log^2 n)
with high probability.
Proof: Consider the surface Σ consisting of Θ(n/log n) unit balls evenly spaced along two skew line
segments, exactly as in the proof of Lemma 3.6, with extremely thin cylinders joining them into
a single connected surface resembling a string of beads. With high probability, a random sample
of n points contains at least one point on each ball, on the side facing the opposite segment. Thus,
with high probability, there is at least one Delaunay edge between any ball on one segment and
any ball on the other segment.
Theorem 3.9. For any n, there is a smooth connected surface Σ, such that the Delaunay triangulation
of n independent random points in Σ, distributed proportionally to the sample measure, has
complexity Ω(n^2/log^4 n) with high probability.
Proof: Let Σ be the surface used to prove Theorem 3.7, but with Θ(n/log^2 n) teeth. With high
probability, a random sample of Σ contains at least one point at the tip of each tooth.
Conclusions
We have derived new upper and lower bounds on the complexity of Delaunay triangulations under
two dierent geometric constraints: point sets with sublinear spread and good samples of smooth
surfaces. Our results imply that with very strong restrictions on the inputs, most existing surface
reconstruction algorithms are ine-cient in the worst case.
Our results suggest several open problems, the most obvious of which is to tighten the spread-
based bounds. We conjecture that our lower bounds are tight. Even the special case of dense point
sets is open.
Another natural open problem is to generalize our analysis to higher dimensions. Dey et al. [22]
describe a generalization of the cocone algorithm [4] that determines the dimension of a uniformly
sampled manifold (in the sense of Section 3.3) in a space of any fixed dimension. Results developed
in a companion paper [29] imply that for any n and any √n ≤ Δ ≤ n, there is a set of n points in IR^d
with spread Δ whose Delaunay triangulation has complexity Ω(nΔ^{⌈d/2⌉−1}). The techniques used in
Section 2.2 generalize easily to prove that any d-dimensional Delaunay triangulation has O(Δ^{d+1})
edges, but this implies a very weak bound on the overall complexity. We conjecture that the
complexity is always O(Δ^d), in particular O(n) for all dense point sets, and can only reach the
maximum Θ(n^{⌈d/2⌉}) when Δ = Θ(n).
Our bad surface examples are admittedly quite contrived, since they have areas of very high
curvature relative to their diameter. An interesting open problem is whether there are bad surfaces
with smaller 'spread', i.e., ratio between diameter and minimum local feature size. What is the
worst-case complexity of the Delaunay triangulation of good surface samples as a function of the
spread and sample measure of the surface? Is there a single surface Σ such that for any ε, there is
a uniform ε-sample with quadratic Delaunay complexity, or (as I conjecture) is the cylinder the
worst case? Even worse, is there a 'universally bad' surface Σ such that every uniform sample of Σ
has super-linear Delaunay complexity?
Our surface results imply that most Delaunay-based surface reconstruction algorithms can be
forced to take super-linear time, even for very natural surface data. It may be possible to improve
these algorithms by adding a small number of Steiner points in a preprocessing phase to reduce the
complexity of the Delaunay triangulation. In most of our bad surface examples, a single Steiner
point reduces the Delaunay complexity to O(n). Bern, Eppstein, and Gilbert [10] show that any
Delaunay triangulation can be reduced to O(n) complexity in O(n log n) time by adding O(n)
Steiner points; see also [15]. Unfortunately, the Steiner points they choose (the vertices of an
octtree) may make reconstruction impossible. In order to be usable, any new Steiner points must
either lie very close to or very far from the surface, and as our bad examples demonstrate, both
types of Steiner points may be necessary. Boissonnat and Cazals (personal communication) report
that adding a small subset of the original Voronoi vertices as Steiner points can significantly reduce
the complexity of the resulting Voronoi diagram with only minimal changes to the smooth surface
constructed by their algorithm [12].
After some of the results in this paper were announced, Dey et al. [20] developed a surface
reconstruction algorithm that does not construct the entire Delaunay triangulation. Their algorithm
runs in O(n log n) time if the sample is locally uniform, meaning (loosely) that the density of the
sample points varies smoothly over the surface, but still requires quadratic time in the worst case.
Even more recently, Ramos [38] discovered a fast algorithm to extract a locally uniform sample
from any "-sample, thereby producing a surface reconstruction algorithm that provably runs in
O(n log n) time.
Finally, are there other natural geometric conditions under which the Delaunay triangulation
provably has small complexity?
Acknowledgments. I thank Herbert Edelsbrunner for asking the (still open!) question that started
this work, Kim Whittlesey for suggesting charging Delaunay features to area, Edgar Ramos for
suggesting well-separated pair decompositions, and Tamal Dey and Edgar Ramos for sending me
preliminary copies of their papers [20, 22, 38]. Thanks also to Sariel Har-Peled, Olivier Devillers,
and Jean-Daniel Boissonnat for helpful discussions.
--R
Surface reconstruction by Voronoi
The crust and the β-skeleton
A new Voronoi-based surface reconstruction algorithm
A simple algorithm for homeomorphic surface recon- struction
The power crust
Automatic reconstruction of surface and scalar
Realistic input models for geometric algorithms.
Sampling and reconstructing manifolds using alpha-shapes
Provably good mesh generation.
Representing 2d and 3d shapes with the Delaunay triangulation.
Smooth surface reconstruction via natural neighbour interpolation of distance functions.
A decomposition of multidimensional point sets with applications to k-nearest-neighbors and n-body potential fields
Pattern matching for spatial point sets.
Selecting heavily covered points.
Dynamic skin triangulation.
Sliver exudation.
Nearest neighbor queries in metric spaces.
Improved bounds for planar k-sets and related problems
Surface reconstruction in almost linear time under locally uniform sampling.
Detecting undersampling in surface reconstruction.
Shape dimension from samples.
The expected number of k-faces of a Voronoi diagram
Incremental topological flipping works for regular triangulations.
Cutting dense point sets in half.
Arbitrarily large neighborly families of congruent symmetric convex 3-polytopes
Nice point sets can have nasty Delaunay triangulations.
Geometric pattern matching: A performance study.
On the average complexity of 3d-Voronoi diagrams of random points on convex polytopes
Surface reconstruction using alpha shapes.
Geometric matching under noise: Combinatorial bounds and algorithms.
Generating sliver-free three dimensional meshes
Smooth surface reconstruction in near linear time.
A Delaunay refinement algorithm
Tetrahedral mesh generation by Delaunay refinement
An improved bound for k-sets in three dimensions
Convex independent sets and 7-holes in restricted planar point sets
Planar Point Sets with Bounded Ratios of Distances.
Approximate center points in dense point sets.
On Fatness and Fitness: Realistic Input Models for Geometric Algorithms.
--TR
Higher-dimensional Voronoi diagrams in linear expected time
Three-dimensional alpha shapes
Provably good mesh generation
Selecting Heavily Covered Points
A decomposition of multidimensional point sets with applications to k-nearest-neighbors and n-body potential fields
Automatic reconstruction of surfaces and scalar fields from 3D scans
Realistic input models for geometric algorithms
Tetrahedral mesh generation by Delaunay refinement
A new Voronoi-based surface reconstruction algorithm
The crust and the β-Skeleton
r-regular shape reconstruction from unorganized points
Sliver exudation
Crust and anti-crust
Smoothing and cleaning up slivers
A simple algorithm for homeomorphic surface reconstruction
Smooth surface reconstruction via natural neighbour interpolation of distance functions
Voronoi-based interpolation with higher continuity
Dynamic skin triangulation
--CTR
Mordecai J. Golin , Hyeon-Suk Na, The probabilistic complexity of the Voronoi diagram of points on a polyhedron, Proceedings of the eighteenth annual symposium on Computational geometry, p.209-216, June 05-07, 2002, Barcelona, Spain
Sunghee Choi , Nina Amenta, Delaunay triangulation programs on surface data, Proceedings of the thirteenth annual ACM-SIAM symposium on Discrete algorithms, p.135-136, January 06-08, 2002, San Francisco, California
Mordecai J. Golin , Hyeon-Suk Na, On the average complexity of 3D-Voronoi diagrams of random points on convex polytopes, Computational Geometry: Theory and Applications, v.25 n.3, p.197-231, July
Leonidas Guibas , An Nguyen , Daniel Russel , Li Zhang, Collision detection for deforming necklaces, Proceedings of the eighteenth annual symposium on Computational geometry, p.33-42, June 05-07, 2002, Barcelona, Spain
Yogish Sabharwal , Nishant Sharma , Sandeep Sen, Nearest neighbors search using point location in balls with applications to approximate Voronoi decompositions, Journal of Computer and System Sciences, v.72 n.6, p.955-977, September 2006
Sariel Har-Peled , Shakhar Smorodinsky, On conflict-free coloring of points and simple regions in the plane, Proceedings of the nineteenth annual symposium on Computational geometry, June 08-10, 2003, San Diego, California, USA
Sunghee Choi, The Delaunay tetrahedralization from Delaunay triangulated surfaces, Proceedings of the eighteenth annual symposium on Computational geometry, p.145-150, June 05-07, 2002, Barcelona, Spain
Pankaj Agarwal , Leonidas Guibas , An Nguyen , Daniel Russel , Li Zhang, Collision detection for deforming necklaces, Computational Geometry: Theory and Applications, v.28 n.2-3, p.137-163, June 2004
Dominique Attali , Jean-Daniel Boissonnat , Andr Lieutier, Complexity of the delaunay triangulation of points on surfaces the smooth case, Proceedings of the nineteenth annual symposium on Computational geometry, June 08-10, 2003, San Diego, California, USA
Jeff Erickson, Local polyhedra and geometric graphs, Proceedings of the nineteenth annual symposium on Computational geometry, June 08-10, 2003, San Diego, California, USA
J. D. Boissonnat , S. Oudot, Provably good surface sampling and approximation, Proceedings of the Eurographics/ACM SIGGRAPH symposium on Geometry processing, June 23-25, 2003, Aachen, Germany
Tamal K. Dey , Joachim Giesen , Samrat Goswami , Wulue Zhao, Shape dimension and approximation from samples, Proceedings of the thirteenth annual ACM-SIAM symposium on Discrete algorithms, p.772-780, January 06-08, 2002, San Francisco, California
Stefan Funke , Edgar A. Ramos, Smooth-surface reconstruction in near-linear time, Proceedings of the thirteenth annual ACM-SIAM symposium on Discrete algorithms, p.781-790, January 06-08, 2002, San Francisco, California
Siu-Wing Cheng , Tamal K. Dey , Edgar A. Ramos , Tathagata Ray, Sampling and meshing a surface with guaranteed topology and geometry, Proceedings of the twentieth annual symposium on Computational geometry, June 08-11, 2004, Brooklyn, New York, USA
Dominique Attali , Jean-Daniel Boissonnat, A linear bound on the complexity of the delaunay triangulation of points on polyhedral surfaces, Proceedings of the seventh ACM symposium on Solid modeling and applications, June 17-21, 2002, Saarbrcken, Germany
Raphalle Chaine, A geometric convection approach of 3-D reconstruction, Proceedings of the Eurographics/ACM SIGGRAPH symposium on Geometry processing, June 23-25, 2003, Aachen, Germany
Jeff Erickson, Dense point sets have sparse Delaunay triangulations: or "but not too nasty", Proceedings of the thirteenth annual ACM-SIAM symposium on Discrete algorithms, p.125-134, January 06-08, 2002, San Francisco, California | delaunay triangulation;sample measure;sample;spread;lower bounds;surface reconstruction |
378824 | The structural cause of file size distributions. | We propose a user model that explains the shape of the distribution of file sizes in local file systems and in the World Wide Web. We examine evidence from 562 file systems, 38 web clients and 6 web servers, and find that the model is a good description of these systems. These results cast doubt on the widespread view that the distribution of file sizes is long-tailed and that long-tailed distributions are the cause of self-similarity in the Internet. | Introduction
Numerous studies have reported traffic patterns in the Internet
that show characteristics of self-similarity (see [17]
for a survey). Most proposed explanations are based on the
assumption that the distribution of transfer times in the net-work
is long-tailed [19] [18] [22] [12]. In turn, this assumption
is based on the assumption that the distribution of file
sizes is long-tailed [16] [9].
We contend that the distribution of file sizes in most systems
fits the lognormal distribution. We support this claim
with empirical evidence from a variety of systems and also
with a model of user behavior that explains why file systems
tend to have this structure.
We argue that the proposed model is a better fit for the
data than the long-tailed model, and furthermore that our
user model is more realistic than the explanations for the
alternative.
We conclude that there is insufficient evidence for the
claim that the distribution of file sizes is long-tailed. This
result creates a problem for existing explanations of self-similarity
in the Internet. We discuss the implications and
review alternatives.
1.1. What does "long-tailed" mean?
In the context of self-similarity, a long-tailed distribution
must have a hyperbolic tail; that is,

    P[X > x] ~ c x^(-α)   as x → ∞,

where X is a random variable, c is a constant, and α is a
shape parameter that determines how long-tailed the distribution
is.
Other definitions of long-tailed are used in other contexts.
This definition is appropriate for us because it describes
the asymptotic behavior that is required to produce
self-similarity [20]. By this definition, the lognormal distribution
is not long-tailed [19].
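In practice this definition is tested by looking at the complementary cdf on log-log axes: a hyperbolic tail settles down to a straight line of slope −α, while a lognormal keeps steepening. The sketch below (illustrative only, with synthetic data) estimates the local slope in two tail windows.

    import numpy as np

    def tail_slope(samples, lo_q, hi_q):
        s = np.sort(samples)
        ccdf = 1.0 - np.arange(1, len(s) + 1) / len(s)
        lo, hi = np.quantile(s, [lo_q, hi_q])
        mask = (s >= lo) & (s <= hi) & (ccdf > 0)
        return np.polyfit(np.log(s[mask]), np.log(ccdf[mask]), 1)[0]

    rng = np.random.default_rng(2)
    pareto = rng.pareto(1.2, 500000) + 1.0       # hyperbolic tail with alpha = 1.2
    lognorm = rng.lognormal(0.0, 2.0, 500000)
    for window in [(0.5, 0.9), (0.99, 0.9999)]:
        print(tail_slope(pareto, *window), tail_slope(lognorm, *window))
    # the Pareto slope stays near -1.2 in both windows; the lognormal slope steepens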
2. Distribution of file sizes
In this section we propose a model of the operations that
create new files and show that a simulation of this model
yields a distribution of file sizes that is a good match for the
distributions that appear on real file systems.
Then we show that the simulator is equivalent to a numerical
method for solving a partial differential equation
(PDE). We show that the solution of this PDE is the analytic
form of the result of the simulation. Finally, we use the
analytic form to estimate the parameters of observed distributions
and measure the goodness-of-fit of the model.
We find that the model describes real file systems well.
We conclude that the user behavior described by the model
explains the observed shape of the distribution of file sizes.
2.1. User model
Thinking about how users behave, we can list the most
common operations that create new files:
copying: The vast majority of files in most file systems
were created by copying, either by installing software
(operating system and applications) or by downloading
from the World Wide Web.
translating and filtering: Many new files are created by
translating a file from one format to another, compiling,
or by filtering an existing file.
editing: Using a text editor or word processor, users add or
remove material from existing files, sometimes replacing
the original file and sometimes creating a series of
versions.
Thus we assert that many file-creating operations can be
characterized as linear file transformations: a process reads
a file as input and generates a new file as output, where the
size of the new file depends on the size of the original.
This assertion suggests a model for the evolution of a
file system over time: assume that the system starts with a
single file with size s_0, and that users repeat the following
steps:
1. Select a file size, s, at random from the current distribution
of file sizes.
2. Choose a multiplicative factor, f , from some other distribution
3. Create a new file with size fs and add it to the system.
It is not obvious what the distribution of f should be, but
we can make some assumptions. First, we expect that the
most common operation is copying, so the mode of f should
be 1. Second, thinking about filtering and translations, we
expect that it should be about as common to double the size
of a file or halve it; in other words, we expect the distribution
of f to be symmetric on a log axis.
In Section 2.4 we show that, due to the Central Limit
Theorem, the shape of this distribution has little effect on
the shape of the resulting distribution of sizes. For now we
will choose a distribution of f that is lognormal with the
mode at 1 and an unspecified variance.
There are, then, two parameters in this model: the size of
the original file, s_0, and the standard deviation of the distribution
of f, which we call sigma. The parameter s_0 determines
the mode of the final distribution; sigma controls the dispersion.
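As an illustration of this process, the following sketch (ours, not the author's original simulator) implements the three steps with a lognormal factor distribution; the starting size s0 = 4096 and sigma_f = 0.5 are arbitrary illustrative values.

```python
import numpy as np

def simulate_file_sizes(n_files=100_000, s0=4096.0, sigma_f=0.5, seed=1):
    """Simulate the user model: start with one file of size s0, then repeatedly
    pick an existing file at random, multiply its size by a random factor f,
    and add the new file to the system."""
    rng = np.random.default_rng(seed)
    sizes = [s0]
    for _ in range(n_files - 1):
        s = sizes[rng.integers(len(sizes))]            # step 1: pick a size at random
        # step 2: lognormal factor; mean=sigma_f**2 puts the mode of f at 1
        f = rng.lognormal(mean=sigma_f**2, sigma=sigma_f)
        sizes.append(f * s)                            # step 3: add the new file
    return np.array(sizes)

sizes = simulate_file_sizes()
x = np.log(sizes)
print("mean log-size: %.2f   std log-size: %.2f" % (x.mean(), x.std()))
```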
Next we see if this model describes real systems. In
November 1993 Gordon Irlam conducted a survey of file
systems. He posted a message on Usenet asking UNIX
system administrators to run a script on their machines and
mail in the results. The script uses the find utility to traverse
the file system and report a histogram of the sizes of
the files. The results are available on the web [15].
In June 2000 we ran this script on one of our worksta-
tions, a Pentium running Red Hat Linux 6.0. There were
89937 files on the system, including the author's home di-
rectory, a set of web pages, the operating system and a few
applications.
[Figure 1: cdf of file sizes on a UNIX workstation, showing the actual cdf and the cdf from simulation.]
Figure
1 shows the cumulative distribution function (cdf)
of the sizes of these files, plotted on a log x-axis. 877 files
with length 0 are omitted.
The figure also shows the cdf of file sizes generated by
the simulation, using parameters that were tuned by hand to
yield the best visual fit. Clearly the model is a good match
for the data, suggesting that this model is descriptive of real
systems.
2.2. The analytic model
There are two problems with this model so far: first, it
provides no insight into the functional form of these distri-
butions, if there is one; second, it does not provide a way to
estimate the model parameters.
A solution to both problems comes from the observation
that the simulator is effectively computing a numerical solution
to a partial differential equation (PDE). By solving
the PDE analytically, we can find the functional form of the
distribution.
The PDE is the diffusion, or heat, equation
u_t = k u_xx,
where u(x, t) is the probability density function of file sizes
as a function of x, which is the logarithm of the file size, and
t, which represents time since the file system was created.
k is a constant that controls the rate of diffusion.
The range of x is from 0 to infinity. The initial condition in time
is the delta function with a peak at the logarithm of the initial file size,
x_0 = log s_0. It is not obvious what the boundary condition
at x = 0 should be, since the model does not include a
meaningful description of the behavior of small files. Fortunately,
the boundary behavior is usually irrelevant, so we
can choose either u(0, t) = 0 or u_x(0, t) = 0.
In either case the solution is approximately
u(x, t) ~ (1 / sqrt(2 pi sigma^2)) exp( -(x - x_0)^2 / (2 sigma^2) ),   (2)
where sigma^2 = 2kt.
[Figure 2: cdf of file sizes on a machine that participated in the Irlam survey, with the fitted lognormal model.]
In other words, the distribution under a log transform is
Gaussian with mean mu = x_0 and a standard deviation sigma = sqrt(2kt) that increases
with k, the rate of diffusion, and t, time. The model of user
behavior provides no way to estimate k, or even to map t
onto real time, so we treat sigma as a free parameter.
This observation leads to an easy way to estimate mu and
sigma. Given a list of file sizes s_i, we can use the mean of
x_i = log s_i as an estimate of mu, and the standard deviation
of x_i as an estimate of sigma. In the next section we use this
technique to fit analytic models to a collection of datasets.
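The estimation procedure just described is simply a Gaussian fit to the log-transformed sizes; a minimal sketch, with hypothetical function names:

```python
import numpy as np
from scipy.stats import norm

def fit_lognormal(sizes):
    """Estimate (mu, sigma) of the lognormal model from observed file sizes,
    ignoring zero-length files, as in the text."""
    x = np.log(np.asarray([s for s in sizes if s > 0], dtype=float))
    return x.mean(), x.std()

def lognormal_cdf(s, mu, sigma):
    """cdf of the fitted model evaluated at file size s (bytes)."""
    return norm.cdf((np.log(s) - mu) / sigma)
```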
2.3. Is this model accurate?
Irlam's survey provides data from 656 machines in a variety
of locations and environments. Of these, we discarded
43 because they contained no files with non-zero length,
and an additional 52 because they contained fewer than 100
files.
For the remaining 561 file systems, we estimated parameters
and compared the analytic distributions with the empirical
distributions. As a goodness-of-fit metric we used
the Kolmogorov-Smirnov statistic (KS), which is the largest
vertical distance between the fitted and actual cdfs, measured
in percentiles. The KS statistic is not affected by the
log transform of the x-axis.
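For reference, the KS statistic used here can be computed directly from the empirical cdf and the fitted lognormal cdf; a sketch under the same conventions as above (sizes in bytes, distances reported in percentiles):

```python
import numpy as np
from scipy.stats import norm

def ks_percentiles(sizes, mu, sigma):
    """Largest vertical distance between the empirical cdf of the sizes and the
    fitted lognormal cdf, expressed in percentiles (0-100)."""
    x = np.sort(np.log(np.asarray(sizes, dtype=float)))
    n = len(x)
    emp_hi = np.arange(1, n + 1) / n       # empirical cdf just after each point
    emp_lo = np.arange(0, n) / n           # empirical cdf just before each point
    model = norm.cdf((x - mu) / sigma)
    return 100.0 * max(np.max(np.abs(model - emp_hi)),
                       np.max(np.abs(model - emp_lo)))
```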
In the best case KS is 1.4 percentiles. For comparison,
the fitted model in Figure 1 has a KS of 2.7. The median
value of KS is 8.7, indicating that the typical system fits the
model well. In the worst case KS is 40, which indicates that
the model is not a good description of all systems.
To give the reader a sense for this goodness-of-fit, we
present the dataset with the median value of KS in Figure 2.
The maximum deviation from the model occurs below 64 bytes, where there appears to be a second mode.
This kind of deviation is common, but in general the fitted
models describe the tail of the cdf well.
[Figure 3: A file system simulation using an empirical distribution of multiplicative factors.]
Irlam's survey also includes data from one DOS ma-
chine. For this dataset the KS statistic is 5, which indicates
that this model fits at least one non-UNIX file system.
We conclude that this model is a good description of
many real systems, with the qualification that for some purposes
it might be more accurate to extend the model by including
a mixture of lognormal modes.
2.4. Is this model realistic?
The assumptions the model makes about user behavior
are
1. The file system starts with a single file.
2. New files are always created by processing an existing
file in some way, for example by copying, translating
or filtering.
3. The size of the derivative file depends on the size of
the original file.
The first assumption is seldom true. Usually a new file
system is populated with a copy of an existing system or
part of one. But in that case the new file system fits the
model as well as the old, and evolves in time the same way.
The second assumption is not literally true; there are
many other ways files might be created. For example, a
new file might be the concatenation of two or more existing
files, or the size of a new file might not depend on an
existing file at all.
The third assumption is based on the intuition that many
file operations are linear; that is, they traverse the input file
once and generate output that is proportional to the size of
the input. Again, this is not always true.
In addition, the simulation assumes that the distribution
of multiplicative factors is lognormal with mean 1, but we
can relax this assumption. As long as the logarithms of the
multiplicative factors have two finite moments, the distribution
of file sizes converges to a lognormal distribution, due
to the the Central Limit Theorem.
To derive this result, recall that as the simulation pro-
ceeds, the size of the nth file added to the system depends
on the size of one of its predecessors, and the size of the
predecessor depends on the size of one of its predecessors,
and so on. Thus, the size of the nth file is
s_n = s_0 f_1 f_2 ... f_m,   (3)
where m is the number of predecessors for the nth file and
the f_i are the random multiplicative factors. Taking the logarithm
of this equation yields
log(s_n) = log(s_0) + sum_{i=1}^{m} log(f_i).   (4)
In log space, the distance of the nth file from the mean is
the sum of m random variables, each of which is roughly
normally-distributed. The Central Limit Theorem says that
as m goes to infinity, the distribution of this sum converges
to normal, provided that the logarithms of the f i are independent
and have two finite moments. Therefore the distribution
of the sum (Equation 4) is normal, and the distribution
of the product (Equation 3) is lognormal.
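This argument is easy to check numerically: even a markedly non-normal log-factor distribution yields approximately lognormal sizes after many multiplications. The sketch below (ours) uses a uniform log-factor distribution as an arbitrary stand-in for the empirical ratios discussed next, and a fixed number of predecessors m, which is a simplification of the model.

```python
import numpy as np
from scipy.stats import normaltest

rng = np.random.default_rng(0)
m = 200                                    # number of predecessors per file
n = 5000                                   # number of simulated files
# log-factors from a (non-normal) uniform distribution with two finite moments
log_f = rng.uniform(-1.0, 1.2, size=(n, m))
log_sizes = np.log(4096.0) + log_f.sum(axis=1)   # Equation (4): sum of log-factors
print(normaltest(log_sizes))               # large p-value: consistent with a normal
```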
To demonstrate this effect with a realistic workload, we
collected a sample of multiplicative factors from a single
user (the author) and a single application (emacs). When
emacs updates a file, it creates a backup file with the same
name as the original, postpended with a tilde. In the author's
file system there are 989 pairs of modified and original files.
For each pair we computed the ratio of the current size to
the backup size.
The distribution of ratios is roughly symmetric in log
space, although there is a small skew toward larger values
(it is more common for files to grow than shrink). Also, the
distribution is significantly more leptokurtotic than a Gaussian
(more values near the mode).
These deviations have little effect on the ultimate shape
of the size distribution. Figure 3 shows a simulation starting
with a single file and using the observed ratios as multiplicative
factors. The black curves show the cdf of file sizes after
10, 1000, and 100000 files were created. The dashed gray
line shows a lognormal model fitted to the final curve. The
simulated distribution converges to a lognormal.
We conclude that, even if the user model is not entirely
realistic, it is robust to violations of the assumptions.
3. The Pareto model of file sizes
Several prior studies have looked at distribution of file
sizes, in both local file systems and the World Wide Web.
The consensus of these reports is that the tail of the distribution
is well-described by the Pareto distribution.
To explain this observation, Carlson and Doyle propose
a physical model based on Highly Optimized Tolerance
(HOT), in which web designers, trying to minimize down-load
times, divide the available information into files such
that the distribution of file sizes obeys a power law [6] [23].
In this section we review prior studies and compare the
Pareto model to the lognormal model. We find that the log-normal
model is a better description of these datasets than
the Pareto model, and conclude that there is little evidence
that the distribution of file sizes is long-tailed.
Furthermore, we believe that the diffusion model is more
realistic than the HOT model. The HOT model is based on
the assumption that the material available on a web page
is "a single contiguous object" that the website designers
are free to divide into files, and that they do so such that
the the files with the highest hit rates are the smallest. We
believe that constraints imposed by the content determine,
to a large extent, how material on a web page is divided into
files. Also, while web designers give some consideration
to minimizing file sizes and transfer times, there are other
objectives that have a stronger effect on the structure of web
pages.
Finally, a major limitation of the HOT model is that it
does not explain why local file systems exhibit the same
size distributions as web pages, when local file systems are
presumably not subject to the kind of optimization Carlson
and Doyle hypothesize.
3.1. Evidence for long tails
The cdf of the Pareto distribution is
F(x) = P[X <= x] = 1 - (k/x)^{alpha},  x >= k.
The parameter alpha determines the shape of the distribution
and the thickness of the tail; the parameter k determines the
lower bound and the location of the distribution.
This distribution satisfies Equation 1, so the Pareto distribution
is considered long-tailed, as is any distribution that
is asymptotic to a Pareto distribution.
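For the comparisons in the rest of this section, the two candidate tail models can be written side by side; a small sketch with the parameter names used above (k and alpha for the Pareto model, mu and sigma for the lognormal model):

```python
import numpy as np
from scipy.stats import norm

def pareto_ccdf(x, k, alpha):
    """P{X > x} for a Pareto distribution with lower bound k and shape alpha."""
    x = np.asarray(x, dtype=float)
    return np.where(x < k, 1.0, (k / x) ** alpha)

def lognormal_ccdf(x, mu, sigma):
    """P{X > x} for the lognormal model, for plotting on the same log-log axes."""
    return norm.sf((np.log(np.asarray(x, dtype=float)) - mu) / sigma)
```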
Unfortunately, there is no rigorous way of identifying a
long-tailed distribution based on a sample. The most common
method is to plot the complementary cumulative distribution
function (ccdf) on a log-log scale. If the distribution
is long-tailed, we expect to see a straight line or a curve that
is asymptotic to a straight line.
But empirical ccdfs are often misleading. First, there
are other distributions, including the lognormal, that appear
straight up to a point and then deviate, dropping off with increasing
steepness.
[Figure 4: ccdf of file sizes from the Crovella dataset, with lognormal and Pareto models.]
Thus, a ccdf that appears straight does not necessarily indicate a long tail. Also, log-log axes amplify
the extreme tail so that only a few files tend to dominate
the figure. What appears to be a significant feature in
the extreme tail might be the result of just a few files.
One alternative is to use Q-Q plots or P-P plots to compare
the measured distributions to potential models. For the
datasets in this section, Q-Q plots are not useful because
they are dominated by the few largest values. P-P plots are
useful, but we did not find that P-P plots produced additional
useful information for these data.
Crovella and Taqqu have proposed an additional test
based on the aggregate behavior of samples [8]. These techniques
are useful for estimating the parameter of a synthetic
Pareto sample, but they are not able to distinguish a Pareto
sample from a lognormal sample with similar mean and
variance (see their Figures 5 and 8).
In the following sections we use ccdfs to evaluate the
evidence that size distributions are long-tailed.
3.2. File sizes on the web, client's view
Crovella et al. presented one of the first measurements
of file sizes that appeared to be long-tailed. In 1995 they instrumented
web browsers in computer labs at Boston University
to collect traces of the files accessed [10] [7] [9].
From these traces they extracted the unique file names and
plotted the ccdf of their sizes.
We obtained these traces from their web pages and performed
the same analysis, yielding 36208 unique files. Figure
4 shows the resulting ccdf along with a Pareto model
and a lognormal model. The slope of the Pareto model is
1.05, the value reported by Crovella et al. The lower bound,
k, is 3800, which we chose to be the best match for the ccdf,
and visually similar to Figure 8 in [7].
The Pareto model is a good fit for file sizes between 4KB
and 4MB, which includes about 25% of the files. Based on
this fit, Crovella et al. argue that this distribution is long-
tailed.
[Figure 5: ccdf of session sizes from the Feldmann dataset (reproduced from [13]).]
At the same time, they acknowledge two disturbing features: the apparent curvature of the ccdf and its divergence
from the model for files larger than 4MB.
The lognormal model in the figure has parameters mu = 11
and sigma = 3.3. This model is a better fit for the data
over most of the range of values, including the extreme
tail. Also, it accurately captures the apparent tail behav-
ior, which drops off with increasing steepness rather than
continuing in a straight line. We conclude that this dataset
provides greater support for the lognormal model than for
the Pareto model.
Feldmann et al. argue that the distribution of Web session
sizes is long-tailed, based on data they collected from
an ISP [13]. They use the number of bytes transferred during
each modem connection as a proxy for bytes transferred
during a Web session. The evidence they present is the ccdf
in their Figure 3, reproduced here as Figure 5. They do
not report what criteria they use to identify the distribution
as long-tailed, other than "a crude estimate of the slope of
the corresponding linear regions." Since the ccdf is curved
throughout, it is not clear what they are referring to. In our
opinion, this distribution exhibits the characteristic tail behavior
of a lognormal distribution. We conclude that this
dataset provides no support for the Pareto model.
3.3. File sizes on the web, server's view
Between October 1994 and August 1995, Arlitt and Williamson
collected traces from web servers at the
University of Waterloo, the University of Calgary, the University
of Saskatchewan, NASA's Kennedy Space Center,
ClarkNet (an ISP) and the National Center for Supercomputing
Applications (NCSA).
For each server they identified the set of unique file
names and examined the distribution of their sizes. They
[Figure 6: ccdfs for the datasets collected by Arlitt and Williamson (NASA, ClarkNet, Calgary, and Saskatchewan), each with lognormal and Pareto models.]
report that these data sets match the Pareto model, and they
give Pareto parameters for each dataset, but they do not
present evidence that these models fit the data.
Four of these traces are in the Internet Traffic Archive
(http://ita.ee.lbl.gov). We processed the traces
by extracting each successful file transfer and recording the
name and size of the file.
To derive a set of distinct files, we treated as distinct
any log entries that had the same name but different sizes,
on the assumption that they represent successive versions.
Whether we use this definition of "distinct" or the alterna-
tive, we found a number of distinct files that is significantly
different from the numbers in [3], so our treatment of this
dataset may not be identical to theirs. Nevertheless, our
ccdfs are visually similar to theirs.
We estimated the Pareto parameter alpha for each dataset using
aest [8]. The resulting range of values of alpha is from 0.97
to 1.02. We estimated the lower bounds by hand to yield
the best visual fit for the ccdf. We estimated lognormal parameters
for each dataset using the method in Section 2.2.
Figure
6 shows these models along with the actual ccdfs.
The results are difficult to characterize. For the NASA
dataset the lognormal model is clearly better. For the
Saskatchewan dataset the Pareto model is clearly better. For
the other two the ccdf lies closer to the Pareto model, but
both curves show the characteristic behavior of the lognormal
distribution, increasing steepness.
We believe that this curvature is indicative of non-long-
tailed distributions. The claim that a distribution is long-tailed
is a statement about how we expect it to behave as
file sizes go to infinity. In these datasets, the increasing
steepness of the tails does not lead us to expect the tail to
continue along the line of the Pareto model.
Although the Saskatchewan dataset provides some support
for the Pareto model, overall these datasets provide little
evidence that the distribution of file sizes is long-tailed.
Arlitt and Jin collected access logs from the 1998 World
Cup Web site [2] and reported the distribution of file sizes
for the 20728 "unique files that were requested and successfully
transmitted at least once in the access log." The raw
logs are available in the Internet Traffic Archive, but we
(gratefully) obtained the list of file sizes directly from the
authors.
They report that the bulk of the distribution is roughly
lognormal, but that the tail of the distribution "does exhibit
some linear behavior" on log-log axes. They estimate a
Pareto model for the tail, with alpha = 1.37. Again, we chose
a lower bound by hand to match the ccdf and estimated log-normal
parameters analytically.
Figure
7 shows the ccdf along with the two models. For
files smaller than 128KB, the lognormal model is a slightly
better fit. For larger files, neither model describes the data
well.
[Figure 7: ccdf of file sizes from the World Cup dataset, with lognormal and Pareto models.]
Again, this dataset gives us little reason to expect the
distribution to continue along the line of the Pareto model.
Except for a single 64 MB file, the extreme tail is dropping
off very steeply, which is consistent with a non-long-tailed
distribution. We conclude that this dataset does not support
the claim that the distribution of file sizes is long-tailed.
Arlitt, Friedrich and Jin did a similar analysis of the files transferred by the Web
proxy server of an ISP [1]. They plot the cdf of file sizes
and show that a lognormal model fits it very well. They
also show the ccdf on log-log axes and claim that "since
this distribution does exhibit linear behavior in the upper
region we conclude that it is indeed heavy-tailed."
Those figures are reproduced here in Figure 8. We do
not see any sign of linear behavior in the ccdf. In fact, it
clearly exhibits increasing steepness throughout, which is
the characteristic behavior of a non-long-tailed distribution.
We conclude that this dataset provides strong support for
the lognormal model and no support for the Pareto model.
3.4. Hybrid models of file sizes
Both Barford et al. and Arlitt et al. have proposed hybrid
models that combine a lognormal distribution with a Pareto
tail [5] [4] [1] [2].
Figure
9 is a reproduction from [4], showing the size distribution
of 66998 unique files downloaded by a set of Web
browsers at Boston University in 1998 (the W98 dataset),
along with a hybrid model.
The hybrid model fits both the bulk of the distribution
and the tail behavior, but it is not clear how much of the
improvement is due to the addition of two free parameters.
Furthermore, the extreme tail still appears to be diverging
with increasing steepness from the model.
If we are willing to use a model with more parameters,
it is natural to extend the lognormal model to include more than one mode.
[Figure 8: Distribution of file sizes from a Web proxy server: cdf with lognormal model and ccdf with Pareto model (reproduced from [1]).]
Figure 10 shows a two-mode lognormal
model chosen to fit the W98 dataset. This model is a better
fit for the data than the hybrid model.
For this example we performed an automated search for
the set of parameters (the mean and variance of each mode
and the percentage of files from the first mode) that minimized
the KS statistic. There are more rigorous techniques
for estimating multimodal normal distributions, but they are
not necessary for our purpose here, which is to find a log-normal
model that fits the data well.
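A minimal version of this parameter search might look like the following; the use of scipy's Nelder-Mead minimizer is our choice for illustration, not necessarily the method used by the author.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

def two_mode_cdf(x, p, mu1, s1, mu2, s2):
    """cdf of a two-mode lognormal mixture evaluated at log-sizes x."""
    return p * norm.cdf((x - mu1) / s1) + (1 - p) * norm.cdf((x - mu2) / s2)

def fit_two_mode(sizes):
    """Search for mixture parameters that minimize the KS statistic."""
    x = np.sort(np.log(np.asarray(sizes, dtype=float)))
    emp = np.arange(1, len(x) + 1) / len(x)

    def ks(theta):
        p, mu1, s1, mu2, s2 = theta
        if not (0 < p < 1 and s1 > 0 and s2 > 0):
            return 1.0                      # penalize infeasible parameters
        return np.max(np.abs(two_mode_cdf(x, p, mu1, s1, mu2, s2) - emp))

    start = np.array([0.5, x.mean() - x.std(), x.std(),
                      x.mean() + x.std(), x.std()])
    return minimize(ks, start, method="Nelder-Mead")
```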
For the other datasets in this paper there are two-mode
lognormal models that describe the tail behavior better than
either the Pareto or the hybrid model. For example, Figure
11 shows a lognormal model fitted to the problematic
Saskatchewan dataset. It captures even the extreme tail behavior
accurately.
We conclude that long-tailed models are not necessary to
describe the observed distributions, and therefore that these
datasets do not provide evidence that the distribution of file
sizes is long-tailed.
3.5. Aggregation
[Figure 9: Distribution of unique file sizes from Web browser logs, and a hybrid lognormal-Pareto model (reproduced from [4]).]
Looking at file sizes on the Internet, we are seeing the
mixture of file sizes from a large number of file systems. If
the distribution of file sizes on local systems is really lognormal,
then it is natural to ask what happens when we aggregate
a number of systems. To address this question, we
went back to the Irlam survey and assembled all the data
into an aggregate.
In total there are 6,156,581 files with 161,583 different
sizes. The size of this sample allows us to examine the extreme
tail of the distribution.
Figure
12 shows the ccdf of these file sizes along with
lognormal and Pareto models chosen by hand to be the
best fit. The lognormal model is a better fit. Throughout
the range, the curve displays the characteristic curvature of
the lognormal distribution. This dataset clearly does not
demonstrate the definitive behavior of a long-tailed distribution.
A bigger data set allows us to see even more of the tail.
In 1998 Douceur and Bolosky collected the sizes of more
than 140 million files from 10568 file systems on Windows
machines at Microsoft Corporation [11]. They report that
the bulk of the distribution fits a lognormal distribution, and
they propose a two-mode lognormal model for the tail, but
they also suggest that the tail fits a Pareto distribution.
Figure
13 shows the ccdf of file sizes from this dataset
along with three models we chose to fit the tail: a lognormal
model, a Pareto model and a two-mode lognormal model.
[Figure 10: ccdf of file sizes from Web browser logs (W98 dataset) and a two-mode lognormal model.]
[Figure 11: ccdf of file sizes from the Saskatchewan dataset and a two-mode lognormal model.]
Again, the tail of the distribution displays the characteristic
behavior of a non-long-tailed distribution. The simple
lognormal model captures this behavior well, although it is
offset from the data. The two-mode lognormal model fits
the entire distribution well.
Based on these datasets, we conclude that the lognormal
model is sufficient to describe the aggregate distributions
that result from combining large numbers of file systems.
4. Self-similar network traffic
Most current explanations of self-similarity in the Internet
are based on the assumption that the distribution of file
sizes is long-tailed. We have argued that the evidence for
this assumption is weak, and that there is considerable evidence
that the distribution is actually lognormal and therefore
not long-tailed.
In this section we review existing models of self-similar
traffic and discuss the implications of our findings.
[Figure 12: ccdf of file sizes in the Irlam survey, with lognormal and Pareto models.]
[Figure 13: ccdf of file sizes in the Microsoft survey, with lognormal, Pareto, and two-mode lognormal models.]
One explanatory model is an M/G/1 queue in which
network transfers are customers with Poisson arrivals and
the network is an infinite-server system [19] [18]. In this
model, if the distribution of service times is long-tailed then
the number of customers in the system is an asymptotically
self-similar process.
Willinger et al. propose an alternative that models users
as ON/OFF sources in which ON periods correspond to net-work
transmissions and OFF periods correspond to inactivity
[22]. If the distribution of the lengths of these periods is
long-tailed, then as the number of sources goes to infinity,
the superposition of sources yields an aggregate traffic process
that is fractional Gaussian noise, which is self-similar
at all time scales.
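To make the ON/OFF construction concrete, the following sketch (ours) superposes many ON/OFF sources and returns the aggregate activity process whose self-similarity is at issue; Pareto ON and OFF periods are one possible choice, and replacing them with lognormal periods allows the two assumptions to be compared.

```python
import numpy as np

def onoff_aggregate(n_sources=500, horizon=100_000, alpha=1.4, seed=0):
    """Superpose ON/OFF sources whose ON and OFF periods are Pareto(alpha);
    returns the number of active sources in each unit time slot."""
    rng = np.random.default_rng(seed)
    active = np.zeros(horizon)
    for _ in range(n_sources):
        t = rng.uniform(0, 100)                 # random phase
        on = rng.integers(2) == 1               # random initial state
        while t < horizon:
            # rng.pareto(alpha) + 1 is Pareto-distributed with minimum 1
            length = rng.pareto(alpha) + 1.0
            if on:
                lo, hi = int(t), min(int(t + length), horizon)
                active[lo:hi] += 1
            t += length
            on = not on
    return active
```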
Two subsequent papers have extended this model to include
a realistic network topology, bounded network capacity
and feedback due to the congestion control mechanisms
of TCP [16] [12]. In both cases, the more realistic models
yielded qualitatively similar results.
All of these models are based on the assumption that the
distribution of file transfer times is long-tailed. There is
broad consensus that this assumption is true, but there is
little direct evidence for it.
Crovella et al. have made the indirect argument that the
distribution of transfer times depends on the distribution of
available file sizes, and that the distribution of file sizes is
long-tailed [9]. However, we have argued that the evidence
for long-tailed file sizes is weak.
If the distribution of file sizes is not long-tailed, then
these explanations need to be revised. There are several
possibilities:
Even if file sizes are not long-tailed, transfer times
might be. The performance of wide-area networks is
highly variable in time; it is possible that this variability
causes long-tailed transfer times.
Even if the length of individual transfers is not long-
tailed, the lengths of bursts might be. From the net-
work's point of view there is little difference between
a TCP timeout during a transfer and the beginning of a
new transfer [19].
The distribution of interarrival times might be long-
tailed. There is some evidence for this possibility, but
also evidence to the contrary [19] [3] [5] [13].
Two recent papers have argued that the dynamics of
congestion control are sufficient to produce self-similarity
in network traffic, and that it is not necessary
to assume that size or interarrival distributions are
long-tailed [21] [14].
A final possibility is that network traffic is not truly
self-similar. In the M/G/1 model, if the distribution
of service times is lognormal, the resulting count process
is not self-similar and not long-range dependent
[19], but over a wide range of time scales it may be
statistically indistinguishable from a truly self-similar
process.
As ongoing work we are investigating these possibili-
ties, trying to explain why Internet traffic appears to be self-similar.
5. Conclusions
The distribution of file sizes in local file systems and
on the World Wide Web is approximately lognormal
or a mixture of lognormals. We have proposed a user
model that explains why these distributions have this
shape.
The lognormal model describes the tail behavior of observed
distributions as well as or better than the Pareto
model, which implies that a long-tailed model of file
sizes is unnecessary.
In our review of published observations we did not find
compelling evidence that the distribution of file sizes is
long-tailed.
Since many current explanations of self-similarity in
the Internet are based on the assumption of long-tailed
file sizes, these explanations may need to be revised.
Acknowledgments
Many thanks to Mark Crovella (Boston University) and
Carey Williamson (University of Saskatchewan) for making
their datasets available on the Web; Martin Arlitt (Hewlett-
Packard) for providing processed data from the datasets he
collected; Gordon Irlam for his survey of file sizes, and John
Douceur (Microsoft Research) for sending me the Microsoft
dataset. Thanks also to Kim Claffy and Andre Broido
(CAIDA), Mark Crovella, Rich Wolski (University of Ten-
nessee) and Lewis Rothberg (University of Rochester) for
reading drafts of this paper and making valuable comments.
Finally, thanks to the reviewers from SIGMETRICS and
MASCOTS for their comments.
--R
Workload characterization of a Web proxy in a cable modem environment.
Workload characterization of the
Web server workload characterization: the search for invariants.
Changes in Web client access patterns: Characteristics and caching implications.
Generating representative Web workloads for network and server performance evalua- tion
Highly optimized tolerance: a mechanism for power laws in designed systems.
Estimating the heavy tail index from scaling properties.
Characteristics of WWW client-based traces
A large-scale study of file-system contents
Dynamics of IP traffic: a study of the role of variability and the impact of control.
The changing nature of network traffic: Scaling phenom- ena
The adverse impact of the TCP congestion-control mechanism in heterogeneous computing systems
Unix file size survey, 1993
On the relationship between file sizes
M/G/1 input process: a versatile class of models for network traffic.
Proof of a fundamental result in self-similar traffic modeling
The chaotic nature of TCP congestion control.
Heavy tails
--TR
area traffic
through high-variability
Self-similarity in World Wide Web traffic
Heavy-tailed probability distributions in the World Wide Web
Dynamics of IP traffic
--CTR
Tony Field , Uli Harder , Peter Harrison, Network traffic behaviour in switched Ethernet systems, Performance Evaluation, v.58 n.2+3, p.243-260, November 2004
Cheng , Wei Song , Weihua Zhuang , Alberto Leon-Garcia , Rose Qingyang Hu, Efficient resource allocation for policy-based wireless/wireline interworking, Mobile Networks and Applications, v.11 n.5, p.661-679, October 2006 | file sizes;long-tailed distributions;self-similarity |
379527 | Performance and fluid simulations of a novel shared buffer management system. | We consider a switching system that has multiple ports that share a common buffer, in which there is a FIFO logical queue for each port. Each port may support a large number of flows or connections, which are approximately homogeneous in their statistical characteristics, with common QoS requirements in cell loss and maximum delay. Heterogeneity may exist across ports. Our first contribution is a buffer management scheme based on Buffer Admission Control, which is integrated with Connection Admission Control at the switch. At the same time, this scheme is fair, efficient, and robust in sharing the buffer resources across ports. Our scheme is based on the resource-sharing technique of Virtual Partitioning. Our second major contribution is to advance the practice of discrete-event fluid simulations. Such simulations are approximations to cell-level simulations and offer orders of magnitude speed-up. A third contribution of the paper is the formulation and solution of a problem of optimal allocation of bandwidth and buffers to each port having specific delay bounds, in a lossless multiplexing framework. Finally, we report on extensive simulation results. The scheme is found to be effective, efficient, and robust. | Introduction
This paper considers a model of a switching node shown
in
Figure
1, which has multiple (N) ports with bandwidths C_1, ..., C_N
that share a common buffer of size B. In the
shared buffer there is a FIFO logical queue implemented as
a linked-list for each port. Each port may support a large
number of flows or connections, which we envision to be
more or less homogeneous in their statistical characteris-
tics, with heterogeneity existing across ports. This view
is compatible with the QoS requirements, which are that
there is a common delay bound on all connections through
a port and that the bound on the probability of either violating
the delay bound or losing cells is specific to each port.
While shared buffer management has been of interest for
some time, see for instance [IRL78], [FGO83], [HBO93],
[CHA93], [GCG95], [WMA95], [CHA96], the approach
taken here is rather different. Our work shares objectives
with recent work on Virtual Queueing (see [SWR97] and
references therein), in which per-VC behavior is emulated
on a shared FIFO buffer. However, the algorithms are different
and further work is necessary to make the linkages
explicit.
This work has three main contributions. First, we propose
a shared buffer management scheme based on Buffer
Admission Control (BAC), which is integrated with Connection
Admission Control (CAC) at the switch and is
at the same time fair, efficient and robust in sharing the
buffer resource across multiple ports. Recent work on
CAC for single port systems [EMW95], [EMI97], [LZT97],
[LZI97], [RRR97] has used crucially the idea of trading
bandwidth and buffer to arrive at designs which allocate
specific amounts of bandwidth and buffer per connection.
If this approach is to be followed for multiport, shared
buffer systems, it is imperative that CAC and BAC work
in tandem. The dual objectives of satisfying QoS, which
requires protecting and isolating connections, and extracting
multiplexing gains from sharing the buffer resources,
need to be balanced in shared memory architectures. With
this in mind we base our scheme on Virtual Partitioning
[MZI96], [BMI97], [MZI97], which is a technique for
fair, efficient and robust resource sharing: at each point in time, ports
that are using less than their nominal allocations, which
may be called "underloaded", are accorded higher priority;
conversely, ports that are exceeding their nominal alloca-
tions, which may be called "overloaded", are given lower
priority; finally, the priority mechanism is implemented by
a dynamic adaptation of the classical technique of trunk
reservation [AKI84], [KEY90], [REI91], wherein cells for a
port are admitted to the buffer only if the free capacity in
the buffer exceeds a reservation parameter, which is specific
to the port's loading status. The role of the reserved
capacity is to protect underloaded ports and the dynamic
reservation mechanism forces traffic to overloaded ports to
back-off in the event that underloaded ports need to claim
their allocated share. Note that the push-out mechanism is
not explicitly used in this scheme for ease of implementa-
tion, but it may easily be added. Also note that the state
information required to implement BAC for port i consists
of just the current queue for port i and the current total
buffer occupancy.
Traffic classes/ports are differentiated by QoS and also
by the degree of isolation or protection required [CSZ93].
This is realized by the nominal allocations and the reservation
parameters. For instance, a greedy "best effort"
class/port will have relatively large reservation parameters
working against it. Classical trunk reservation displays extraordinary
robustness properties and, also, optimal reservation
parameters scale slowly (eg. logarithmically or sub-
linearly) with the size of the problem [REI91], [MGH91].
We observe similar properties here. It suffices for the reserved
capacities to be small and there are broad ranges
of the reservation parameters that offer desirable perfor-
mance. The former is key to the efficiency and the latter
to the robustness of the scheme.
In a fluid simulation of our BAC the sample path of the
total buffer occupancy Q(t) exhibits persistence or "stick-
iness" at thresholds, which are determined by the nominal
allocations and reservation parameters. These thresholds
are points where the BAC is active and admits only fractional
amounts of fluid generated by affected ports. The
determination of these fractions is based here on a particular
notion of fairness, wherein all affected ports are treated
equally. Alternative notions of fairness may readily be sub-
stituted. These fractions, called "fairshare", are computed
at the onset of a sticky period during simulations by solving
an equation. We should point out that similar issues
arise in Fair Queueing, Generalized Processor Sharing and
Max-Min Fair ABR services, and the techniques for fluid
simulation developed here are expected to extend to these
applications.
The traffic source models considered in this work are adversarial
to the extent permitted by Dual Leaky Bucket
regulators, which are assumed to be typically present to
police the sources. In the case of lossless multiplexing, the
sources are permitted to collude so that in the worst case
the sources synchronize to burst at their peak rates at the
same time. In the contrasting, less conservative, statistical
multiplexing approach, the sources are assumed to be
non-collusive, so that their burst activity is asynchronous,
i.e., their phases are independent, uniformly distributed
random variables. In addition, we allow very small probabilities
to bound the possibility that QoS requirements in
delay and/or loss are not satisfied. Note that the design for
statistical multiplexing relies on the design for lossless mul-
tiplexing, so that the complete treatment is the composite
of phase 1 (lossless) and phase 2 (statistical).
This is the approach taken in [EMW95]. LoPresti et.
al. [LZT97], in an important recent paper, formulated a
general bandwidth-buffer trade-off problem in the lossless
multiplexing framework, in addition to treating the buffers
and bandwidth as two resources in the analysis of statistical
multiplexing, in contrast to a conservative device
in [EMW95] which allowed a reduction to a single-resource
problem. Our extension of the resource allocation problem
in lossless multiplexing to a shared buffer, multiport system
integrates the notion of the traffic class profile, an input
generated exogenously, possibly by market conditions,
and also delay bounds for each traffic class. The solution
to this general problem classifies bandwidth-buffer systems
in a new way and sheds light on the desired mix of these
resources to match traffic mixes. Le Boudec [LEB96] has
also considered a problem of optimal resource allocation to
inhomogeneous flows in the lossless framework in which the
optimization criterion is quite different from ours.
It has been shown in [MMO95] that in a single-resource
problem, the Chernoff estimate of the loss probability for
asynchronous, independent sources is maximized by source
rate waveforms which are on-off, periodic and extremal.
This result was used in [EMI97] to formalize the choice
of on-off waveforms as extremal in the bounding approach
followed in [EMW95]. More recently, Rajagopal, Reisslein
and Ross [RRR97], in an important work, have shown that
the extremal source rate processes for a two-resource formulation
of the statistical multiplexing problem are not on-
off. Since the numerical results in [LZT97] and [RRR97]
on system capacity with statistical multiplexing do not
show a marked increase over that obtained by the approach
in [EMW95] and [EMI97], which is based on a reduction to
a single resource and on-off waveforms, we adopt here the
latter, since it is substantially easier to implement.
Section 2 below gives the details of the system model and
Buffer Admission Control. Section 3 formulates and solves
a problem of optimally allocating buffers and bandwidth
to ports in a lossless multiplexing framework. Section 4
describes the issues in fluid simulation of the BAC. Section
5 describes the numerical results, which are all based on
statistical multiplexing. Concluding remarks are in Sec.6.
II. Description of System Model and Buffer
Management
A. System Model
The N-port system is shown in Figure 1. The term "con-
nection" is used interchangeably with "source". The shared
buffer capacity is B and we let C = sum_{i=1}^{N} C_i denote the aggregate
output bandwidth.
[Fig. 2: Rate process of an extremal periodic on-off source.]
Note that, while each port is work-conserving, work-conservation
does not apply to the aggregate output bandwidth. That is, unlike Generalized
Processor Sharing (GPS), it is possible for some ports to
be not fully utilized, while other ports are backlogged.
Each port has a logical or virtual FIFO queue, which is
implemented as a linked-list, with cell addresses in a data
RAM [EPA97]. Note that situations may arise where the
buffer is full, but there is no queue for one or more ports.
In this case, the incoming data for a port without a queue
may access the port directly and not be blocked. We let
Q_i(t) denote the logical queue for port i at time t, and
Q(t) = sum_i Q_i(t) denotes the total buffer occupancy at t.
We assume that there are K i connections for port i. A
tacit assumption made here is that the connections for any
port share common QoS requirements and are policed by
Dual Leaky Bucket regulators with similar, if not iden-
tical, specifications. In the analysis of the next section
it is assumed that the connections for port i are homogeneous
with Dual Leaky Bucket regulators specified by (r_i, B_T,i, P_i).
In the succeeding section on fluid simulations
we allow the additional generality of heterogeneous
traffic classes for each port. The QoS requirements for
connections going through port i are a bound on the probability
of loss and a delay bound D_i.
In the Dual Leaky Bucket regulator with specifications
(r, B_T, P), r is the mean sustainable rate or the token rate,
B_T is the burst tolerance or the token buffer size, and P
is the peak rate. The connection traffic rate which is regulated
by such a device is assumed to be on-off, periodic and
extremal in the sense that when the source is on its rate is
the maximum permitted rate, P. The amount of data Theta
generated in an on period is also maximal, and is given by
Theta = B_T P / (P - r),   (1)
with the mean rate in each period being r. This extremal
rate process, denoted by Omega(t), is shown in Figure 2, where the on period is
T_on = Theta / P = B_T / (P - r).   (2)
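Assuming the reconstructed expressions (1) and (2) above (our reading of the garbled equations, not necessarily the authors' exact formulas), the worst-case on-off parameters are easy to compute; the function name below is hypothetical.

```python
def extremal_source(r, b_t, p):
    """Worst-case periodic on-off waveform admitted by a dual leaky bucket (r, B_T, P).
    Returns (theta, t_on, period): maximal burst size, burst duration, and period."""
    assert p > r > 0 and b_t > 0
    theta = b_t * p / (p - r)     # Equation (1): maximal data in one on period
    t_on = theta / p              # Equation (2): on duration at peak rate P
    period = theta / r            # mean rate over one period equals r
    return theta, t_on, period
```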
B. Description of the Buffer Admission Control
We let B i denote the "nominal allocation" of buffer
capacity to port i. Assume that B i is calculated by
some technique appropriate for single port systems, such
as [EMW95], [LZT97] or [RRR97], which takes into account
the traffic parameters and QoS requirements for port i. Thus, the process of nominal buffer
allocation and CAC are intimately coupled. Typically,
the sum of the nominal allocations exceeds B, and the extent of buffer over-allocation
is a subject of design influenced by issues such as multiplexing
gain across ports, the degree of isolation desired
for ports from other ports which are misbehaving due to,
say, the breakdown of policing regulators, and the presence
of "best effort" traffic through some ports. Some of these
issues are probed in Section 5 on numerical investigations.
[Fig. 3: Illustration of the Buffer Admission Control for an N-port system where ports 1 and N are carrying "guaranteed service" and "best effort" traffic, respectively.]
We define the "status" of a port i as overloaded if
underloaded if Q . For each port
i we define the pair of "reservation parameters" (R u
with the superscripts u and denoting "underloaded" and
"overloaded", respectively. These parameters are chosen so
that R u
i for all i. Typically, R u
as we shall see,
and the main reason for having R u
is to give traffic
for such a port lower priority in accessing the buffer. Our
Buffer Admission Control (BAC) algorithm is as follows:
Admit traffic for port i into the buffer if and only if one of
the following conditions is satisfied
fflPort i is underloaded, i.e.
fflPort i is overloaded, i.e. Q
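At the cell level, this admission rule amounts to a few comparisons. The sketch below is our illustration of the rule as reconstructed above, with hypothetical state variables; it is not the authors' implementation.

```python
def admit_cell(i, q, q_total, b_total, b_nom, r_u, r_o):
    """Return True if an arriving cell for port i may enter the shared buffer.

    q[i]    : current logical queue of port i
    q_total : total buffer occupancy
    b_total : shared buffer size B
    b_nom[i]: nominal allocation B_i
    r_u[i], r_o[i]: reservation parameters for underloaded/overloaded status
    """
    if q_total >= b_total:                 # buffer full: never admit
        return False
    overloaded = q[i] >= b_nom[i]          # port status relative to its nominal allocation
    reserve = r_o[i] if overloaded else r_u[i]
    return q_total < b_total - reserve     # admit only if free space exceeds the reservation
```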
From the fluid flow point of view, the BAC is not complete
until the policy at the global thresholds Q = B - R^s_i,
for each s and i, is specified. This is an important aspect
of the BAC. The principle we adopt, which is based
on a particular notion of fairness, admits fluid to affected
ports at rates which correspond to equal fractions of the
rate of total fluid generated by the sources for each port.
This principle may readily be substituted by another such
as admitting fractions proportional to prespecified weights,
or, as in the cell-level control scheme of [RWM95], treating
the admitted fractions as control variables to meet QoS
requirements.
We illustrate the application of the BAC in a scenario
where port 1 is carrying "guaranteed service" traffic, while
port N is carrying "best effort" traffic. The tacit assumption
is that the traffic through port 1 is tightly regulated
and admission-controlled, while the traffic through port N
is loosely regulated, perhaps only its peak rate is policed,
and the QoS constraints for CAC are slack. Hence the flow
for port 1 is given priority in accessing the buffer over port
N . One level of control in buffer management is reflected
in the selection of the nominal allocations B 1 and BN . Yet
another level of control is exercised through reservation pa-
rameters. We set
N . Figure
3 illustrates the BAC for the two ports. Consider an instant
in time when port N is overloaded, while port 1 is underloaded
4and its sources have just become active.Besides
the reserved buffer capacity being made available to port
1, the other mechanism working in its favor is that every
departing cell of port N is systematically replaced by an
arriving cell for port 1.
III. Bandwidth and Buffer Allocations: The
Design Problem
In the first phase, the design problem is considered in the
lossless multiplexing framework, and in the second phase
with statistical multiplexing. The antecedents of this work
are [EMW95], [LZT97] and [EMI97]. The lossless multiplexing
problem is as follows. We are given (see Figure 1)
and C, the shared buffer capacity and aggregate output
bandwidth, respectively. The connections for port i are
subject to Dual Leaky Bucket regulation with parameters
furthermore, the sources are on-off periodic
and extremal (see Figure 2). We are also given the
traffic class profile j
represents the relative mix of connections of the various
traffic classes at the desired operating point. That is, the
desired operating point is K_i = K j_i for each i. The
scalar K is hence a proxy for the capacity of the system for
the specified traffic mix. The QoS requirements are zero
loss and delay bound D i for class/port i. The problem
is to obtain the buffer-bandwidth allocations (B_i, C_i) per
port such that the capacity K is maximized
subject to the resource constraints sum_i B_i <= B and sum_i C_i <= C.
It will be shown shortly that the above problem is equivalent
to the following.
Lossless Multiplexing Design Problem (B, C, j, D): maximize the capacity K (4),
subject to the resource constraints sum_i B_i <= B and sum_i C_i <= C (5),
the per-port lossless multiplexing constraints K_i Theta_i (1 - C_i/(K_i P_i)) <= B_i (6),
and the per-port delay constraints B_i <= C_i D_i (7), where K_i = K j_i.
The burst size and time, Theta_i and T_on,i, are given in (1)
and (2). The solution to this problem is given in the following
subsection.
A. Lossless Multiplexing
We begin by briefly reviewing the notion of a "virtual
buffer/trunk system" for a single source [EMW95]. Consider
a single source of class i with periodic, on-off rate process Omega_i(t),
which supplies an initially empty, infinite buffer
with an output port of bandwidth c_i, where c_i > r_i for
stability. Let b_i denote the maximum buffer content over
time. Then it is easy to see that
b_i = Theta_i (1 - c_i / P_i).
[Fig. 4: Two cases for class i in the min-max design problem.]
When K_i sources of class i are connected to port i, the
constraint implied by lossless multiplexing is K_i b_i <= B_i, where c_i = C_i / K_i.
The constraint implied by the bound on the maximum delay
is B_i / C_i <= D_i.
Thus we obtain the design problem stated in (4)-(7).
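Assuming the reconstructed lossless and delay constraints above (our reading of the garbled displays, not necessarily the authors' exact formulation), a per-port feasibility check might look as follows; all names are hypothetical.

```python
def lossless_feasible(k_i, r_i, theta_i, p_i, d_i, buf_i, bw_i):
    """Check the lossless-multiplexing and delay constraints for one port:
    K_i synchronized extremal sources (peak p_i, burst theta_i, mean rate r_i)
    offered to a port with buffer buf_i, bandwidth bw_i and delay bound d_i."""
    if bw_i < k_i * r_i:                     # unstable: mean load exceeds bandwidth
        return False
    if bw_i >= k_i * p_i:                    # served at aggregate peak rate: no backlog
        max_backlog = 0.0
    else:
        max_backlog = k_i * theta_i * (1.0 - bw_i / (k_i * p_i))
    return max_backlog <= buf_i and buf_i <= bw_i * d_i
```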
Next we systematically normalize with respect to B, C
and K. Let
The resource constraints in (5) translate to
Also,
where
Note that {q_i}, {t_i} and {d_i} are dimensionless parameters.
Hence the design problem is equivalent to the following
where
subject to
To analyze the problem it will help to first consider two
cases for each class: d_i <= 1 and d_i > 1. For reasons
which will become clear shortly, the former may be called
"bandwidth constrained" and the latter "balanced" (see
Figure
4). In case (i) the delay requirement necessitates
a disproportionate amount of bandwidth to be expended.
Clearly, the point E i is locally the most attractive allocation
point. For case (ii), the delay requirement is not so
stringent, which makes the balanced allocation point A i locally
the most attractive. However, as we shall see, global
considerations will lead to allocations different from the
local optima.
For the min-max problem (16)-(18), a little thought
shows that, for each class/port i, the globally optimum
allocation lie on the segment shown in
Figure
4. Hence, the constraint (18) may be replaced by
the following: for
using (19), we may eliminate from the problem
to obtain the following version:
where
ae P
subject to
The two cases in (22) correspond to whether the limiting
resource is bandwidth or buffer, respectively. Note too that
a feasible solution always exists.
There are now two cases to consider depending upon the
location of the corner point fq i with respect to
the separating hyperplane in fl-space, which is in (22). In
the first, simpler, case
This may be referred to as the "globally bandwidth-
constrained" case in analogy to the previously discussed
classification for the single class. Certainly this case holds
if each class is bandwidth constrained, i.e., d_i <= 1 for all i.
If (24) holds then it is easy to see that the solution to the
min-max problem is
Now consider the alternative to (24), namely E > 0,
which may be termed the "globally balanced" case. In this
case, it may be deduced that the solution to the min-max
problem as stated in (21)-(23) must lie on the separating
hyperplane where the buffer and bandwidth are simultaneously
exhausted, i.e.,
It then follows that the min-max problem reduces to
min
subject to (26), and the inequality constraints (23); also
This problem is an LP with a simple solution, which is
also exploited in [LZT97]. The solution is incorporated in
the summary given below to the design problem (4)-(7),
which is stated in terms of the original variables.
Proposition 1: Let
(i) In the "globally bandwidth-constrained" case, E - 0.
In this case the maximum capacity is
which is obtained for the following buffer and bandwidth
allocations.
and
where
(ii) In the "globally balanced" case, E ? 0. Let the classes
be indexed in decreasing order of their on periods, i.e.,
and let k be the smallest integer such that
The capacity K is then maximized by the following allocations
K[ EB
Ton;j +B=C
Ton;k+B=C
Ton;j
jk \Theta k
Ton;k+Dk
The maximum capacity K is obtained from the relation
C. The buffer allocations are obtained from the
relation
which satisfies
The differences between the two cases in the above result
are noteworthy. In the globally bandwidth-constrained
case the sum of the allocated bandwidths equals the given
aggregate bandwidth, but that is not the case with the
buffers. In contrast, in the globally balanced case, both
buffers and bandwidth are totally consumed by the allocations
This dichotomy has broad implications, which we briefly
touch upon now. Suppose that in the former case,
class/port N is "best effort" with generous delay require-
ments, say DN - B=C. Clearly, by giving greater weight
to the best effort class in the traffic-class profile j, i.e., by
increasing jN , perhaps induced by tariffs, the quantity E
in (28) can be made to change from negative to positive,
which would make the system globally balanced, resulting
in full utilization of both resources with proper allocations.
B. Statistical Multiplexing
The objective here is to take the buffer-bandwidth allocation
(B_i, C_i) of the previous subsection and maximize the number
of connections K_i by exploiting statistical multiplexing,
such that the loss probability requirement for port i is still satisfied. In particular, as
mentioned in Section 1, here we do not consider the sharing
of buffer resources across ports as governed by the BAC.
The technique given below for computing the loss probability
for any given K i is based on the Chernoff asymptotic
approximation technique and is due to [EMW95].
A key observation is that the allocation (B_i, C_i) is based
on the behavior of the "virtual buffer/trunk system" for
a single source, where the peak buffer utilization is b_i and
the peak bandwidth utilization is c_i, with b_i = Theta_i (1 - c_i / P_i) (see (10)).
This relation is key to the reduction to a single resource,
which we may now take to the bandwidth. The bandwidth
utilization by a single connection k is denoted by u ik . On
account of the assumption that the sources have phases
which are independent and uniformly distributed over the
period, u_ik = c_i with probability r_i / P_i and u_ik = 0 otherwise.
The loss probability for port i is P_loss,i = Pr{ U_i > C_i },
where the bandwidth load on port i is U_i = sum_{k=1}^{K_i} u_ik.
Since the distribution of the random variables is bino-
mial, it is straightforward to obtain the following Chernoff
asymptotic approximation to P loss;i .
Proposition 2:
P_loss,i ~ exp( -K_i l_i ),
where
l_i = a_i log( a_i / w_i ) + (1 - a_i) log( (1 - a_i) / (1 - w_i) ),
a_i = C_i / (K_i c_i), and w_i = r_i / P_i is the probability that a source is on.
The capacity K_max,i is the largest value of K_i for which this
estimate of P_loss,i does not exceed the loss requirement for port i.
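Assuming the standard binomial Chernoff form used in our reconstruction of Proposition 2 above (the original display is garbled, so this is our reading rather than the authors' exact expression), K_max,i can be found by a simple search; `loss_target` stands in for the port's QoS loss requirement.

```python
import math

def chernoff_loss(k, c_port, c_virt, w):
    """Chernoff estimate of P{sum of k on-off bandwidth demands exceeds c_port},
    where each source needs c_virt when on and is on with probability w."""
    a = min(c_port / (k * c_virt), 1.0 - 1e-12)   # normalized capacity per source
    if a <= w:
        return 1.0                                # offered mean load exceeds capacity
    ell = a * math.log(a / w) + (1 - a) * math.log((1 - a) / (1 - w))
    return math.exp(-k * ell)

def k_max(c_port, c_virt, w, loss_target):
    """Largest K whose Chernoff loss estimate stays within the target."""
    k = 1
    while chernoff_loss(k + 1, c_port, c_virt, w) <= loss_target:
        k += 1
    return k
```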
IV. Fluid Simulations
In this section, we investigate the technical issues in a
fluid simulation of the proposed Buffer Admission Control
scheme. For the simulation, we also allow the additional
generality of possibly several classes of connections through
each port, with each class having a corresponding Dual Leaky Bucket
regulator. Note that the fluid flow representation
is an approximation to cell-level behavior in which
cell-level details in the behavior of sources, trunks and
buffers are smoothed into a steady rate of fluid flow from
the sources causing piecewise linear changes in buffer oc-
cupancy. Events such as admission/removal of connections
and starts/ends of bursts in connections are key events in
fluid simulations, which coincide with jumps in the fluid
rates. In addition to these source related events, there are
buffer and control related events. For example, when the
buffer becomes empty or full, the rate of flow out of the
buffer undergoes a discontinuous change. In a high speed
system such as ATM, the advantage of fluid simulation accrues
from the fact that these "events" occur on a much
slower time-scale as compared to cell arrivals and depar-
tures, thereby requiring much less simulation time to cover
a given amount of actual system time. Obviously, fluid
simulations must be event-driven in order to realize this
advantage.
In event-driven simulations, the event set in the system
determines the logical complexity; the larger the event set,
the more complex the simulation, since it is necessary to
determine the earliest next event from among a larger number
of possibilities.
In the simulations described here, the source model for
traffic of an individual connection of class j going into port
i consists of on-off, periodic rate waveforms, which are extremal
with respect to the Dual Leaky Bucket regulator
for the class (see Figure 2) and have phases that are independent
random variables uniformly distributed over the
period of the waveform. The rate process of the aggregate
traffic offered to port i is the superposition of the rate
processes of all connections through the port.
An important feature of the fluid simulations described
here, which is a reflection of the BAC, is that the logical
queues {Q_i} for the ports are computed and monitored, of
course, but not the queues for the individual connections.
The set of events relevant to our fluid simulation is of
two types - (i) Source events and (ii) Buffer events. Buffer
events, which are threshold crossings, can further be subdivided
into (i) port-specific and (ii) global. We now proceed
to enumerate and describe these events.
• Type 1: Source state changes. The state changes are
discrete events corresponding to the discontinuities of the
piece-wise constant, aggregate rate process for each port.
Type 2: Port threshold crossings. When the logical queue for any port empties or the port uses up its nominal allocation, i.e., Q_i = 0 or Q_i = B_i, some of the incoming fluid is treated differently by the control scheme. Hence the rate of change of the port buffer occupancy, dQ_i/dt, encounters a discontinuity, as a result of which dQ/dt is discontinuous as well. We postpone the detailed discussion of this event, except to note that the change in port status caused by this event causes changes in the admitted fluid rates for all classes feeding into the port.
Type 3: Global threshold crossings. These events are characterized by Q(t) = B − R^s_{ij} for some class j and status indicator s. When one of these thresholds is encountered, there is a jump in dQ/dt, since when Q > B − R^s_{ij}, traffic of class j is not admitted into the buffer, and immediately after the crossing Q is not at 0 or B. We later discuss in detail the behavior of the system following this event.
Event types 2 and 3 correspond to instants when the
BAC is active. There is a "stickiness" associated with these
thresholds, i.e., the thresholds behave like attractors, holding
the queue fixed until the next source state change. A
similar feature is shown to exist and analyzed in the loss
priority control scheme of [EMI94]. The stickiness is a result
of our control policy at the thresholds, which requires
admitting only a fraction of the fluid belonging to the affected
classes. The exact composition of admitted fluid of
the various classes can be controlled in different ways depending
on QoS considerations. Here, motivated by fair-
ness, equal fractions of each of these flows are admitted.
A. Characterization and Analysis of Events
We now proceed to describe how the aforementioned
events are processed in the simulation. Each of the above
events triggers computations that determine acceptance,
buffer usage and loss rates for each port and source class
till the next event occurs. The scheduling of future events
is also dependent on these computations. The set of ports
is grouped into subsets based on status: the ports with zero occupancy; the underloaded ports with nonzero occupancy; the "border" ports B, i.e., the ports at their overload thresholds Q_i = B_i; and the strictly overloaded ports O.
A.1 Single Threshold System
For the benefit of the reader, we first consider the simple case where the reservation parameters are common to all ports, i.e., R^u_{ij} = R^o_{ij} = R for all i and j. Thus, there is only one global threshold B − R in the buffer occupancy.
However, even in this case, inhomogeneity exists on account
of the diverse character and number of connections through
the ports and in the QoS specifications. This diversity is
reflected in the possibly different nominal allocations B i
for different ports.
Let λ_i(t) be the piece-wise constant rate of fluid generated for port i. We then have the following cases:
1. Q(t) < B − R: The BAC is not active in this case and all fluid going into the system is accepted; dQ_i/dt is then simply λ_i(t) less the port bandwidth whenever Q_i(t) > 0, and dQ/dt is the sum of these drifts over the ports.
2. Q(t) = B − R: In this case, the BAC blocks flows to overloaded ports. However, ports at the border between overload and underload, i.e., those with Q_i = B_i, must be treated carefully in the fluid system. To start with, we avoid this complication and assume that there are no such border ports with Q_i = B_i at instant t. We then define the rate function f(β) as the net drift dQ/dt obtained when a fraction β of the fluid destined to the overloaded ports i ∈ O is admitted, the underloaded ports contributing their full drift; this is equation (40).
Note that f(β) monotonically increases with β. We then have the following cases:
(a) f(0) ≥ 0: In this case, the underloaded ports by themselves cause a net increase in buffer occupancy and hence no fluid from overloaded ports is admitted, i.e., β = 0.
(b) f(1) ≤ 0: All fluid coming into the system is admitted and no control is exercised, i.e., β = 1.
(c) f(0) < 0 < f(1): In this case, the operating value of β satisfies f(β) = 0, and we have dQ/dt = 0; the net occupancy exhibits stickiness at the threshold B − R. A sketch of this computation is given below.
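To make the case analysis concrete, the sketch below evaluates a monotone rate function f(β) built from per-port drifts and selects among the three cases: it returns β = 0 when f(0) ≥ 0, β = 1 when f(1) ≤ 0, and otherwise the root of f(β) = 0 (found here by bisection). The drift expressions are simplified stand-ins for equation (40) — underloaded ports contribute their full drift, overloaded ports contribute β times their offered rate minus their bandwidth — so the exact terms should be taken as assumptions.

def net_drift(beta, underloaded_drifts, overloaded):
    """f(beta): net dQ/dt when a fraction beta of the fluid headed to overloaded
    ports is admitted.  'overloaded' is a list of (offered_rate, bandwidth)."""
    return sum(underloaded_drifts) + sum(beta * lam - c for (lam, c) in overloaded)

def admitted_fraction(underloaded_drifts, overloaded, tol=1e-9):
    f0 = net_drift(0.0, underloaded_drifts, overloaded)
    f1 = net_drift(1.0, underloaded_drifts, overloaded)
    if f0 >= 0.0:          # case (a): underloaded ports alone push the buffer up
        return 0.0
    if f1 <= 0.0:          # case (b): everything can be admitted
        return 1.0
    lo, hi = 0.0, 1.0      # case (c): f is monotone, so bisect for f(beta) = 0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if net_drift(mid, underloaded_drifts, overloaded) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)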
Now consider the case in which the queue for one of the ports is at its nominal occupancy, say Q_i = B_i. All fluid going into this port is admitted if the port subsequently becomes underloaded, which is the case if λ_i is less than the port bandwidth. Alternatively, if the "fairshare" βλ_i causes the port to stay overloaded, only the fairshare is admitted. If neither of these conditions is true, the occupancy of this port stays unchanged at Q_i = B_i. This port-level stickiness is induced by the notion of port status, which undergoes a discontinuous change when the condition Q_i = B_i is crossed. Thus, the contribution of this port to the net drift is the smaller of these quantities, capped at 0. For this case, the modified rate function adds these border-port contributions to the sum over i ∈ O in (40). This rate function is used instead of (40) to determine β and the net rate of change of the buffer occupancy, dQ/dt.
3. B − R < Q(t) < B: In this case, no fluid from ports with Q_i > B_i is admitted. However, a fraction of the fluid from ports with Q_i = B_i is admitted so as to keep dQ_i/dt ≤ 0 for those ports. The net drift dQ/dt is then the sum of the underloaded-port drifts plus these capped border-port contributions.
4. Q(t) = B: When the entire buffer is occupied, the uniform fraction β of the fluid going into underloaded and border ports that is admitted is such that the net drift is nonpositive. We define the corresponding rate function f(β), which clearly satisfies f(0) ≤ 0. We have the cases:
(a) f(1) ≤ 0: All fluid from underloaded ports and border ports is admitted, i.e., β = 1.
(b) f(1) > 0: The admitted fraction β satisfies f(β) = 0 and the net occupancy sticks at B till the next event.
A.2 General Case
We now extend the above analysis of events to the general case where the reservation parameters are arbitrary and there are a number of classes of connections, indexed by j, through each port. Let λ_ij(t) denote the piece-wise constant rate of fluid of class j offered to port i at time t. Then, the total rate of fluid generated for port i at time t is λ_i(t) = Σ_j λ_ij(t). The strictly admissible, or "legal", component of this rate is

S^l_i(t) = Σ_j λ_ij(t) I(Q(t) < B − R^s_{ij}),

where the superscript s in R^s_{ij} denotes the "status" of port i at time t.
Type 1 Events are most simply processed since they
merely involve changing the values of the legal fluid rates.
Assume that, at time t, the total occupancy is not at a global threshold, i.e., Q(t) ≠ B − R for any of the reservation parameters R = R^s_{ij}, and the port occupancies are not at their nominal values, i.e., Q_i(t) ≠ B_i. Then the rates of change of these quantities at this instant are expressed simply by (46): dQ_i/dt equals the legal rate S^l_i(t) minus the port bandwidth whenever the port is backlogged, and dQ/dt is the sum of these contributions over the underloaded ports and the ports in O, where I(·) is an indicator function, which is 1 if the argument is true and 0 otherwise. Note that B = ∅, since we have assumed that no port queue is at its nominal value.
We will later modify equations (46) to include terms for
these ports, but with this assumption, (46) determines the
evolution of port occupancies till an event of type 2 or type
3 occurs, i.e. one of the thresholds is encountered.
Type 2 Events occur when one or more port queues
are at their nominal allocations {B_i}. In this case, the set B is non-empty and we need to include these ports in the rate calculations. We assume initially that the total occupancy Q(t) is not at a global threshold. For ports in B, we define S^{bp}_i(t) as the aggregate rate of the "border" flows into port i, i.e., of the classes whose admission changes when port i crosses its nominal allocation. The "legal" flow rates for these ports are computed using (45) and the o superscript on the reservation parameters.
We then have the following cases:
1. S^l_i(t) exceeds the port bandwidth. In this case dQ_i/dt is positive even without the border fluid, and none of the border fluid is admitted, since the port becomes strictly overloaded after this instant.
2. S^l_i(t) + S^{bp}_i(t) is below the port bandwidth. In this case dQ_i/dt is strictly negative and all the border fluid is admitted, since the port becomes underloaded after this instant.
3. Neither of the above holds. This case demonstrates "stickiness" at the port level. The fluid system has dQ_i/dt = 0, while the cell behavior experiences fluctuations about Q_i = B_i with a long-term adherence to this condition. In this case, the contention is between the various classes of fluid going into port i. Following our notion of fairness, we propose that a fraction α_i of the border fluid of each border class be admitted. This fraction is determined by the equation dQ_i/dt = 0, which gives α_i as the ratio of the residual port bandwidth (after the legal flows) to S^{bp}_i(t).
These cases can be combined into a pair of equations that determine dQ_i/dt and α_i for ports in B: dQ_i/dt is the legal rate plus the admitted fraction of S^{bp}_i(t) minus the port bandwidth, capped so that the three cases above are recovered. (We distinguish this "port-level border" fluid, denoted by the superscript bp, from the "globally border" flows to be defined later on.)
With this information, the system evolution is determined, until the next event, by summing the port drifts: dQ/dt = Σ_i dQ_i/dt, with the contributions of the underloaded ports, the ports in B, and the ports in O computed from their legal rates S^l_i as above.
Type 3 Events are the most computationally involved
and also have the most impact in the control scheme. For
simplicity, we begin with the case B = ∅ and assume that, at instant t, Q(t) = B − R, where R = R^s_{ij} for some class j is one of the reservation parameters. Then, for each port, we define the "globally border" flow rate S^{bg}_i(t) as the aggregate rate of the classes whose threshold is the one being crossed, where, as before, the superscript s represents the port status (u or o). The possibilities for system evolution are then as follows.
1. The occupancy increases through the threshold. This is possible if and only if the legal flows by themselves cause the total occupancy to increase, since the S^{bg}_i would be strictly disallowed after t. Thus we require dQ/dt^{(1)} > 0, where dQ/dt^{(1)} denotes the net drift when only the legal flows are admitted.
2. The occupancy decreases through the threshold. This condition occurs when admitting the legal and the border flows still causes the total occupancy to decrease, i.e., dQ/dt^{(2)} < 0, where dQ/dt^{(2)} denotes the net drift when the legal flows and all the border flows are admitted.
3. The sticky case occurs when dQ/dt^{(1)} ≤ 0 and dQ/dt^{(2)} ≥ 0. Then, as before, we admit a fraction β of the border fluid that would keep the total occupancy unchanged. This fraction is determined by the equation dQ/dt = 0, with dQ/dt written as the legal-flow drift plus β times the aggregate globally border rate, as in (53).
The form of equation (53) is generic to the computation
of "fairshare" in various contexts, such as Fair Queueing,
Generalized Processor Sharing and Max-Min Fairness. A
solution procedure is described in the Appendix.
When B ≠ ∅, we must make further distinctions between the flows into ports with Q_i = B_i during these events. We divide these flows into four classes, according to whether they are legal, port-level border, globally border, or border at both levels. The rate equations must now include these ports: admitting a fraction β of border fluid changes dQ_i/dt and dQ/dt accordingly, with the border terms multiplied by β. The possibilities are as follows.
1. dQ/dt^{(1)} > 0: only legal flows are admitted.
2. dQ/dt^{(2)} < 0: the border flows get unrestricted access.
3. Otherwise, the fraction β of border flows admitted is determined by setting dQ/dt = 0; the resulting equation is solved using the procedure outlined in the Appendix.
V. Numerical Investigation
In this section we present simulation results of case studies
on the efficacy of the BAC and the influence of the
scheme's parameters on system performance. All cases
studied involve statistical multiplexing. In fact, as we shall
see, in some cases the number of connections considered are
significantly more than can be supported if there was no
buffer sharing among ports. Also, the cell loss ratios (CLR)
given in the results are the ratios of lost cells to cells offered
to the buffer. We have also computed the fraction of time
during which the losses occur but these are not reported.
Finally, we consider two classes of connections with rather
different characteristics. All results reported are in dupli-
cate, one set for each class. The results are grouped in
five subsections, with Sec. 5.1 giving background data and
benchmarks for the whole section.
A. Background Data and Benchmarks
In this study the standard sources or connections are either of class 1 or class 2, with the specifications given in Table I. These classes have been studied earlier in [EMW95], [LZT97] and [RRR97]. Note that non-regulated sources, "rogue" and "best-effort", are also considered in the study. Table II contains basic data on
Class  r (Mbps)  P (Mbps)  Θ (cells)  Ton (ms)  T (ms)
Table I. Dual Leaky Bucket parameters.
statistical multiplexing gain in capacity that may be expected
from sharing the buffer. Single port and 4-port systems
are considered. In all cases the CLR is in the range
which was a choice made to keep the
burden on the simulations acceptable.
The buffer management for the multiport system of Table
II is Complete Sharing, which is the case of Virtual
Partitioning is all reservation parameters are null. Thus,
in the multiport system, no consideration is given to selective
isolation and protection for purposes of QoS, and hence
the multiplexing gain reported is optimistic. In Table II,
Class  Single port system  4-port system  Sharing gain
Table II. Buffer requirements in single and multi-port systems with Complete Sharing; port speed = 15 Mbps.
Kmax is the maximum admissible number of connections
in the single port system for given L,
15Mbps. Thus, Kmax takes into account statistical
multiplexing across connections through each port.
In the following experiments, we take the multiplexing
advantage between ports into account only when all connections
in the system belong to the guaranteed service
category. We do not assume any multiplexing advantage
between ports when the buffer is shared with "best effort"
or greedy "rogue" connections.
B. BAC Performance under Homogeneous Loading
Here we provide basic performance data for our BAC.
The case considered has four ports with equal numbers of connections through each port (see Figure 5). The reservation parameters are common to all ports, R^u = R^o = R, and the parameter R is varied. In Figures 5(a) and 6(a),
there are K connections and in Figures 5(b)
and 6(b), there are K connections through
each port.
The figure plots unconditional and conditional CLR. Of
great importance is the "CLR conditional on port under-
load" for each port, which is the ratio of cells lost to cells
offered during periods when the port is underloaded. This
quantity, above all, should be small if the BAC is working
effectively in conjunction with CAC.
Figures 5 and 6 show the effect of the reservation parameters
on the conditional and unconditional CLR values for
two different nominal buffer allocations B_i. In Figure 5, the B_i are set aggressively at 333 cells per port, while in Figure 6, the B_i are set conservatively. For class 1 connections, which have a high value of P/r of 40, the reservation mechanism helps to substantially reduce the CLR conditional on underload only for the conservative setting of {B_i}, while for class 2 connections, with a more benign P/r of 10, reservation is uniformly effective in
restricting underloaded CLR. In both cases, the CLR conditional
on underload is uniformly lower than the unconditional
CLR. The reservation effectively trades off over-loaded
CLR for underloaded CLR, while the unconditional
CLR increases very slowly with R. Thus, at the cost of
a small amount of throughput, the reservation mechanism
in the BAC enforces compliance to resource allocations in
CAC, even while allowing effective sharing of buffers across
ports.
C. Inhomogeneity among Ports
The objective here is to investigate the capabilities of
the reservation mechanism to support diverse QoS to the
individual ports (see Figure 7), where the single variable R controls the reservation parameters R^u_i and R^o_i of each of the four ports. The nominal allocations to the ports are set conservatively.
For Figure 7, the number of connections through the ports has, in each case, been selected such that, if the ports were isolated with buffer allocations matching the nominal allocations, the QoS experienced by the ports would be substantially different. Note that, in Figure 7(a), with no reservation the intended difference in CLR is completely diluted and the CLR experienced by the ports is roughly similar, while Figure 7(b) shows a similar
dilution to a lesser extent. The main observation on this
experiment is that a small value of R (eg.
introduces a wide separation in QoS, again at the cost of
a small amount of throughput. To minimize this slowly
varying difference in throughput, the design should select
the smallest value of R that satisfies the CLR of all the
ports. This selection is usually determined by the most
stringent CLR requirement.
D. Effect of "Best-Effort" Traffic
Here we consider the case where one of the ports, port 4,
in a 4-port system is dedicated to carrying traffic not regulated
by Dual Leaky Buckets. The traffic through port-4,
called "best-effort", has been designed to be detrimental to
the other ports carrying regulated traffic, from the multiplexing
point of view. We assume that the aggregate source traffic for port 4 is offered to the buffer at a constant rate, which is slightly in excess of the bandwidth of the port, so that the port stays backlogged even if the queue is initially empty. Indeed, a good implementation of the
"best-effort" service would restrict the peak rate not to exceed
the port speed. Here, however, our objective is to test
the BAC under the presence of such a greedy source that
takes up all of the buffer space permitted to it. We chose the parameter settings of (59) for the case of Fig. 8(a) and of (60) for the case of Fig. 8(b). The reservation here is applied only against the best-effort port, i.e., the R^u and R^o parameters are nonzero only for flows into port 4, while for the other ports they are null. The results in Figure 8 show that the scheme provides
substantial protection and isolation for the regulated
sources, while carrying substantial "best effort" traffic.
E. Effect of Misbehaving Connection
In Figure 9, we show the effect of a "rogue" connection through port 4 in a 4-port system. Under ideal conditions, the system is as described in Section 5.2 with the same reservation parameters and nominal allocations. However, we do not expect any multiplexing advantage across ports in such a scenario, and we set the nominal allocations (in cells) accordingly.
The BAC here is only intended to protect the well-behaved
ports 1-3.
Our model of the misbehaving guaranteed-service connection is that of a connection with a defective Dual Leaky Bucket regulator that fails to regulate the mean rate. Hence, the connection pumps fluid into the system steadily at its peak rate. This case differs from the best-effort
case of Section 5.4 in that the reservation is applied
uniformly to all ports, and also the port with the misbehaving
connection has other well-behaved connections through
it. Thus, while the reservation restricts well-behaved ports
as well, the rogue port is not always backlogged on account
of the multiplexing in bandwidth between connections
port 4.
The results in Figure 9 once again reveal the benefits of
the reservation mechanism. The ports carrying regulated
traffic are effectively protected from the effects of the rogue
connection. The difference between cases (a) and (b) is due
in part to the fact that in the former the peak rate of one
source is a significant fraction (6/15) of the port speed.
VI. Conclusions
This paper has proposed and evaluated a shared buffer
management scheme in which the Buffer Admission Control
is coupled to the Connection Admission Control. Also,
Fig. 5. Effect of the reservation parameter on the unconditional and conditional cell loss ratios (CLR, CLR conditional on port underload, CLR conditional on port overload) in the 4-port system of Sec. 5.2: (a) class 1 connections in each port; (b) class 2 connections in each port.
a problem of optimal allocation of buffers and bandwidth
to ports in a shared memory, multiport system has been
formulated and solved in a lossless multiplexing framework.
The scheme is found to be efficient and robust.
Extensions to the work include, first, an analytical technique
to evaluate the benefits of statistically multiplexing
the buffer resource across ports and second, the development
of fluid simulation techniques for the Generalized Processor
Sharing discipline in a multiport system.
--R
The Overload Performance of Engineered Networks with Non-Hierarchical and Hierarchical Routing
Virtual Partitioning for Resource Sharing by State-Dependent Priorities: Analysis
Space Priority Management in a Shared Memory ATM Switch.
Dynamic Queue Length Thresholds in a Shared Memory ATM Switch.
Traffic Shaping at a Network Node: Theory
A New Approach for Allocating Buffers and Bandwidth to Heterogenous
Advances in Shared-Memory Designs for Gigabit ATM Switching
Sharing Memory Optimally
Optimal Buffer Sharing.
Simulation of a Simple Loss/Delay Priority Scheme for Shared Memory ATM Fabrics.
Buffer Management in a Packet Switch.
Optimal Control and Trunk Reservation in Loss Networks.
Network Calculus Made Easy.
A CAC Algorithm for VBR Connections over a VBR Trunk.
Source Time Scale and Optimal Buffer/Bandwidth Trade-off for Regulated Traffic in an ATM Node
Analysis and Optimal Design of Aggregated-Least-Busy-Alternative Routing in Symmetrical Lossless Networks with Trunk Reservations
Multiple Time Scale Regulation and Worst Case Processes for ATM Network Control
Virtual Partitioning by Dynamic Priorities: Fair and Efficient Resource Sharing by Several Services.
Hierarchical Virtual Partitioning: Algorithms for Virtual Private Networking.
Optimal Trunk Reservation for a Critically Loaded Link.
Packet Multiplexers with Adversarial Regulated Traffic.
Dynamic Call Admission Control of an ATM Multiplexer with On/Off Sources.
Virtual Queueing Techniques for UBR
A Buffer Allocation Scheme for ATM Networks: Complete Sharing Based on Virtual Partition.
| buffer management; fluid simulations; virtual partitioning
379528 | Estimating small cell-loss ratios in ATM switches via importance sampling. | The cell-loss ratio at a given node in an ATM switch, defined as the steady-state fraction of packets of information that are lost at that node due to buffer overflow, is typically a very small quantity that is hard to estimate by simulation. Cell losses are rare events, and importance sampling is sometimes the appropriate tool in this situation. However, finding the right change of measure is generally difficult. In this article, importance sampling is applied to estimate the cell-loss ratio in an ATM switch modeled as a queuing network that is fed by several sources emitting cells according to a Markov-modulated ON/OFF process, and in which all the cells from the same source have the same destination. The change of measure is obtained via an adaptation of a heuristic proposed by Chang et al. [1994] for intree networks. The numerical experiments confirm important efficiency improvements even for large nonintree networks and a large number of sources. Experiments with different variants of the importance sampling methodology are also reported, and a number of practical issues are illustrated and discussed. | INTRODUCTION
An Asynchronous Transfer Mode (ATM) communication switch can be modeled as
a network of queues with finite bu#er sizes. Packets of information (called cells)
join the network from several sources according to stochastic processes, and some
cells may be lost due to bu#er overflow. The long-term (or steady-state) fraction
of cells that are lost at a given node is called the cell-loss ratio (CLR) at that node.
Typical CLRs are small (e.g., less than 10^{-8}) and the cell losses tend to occur in
bunches. Cell losses are thus so rare that estimating the CLR with good precision
by straightforward simulation is very time-consuming, and in some cases practically
impossible.
Importance sampling (IS) (e.g., [Heidelberger 1995]) and the splitting method
(e.g., [Glasserman et al. 1999]) are two important candidate methods for improving
the quality of the estimator in such a situation. IS changes the probability laws
governing the system so that the rare events of interest occur more frequently,
eventually to the point of being no longer rare events. The estimator is modified
accordingly so that it remains unbiased: It is multiplied by a quantity called the
likelihood ratio. The hope is that the IS estimator is more e#cient in the sense
that the product of its variance and its computing cost is smaller than for the
regular estimator. The most di#cult problem in applying IS is (in general) to
figure out how to change the probability laws so that the variance gets reduced to an
acceptable level. Theoretically, there always exists a change of measure that reduces
the variance to zero, but it is usually much too complicated and impractical, as it
frequently leads to a non-Markovian process [Glynn and Iglehart 1989; Heidelberger
1995].
For general background on e#ciency improvement (or variance reduction), we
refer the reader to Bratley et al. [1987], Fishman [1996], Glynn [1994], and L'Ecuyer
[1994]. IS is well explained in Glynn and Iglehart [1989], Shahabuddin [1994],
Heidelberger [1995], Sadowsky [1996], and the several other references given there.
Application of IS to the simulation of communication systems is studied, e.g., by
Parekh and Walrand [1989], Chang et al. [1994], Chang et al. [1995], Bonneau
[1996], Falkner et al. [1999], and Heegaard [1998].
Chang et al. [1994] derived an asymptotically optimal change of measure, based
on the theories of e#ective bandwidth and large deviations , for estimating the probability
p that a queue length exceeds a given level x before returning to empty, given
that the queue is started from empty, for a single queue with multiple independent
arrival sources that satisfy a large deviation principle. Roughly, asymptotically
optimal means that the standard error of the IS estimator converges to zero exponentially
fast with the same decay rate (exponent) as the quantity to be estimated,
as a function of the level x. A precise definition can be found in Chang et al. [1994]
and Heidelberger [1995]. An asymptotically optimal change of measure does not
minimize the variance, but it can reduce it by several orders of magnitude. Chang
et al. [1994] extended their method to intree networks of queues, which are acyclic
tree networks where customers flow only towards the root of the tree. For intree
networks, assuming infinite bu#ers at all nodes, they obtained a lower bound on
the exponent in the (exponential) convergence rate of the variance of the IS esti-
mator, but they could not prove that this estimator is asymptotically optimal. The
missing step for proving the latter is to show that the square mean converges exponentially
to 0 at the same rate (i.e., with the same exponent, and not faster) than
the variance. In a related paper, Chang [1995] obtained the exact exponent in the
convergence rate of P [q # > x] towards 0 as x #, where the random variable q #
is the steady-state queue length in an intree queueing network with infinite bu#ers.
This exponent is linked with exactly the same change of measure as in Chang et
al. [1994]. The probabilities p and P [q # > x] just described are closely related
to the CLR when x equals the bu#er size (it measures similar events), so it seems
reasonable to use the change of measure proposed by Chang et al. [1994] to estimate
the CLR as well. In fact, these authors applied their IS strategy to estimate the
steady-state average number of cell losses per unit of time, another quantity closely
related to the CLR, and observed large empirical variance reductions for examples
of queueing models with a single node and two nodes in series .
In this paper, we pursue these experiments by considering much larger networks,
some of them being non intree. We consider queueing networks having a large
number of nodes, fed by a large number of Markov-modulated ON/OFF sources
(up to 300 in our numerical examples). The nodes are organized in successive layers.
Each cell (or customer) goes through exactly one node of each layer, following a
path uniquely determined by its source. This type of queueing network models the
tra#c in an ATM switch [Dabrowski et al. 1999; Falkner et al. 1999; Giroux and
Ganti 1999]. We apply IS to estimate the CLR at any prespecified node of the
network, using a direct adaptation of the IS methodology of Chang et al. [1994].
The idea is to increase the tra#c to the target node by increasing the average
ON/OFF ratio for all the sources (and only those) feeding that node. The exact
change of measure is determined by a heuristic which turns out to be equivalent to
Algorithm 3.2 of Chang et al. [1994] if we neglect all the tra#c not directed towards
the target node and the corresponding sources, and if we assume that the average
arrival rate after IS does not exceed the service rate at the nodes upstream of the
target node. The latter condition is always satisfied in our examples, but if it was
not, we could simply enforce it by choosing a change of measure that gives an arrival
rate equal to the service rate at those nodes where the condition is violated, exactly
as in Algorithm 3.2 of Chang et al. [1994]. Falkner et al. [1999] have proposed an
additional heuristic, based on the concept of decoupling bandwidth introduced in
de Veciana et al. [1994], to reduce the change of measure when the e#ect of the
additional tra#c not directed towards the target node may be non-negligible.
For both small and large networks, we obtain large e#ciency improvements (em-
pirically), especially when the CLR is small. When the size of the bu#er at the
target node increases, IS seems to improve the e#ciency by a factor that increases
exponentially with the bu#er size. This is consistent with the conjecture that the
change of measure is asymptotically optimal.
It must be emphasized that this IS methodology is heuristic because (i) CLR is
not the same as p; (ii) it is still unproven if the IS estimator proposed in Chang et
al. [1994] is asymptotically optimal for p, and proving it for CLR in the presence
of finite bu#ers at all nodes appears quite messy; (iii) even if the IS estimator was
asymptotically optimal for the CLR, this would only be an asymptotic result. It
would not necessarily imply that the variance is reduced su#ciently and that the
estimator is practically acceptable for the concrete models that one is interested in.
Actual experimentation with this IS methodology, with examples similar to real-life
problems of interest, is therefore needed to see how practical the method is, and
whether or not additional heuristics can improve on it. The aim of the present
paper is to report on such experimentation.
Our examples also illustrate the fact, typical of rare-event simulation contexts,
that the variance estimators tend to be more noisy (in terms of relative error) than
the mean estimators, even after IS is applied. One must therefore be careful when
comparing empirical variances, or with the interpretation of confidence intervals.
(This important fact is typically left unmentioned in papers that report empirical
results on rare-event simulation.) The noise in the variance estimator is not caused
by the IS methodology, but is due to the fact that the cell losses are rare events. In
fact, in our examples, IS improves the variance estimator by a much larger factor
than the mean estimator. This is along the lines of the results of Sadowsky [1993],
who has shown, in the context of estimating large deviations probabilities for a
sum of independent and identically distributed random variables (a much simpler
situation than our model), that an optimal exponentially twisted change of measure
stabilizes all the moments of the estimator, in addition to being asymptotically
optimal. These results have been generalized to a more abstract setup in Sadowsky
[1996].
Beck et al. [1999] and Dabrowski et al. [1998] also study the application of IS to
a discrete-time queueing network model of an ATM switch. Their model is very
general. They obtain the asymptotics of the tail of the queue size distribution in
steady-state, and they use that to propose a change of measure for estimating the
CLR at a given node. Their IS methodology is related to (but di#erent from) that
proposed by Chang et al. [1994] (see also Section 6.3).
We have limited the generality of the model in order to avoid excessive notation
and unnecessary complexity. The model is flexible enough to illustrate how the IS
strategy behaves in several important situations. In addition to our basic model,
we also made several experiments with IS for generalizations and other variants
of this model. The proposed IS scheme works fine for some variants and fails for
others. This is summarized in Section 5.5. In particular, the fact that each source
is assigned a fixed destination (in contrast to having random routing for each cell
or group of cells) is an important factor for the success of the method.
The geometric distribution of the ON and OFF periods for the sources (implied
by the Markov-modulated model) is certainly not always realistic. Statistical analyses
of broadband tra#c traces (for Ethernet, video, etc.) indicate that the tra#c
is often long-range dependent, and that a more representative model in this case
is obtained when the sojourn-time distributions in the ON/OFF states are heavy-tailed
(e.g., Pareto-type, with infinite variance) [Beran et al. 1995; Leland et al.
1994; Willinger et al. 1995; Neame et al. 1999]. Unfortunately, the IS methodology
used in this paper relies heavily on the exponential tails of the ON/OFF
distributions, and it is not obvious how (and if) it can be adapted to heavy-tailed
distributions. On the other hand, despite the empirical evidence of the limited
representativity of the Markov-modulated model in certain contexts, there remain
supporters of this model who argue, based on experiments, that in networks with
finite bu#ers the correlations of very long range are usually not important to model
anyway [Heyman and Lakshman 1996; Krunz and Makowski 1998]. In any case,
we think that the Markov-modulated model is still an important model, and that a
better understanding of the e#ciency improvement methods proposed for it remains
worthwhile.
The model is defined is Section 2. Section 3 recalls the A-cycles method and the
batch-means method, which are used jointly to compute confidence intervals. In
Section 4 we explain how IS is applied to estimate the CLR at a given target node.
The change of measure is the same as in Chang et al. [1994] for intree networks.
However, our explanation of the heuristic di#ers from that of these authors. It is
given directly in terms of the CLR and is along the lines of the arguments given
in Heidelberger [1995], pages 62-63. Numerical results are reported in Section 5.
We also summarize our numerical experiments with several other variants of the
model. In Section 6, we consider various refinements of the basic IS scheme, and
test them empirically to see how much additional variance reduction they can bring.
We report both positive and negative results. Section 7 explains how the CLR can
be estimated in functional form, as a function of certain parameters of the model.
Section 8 gives a conclusion. A preliminary report of this work was presented in
L'Ecuyer and Champoux [1996] and additional numerical results and details can
be found in Champoux [1998].
2. THE MODEL
We consider an acyclic queueing network with 4 layers of nodes, as illustrated in Figure 1. Each node is a single-server FIFO queue with finite buffer size. The ℓ-th layer is called level ℓ and the nodes at level 4 transmit cells to destinations. Levels 2 and 3 have m_2 nodes each, while levels 1 and 4 have m_1 m_2 nodes each. Each level-2 node is fed by m_1 level-1 nodes, while each level-3 node feeds m_1 nodes at level 4. Cells (i.e., packets of information) arrive at level 1, visit one node of each level, in succession, then leave the network. Each node at level 1 is fed by several sources. These are assigned to specific destinations;
i.e., all the cells produced by a given source follow exactly the same path. The
trajectory of a cell at the switching stage between levels 2 and 3 is thus determined
by its fixed destination. The arrival sources are time-synchronized, but otherwise
independent, stochastically identical, discrete-time ON/OFF Markov modulated
processes. A source is OFF for a while, then ON for a while, then OFF again, and
so on. The source produces one cell per unit of time during an ON period, and none
during an OFF period. The durations of OFF and ON periods are independent geometric random variables with means σ_0 and σ_1, respectively, so the arrival rate of each source is λ = σ_1/(σ_0 + σ_1); σ_1 is also called the average burst size. If we denote OFF and ON by 0 and 1, respectively, these assumptions imply that the state of a source evolves as a discrete-time Markov chain with two states, 0 and 1, with transition probability matrix

R = [ 1 − 1/σ_0      1/σ_0
        1/σ_1      1 − 1/σ_1 ].   (1)
These Markov chains comprise all the stochasticity of the model; everything else is
deterministic. The arrival sources are numbered consecutively, and the nodes are numbered from 1 to 2m_2(1 + m_1), level by level. When two or more cells reach a given node simultaneously, they are placed in the queue (the buffer) by order of the number of the node or source where they come from. This deterministic ordering rule is for simplification and tends to favor the cells coming from certain sources and nodes. One could order the cells randomly instead, but that would have no major qualitative impact on our results.
Fig. 1. An ATM Switch Modeled as 4 Layers of Queues with Finite Buffer Sizes
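A minimal sketch of one such source is given below, assuming the two-state chain with transition matrix (1): the state is 0 (OFF) or 1 (ON), it is updated once per time slot, and a cell is produced whenever the state is ON. Only the two means sigma0 and sigma1 are needed; the class and parameter names are illustrative.

import random

class OnOffSource:
    """Discrete-time Markov-modulated ON/OFF source with geometric sojourn times."""
    def __init__(self, sigma0, sigma1, rng=random):
        self.p01 = 1.0 / sigma0      # P(OFF -> ON), mean OFF duration sigma0
        self.p10 = 1.0 / sigma1      # P(ON -> OFF), mean ON duration sigma1
        self.state = 0               # start OFF (as in the zero state of the model)
        self.rng = rng

    def step(self):
        """Advance one time slot; return 1 if a cell is produced, 0 otherwise."""
        if self.state == 0:
            if self.rng.random() < self.p01:
                self.state = 1
        else:
            if self.rng.random() < self.p10:
                self.state = 0
        return self.state            # one cell per slot while ON

# Example: 50 sources with sigma0/sigma1 = 100 have aggregate mean rate 50/101 cells per slot.
sources = [OnOffSource(100.0, 1.0) for _ in range(50)]
print(sum(s.step() for s in sources))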
All the nodes at level ℓ have the same buffer size B_ℓ and the same constant service time 1/c_ℓ (so c_ℓ is the service rate). Whenever a cell arrives at a node where the buffer is full, it is lost and disappears from the network. Our aim is to estimate the CLR at a given node of the network, say node q* at level ℓ*, where the CLR is defined as

ρ = lim_{τ→∞} N_L(τ) / N_T(τ),   (2)

where N_T(τ) is the total number of cells reaching node q* during the time interval (0, τ] and N_L(τ) is the number of those cells that are lost due to buffer overflow at node q*. We assume that the total arrival rate is less than the service rate at each node. That is, if the cells from m sources pass through a given node at level ℓ, then mλ < c_ℓ, and this holds for all nodes.
To simplify the discussion, we assume that each c_ℓ is an integer. Since the buffers are finite, the entire model is then a discrete-time Markov chain with finite state space. It is also aperiodic, and the zero state (the state where all sources are OFF and all the nodes are empty) is positive recurrent and is accessible from every other state. As a consequence, there exists a limiting distribution π over the state space of that chain, defined as

π(s) = lim_{τ→∞} P[S(τ) = s],

where τ is an integer and S(τ) is the state of the chain at time τ.
This model could of course be generalized in several directions and our approach
would be easy to adapt for certain types of generalizations (see also Section 5.5).
For example, the bu#er sizes and constant service times can di#er between nodes
at a given level, di#erent sources can have di#erent transition probability matrices
R, a source could produce a cell only with some probability when it is ON, and
additional sources may feed the nodes at levels higher than 1. IS still works in
these situations. We keep our simpler model to avoid burying the key ideas under
a complicated notation. On the other hand, if the destinations were determined
randomly and independently for each cell, or for each ON period at each source,
finding an e#cient way of applying IS would be more di#cult. The fixed source-destination
assignment model is reasonable if we assume that a typical connection
between a source and a destination lasts for several orders of magnitude longer than
the service times 1/c # , and much longer than the time required for bu#er overflow
in the most likely path to overflow. This assumption is commonly made in the
literature [Dabrowski et al. 1999; Falkner et al. 1999; Giroux and Ganti 1999] and
by the switch manufacturers.
In the case where # = 3 or 4 and all the nodes at level # are statistically
identical, an alternative estimator would take the global (empirical) average CLR
for all nodes at level # . With a straightforward simulation approach (no IS), this
would yield a better estimator than concentrating on a single node q # . But with
IS, according to our empirical investigations, it is much better to concentrate on a
single node and increase only the tra#c to that node. With the latter, the likelihood
ratio is less noisy and there is less (undesired) overflow at the nodes of level # .
3. A REGENERATION APPROACH FOR CONFIDENCE INTERVALS
IS is generally easier to apply to a model defined over a short time horizon or when the model's evolution can be decomposed into short regenerative cycles. Here, the model is over an infinite horizon, and to decompose its trajectory into cycles, we apply a generalization of the classical regenerative method introduced in Nicola et al. [1993] and Chang et al. [1994], called the A-cycle method. Let A be a subset of the state space of the system. Here we take A as the set of states for which the buffer at q* is empty. Let τ_0 = 0 and, for i ≥ 1, τ_i = min{τ > τ_{i−1} : the buffer at q* is empty at time τ but not at time τ − 1}. These τ_i are the successive hitting times of the set A by the Markov chain {S(τ), τ ∈ IN}. The pair {(S(τ − 1), S(τ)), τ ∈ IN} is also a positive recurrent aperiodic Markov chain over a finite state space, so it has a pointwise limit distribution π̄, say, as τ → ∞. Then the state of the chain at the hitting times τ_i has a pointwise limit distribution ν over A, obtained from π̄ by conditioning on a transition from Ā into A, where Ā is the complement of the set A.
The process over the time interval (τ_{i−1}, τ_i] is called the ith A-cycle. Let X_i be the number of cells reaching node q* during the ith A-cycle, and Y_i be the number of those X_i cells that are lost due to buffer overflow at q*. Let E_ν denote the mathematical expectation over an A-cycle when the initial state (at the beginning of the A-cycle) has distribution ν. One has:

ρ = E_ν[Y_1] / E_ν[X_1].   (3)

In the limit, as the number of A-cycles increases, the average distribution of the system states at the times τ_i approaches ν. By taking the average of the Y_i and X_i over the first n A-cycles, one obtains the consistent estimator of ρ:

ρ̂_n = (Σ_{i=1}^n Y_i) / (Σ_{i=1}^n X_i).   (4)
This estimator is biased because the initial state at time 0 is normally not generated from ν (this would be too difficult) and because it is a ratio estimator. However, the bias can be reduced by warming up the system, e.g., by running n_0 + n A-cycles and discarding the first n_0 from the statistics.
The A-cycles are asymptotically identically distributed (with probability law ν for their initial state) but they are dependent. To reduce the dependence, and
also improve the normality, one can batch the cycles, as in the usual batch means
method. One then applies the standard methodology for computing a confidence
interval for a ratio of expectations, using the batch means as observations, and
obtain a confidence interval for ρ [Law and Kelton 2000]. In our experiments, we
fix the number of cycles per batch (and not the simulation time per batch), as
explained in Section 5.1.
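The confidence-interval computation just described can be sketched as follows: the per-batch means of the Y's and X's are combined into the ratio estimate, and the variance of the ratio is estimated by the usual delta-method formula for a ratio of batch means. This is a generic implementation of the standard methodology cited above, under the stated assumptions, with the normal critical value passed in by the caller.

import math

def ratio_ci(batch_y_means, batch_x_means, z=2.576):
    """Point estimate and CI half-width for E[Y]/E[X] from b batch means."""
    b = len(batch_x_means)
    xbar = sum(batch_x_means) / b
    ybar = sum(batch_y_means) / b
    r = ybar / xbar                               # ratio (e.g., CLR) estimate
    # Delta-method variance of the ratio, from batch-to-batch variability.
    s2 = sum((y - r * x) ** 2 for y, x in zip(batch_y_means, batch_x_means)) / (b - 1)
    half_width = z * math.sqrt(s2 / b) / xbar
    return r, half_width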
4. APPLYING IMPORTANCE SAMPLING
When ρ is very small, the vast majority of the Y_i's in (4) are 0 and the relative error of ρ̂_n (i.e., its standard deviation divided by ρ) blows up. In (3), the denominator is easy to estimate, but the numerator is hard to estimate because it depends on rare events. In fact, denoting ρ_Y = E_ν[Y_1] and observing that Y_1 is a non-negative integer, one has Var_ν[Y_1] ≥ ρ_Y − ρ_Y², so the squared relative error satisfies Var_ν[Y_1]/ρ_Y² ≥ (1 − ρ_Y)/ρ_Y → ∞ as ρ_Y → 0. Following Chang et al. [1994], we will use IS for the numerator of (3) and not for the denominator.
Let S* denote the set of sources feeding q*. The IS strategy for increasing the traffic towards q* is to increase r_01 and r_11 in the matrix R, for all the sources that belong to S* and only those, so that the total long-run arrival rate at q* becomes larger than the service rate. The system starts with an empty buffer at q* (a state in A) and the change remains in effect until the buffer at q* empties again or overflows. When the buffer overflows, R is set back to its original value for all the sources until the buffer at q* empties again, which marks the end of the A-cycle. We call this an A-cycle with IS. Under this strategy, if the traffic to q* can be increased sufficiently, cell losses are no longer rare events. This can certainly be achieved if m* > c*, where m* is the cardinality of S* and c* is the service rate at the target node.
It remains to decide how to change R. For this, we proceed in a standard way [Heidelberger 1995]. For a real-valued parameter θ, define

R(θ) = [ r_00    r_01 e^θ
         r_10    r_11 e^θ ],

let ρ(θ) be the spectral radius (largest eigenvalue) of R(θ), and let (f_0(θ), f_1(θ))' be the corresponding right eigenvector (the prime means transposed), so that R(θ)(f_0(θ), f_1(θ))' = ρ(θ)(f_0(θ), f_1(θ))'. The eigenvalue ρ(θ) can be written explicitly as

ρ(θ) = [ r_00 + r_11 e^θ + sqrt((r_00 + r_11 e^θ)² − 4 e^θ (r_00 r_11 − r_01 r_10)) ] / 2.

For IS, we will change R to the stochastic matrix R̃ = R̃(θ), with entries

r̃_ij = R(θ)_ij f_j(θ) / (ρ(θ) f_i(θ)).

This formulation is quite flexible, because the mean arrival rate from a source can be set to an arbitrary value between 0 and 1 by choosing an appropriate θ, and it leads to important simplifications in the likelihood ratio over an A-cycle, as we will see.
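The sketch below builds R(θ), its largest eigenvalue ρ(θ) with a right eigenvector, and the twisted transition matrix R̃(θ) for the 2-state chain. It follows the standard exponential-twisting construction assumed in the displayed formulas above, so it should be read as an illustration of that assumption rather than as the paper's own code.

import math

def twisted_matrices(r00, r01, r10, r11, theta):
    """Return (rho, f, R_tilde) for the exponentially twisted 2-state chain."""
    e = math.exp(theta)
    a, b, c, d = r00, r01 * e, r10, r11 * e          # entries of R(theta)
    # Largest eigenvalue of the 2x2 matrix [[a, b], [c, d]].
    rho = 0.5 * (a + d + math.sqrt((a - d) ** 2 + 4.0 * b * c))
    f0, f1 = b, rho - a                               # right eigenvector (unnormalized)
    # Twisted (stochastic) matrix: r~_ij = R(theta)_ij * f_j / (rho * f_i).
    rt = [[a * f0 / (rho * f0), b * f1 / (rho * f0)],
          [c * f0 / (rho * f1), d * f1 / (rho * f1)]]
    return rho, (f0, f1), rt

# Example: a positive twist raises the probability of moving to and staying in the ON state.
r01, r10 = 1.0 / 100.0, 1.0 / 50.0
rho, f, rt = twisted_matrices(1 - r01, r01, r10, 1 - r10, 0.05)
print(rho, rt[0][1], rt[1][1])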
During a given A-cycle, let N_ij be the number of times a source in S* goes from state i to state j while using the probabilities r̃_ij, i, j = 0, 1. The total number of transitions generated from R̃ is then N_00 + N_01 + N_10 + N_11 = m* t, where t is the number of time steps where IS is on. The state transitions of the sources are assumed to occur right before the (discrete) times of cell production. The number of cells generated for q* during the time interval (0, t] is thus N_01 + N_11. By a simple counting argument, assuming that the buffer overflows at time t, one can decompose

N_01 + N_11 = B* + c* t − F_t + Q_t − Q_0 + H_t,   (6)

where B* is the buffer size at q*, Q_t is the number of cells already generated and on their way to node q* at time t, Q_0 is the corresponding number at time 0, F_t is the difference between the total capacity of service c* t of the server at q* during (0, t] and the actual number of cells served at q* during that interval of time, and H_t is the number of cells headed to q* but lost due to buffer overflow either at q* or upstream during (0, t]. In (6), B* is the number of cells filling up the buffer and c* t − F_t is the total number of cells served at q* during (0, t]. We denote Δ_t = Q_t − Q_0 − F_t + H_t, so that N_01 + N_11 = B* + c* t + Δ_t when the buffer overflows at time t.
The transition probability of a source from state i to state j is r_ij without IS and r̃_ij under IS, so each such transition under IS contributes a factor r_ij/r̃_ij to the likelihood ratio associated with this change of probabilities. Since there are N_ij transitions from i to j, the likelihood ratio becomes

L = (r_00/r̃_00)^{N_00} (r_01/r̃_01)^{N_01} (r_10/r̃_10)^{N_10} (r_11/r̃_11)^{N_11} = ρ(θ)^{m* t} e^{−θ(N_01 + N_11)} W(θ),   (7)

where W(θ) = (f_0(θ)/f_1(θ))^{N_01 − N_10}.
It is well known that if V is a random variable defined over an A-cycle with initial state that has distribution ν, then E_ν[V] = Ẽ_ν[LV], where Ẽ_ν denotes the expectation under the probabilities R̃, over an A-cycle with IS, with initial state drawn from ν. Thus, computing LY_1 over the A-cycle with IS yields an unbiased estimator of ρ_Y. One always has Ẽ_ν[L] = 1. IS is effective (roughly) if L tends to have a small variance conditional on Y_1 > 0. In other words, we would like to match the small values of L with the positive values of Y_1 (i.e., the buffer overflows) and the large values of L with the no-overflow situations.
We still have the choice of θ at our disposal. Since t is an unbounded random variable likely to have significant variance, a standard strategy in this situation (e.g., [Heidelberger 1995]) is to choose θ so that t disappears from L, i.e., take θ = θ*, the root of

m* ln ρ(θ) = θ c*.   (8)

This θ* gives the same IS strategy as that provided in Chang et al. [1994] if we consider only the sources that feed q* and if we neglect the possibility that the arrival rate exceeds the service rate at upstream nodes. The latter condition was satisfied in all the examples that we have considered, but when it is not, one can reduce θ to meet the condition exactly as in Chang et al. [1994]. Note that m* ln ρ(θ)/θ is a strictly increasing and differentiable function of θ (see, e.g., Chang et al. [1994], Example 2.6). Therefore, this θ* exists if and only if m* > c*, which we assume (otherwise, one cannot overload the node q* and overflows should be negligible anyway unless B* is near 0). With θ = θ*, the likelihood ratio becomes

L = e^{−θ*(N_01 + N_11 − c* t)} W(θ*) = e^{−θ* B*} e^{−θ* Δ_t} W(θ*),   (9)

where the second equality holds when the buffer overflows at time t. The variance of the IS estimator LY_1 of ρ_Y is σ̃²_Y = Ẽ_ν[(LY_1)²] − ρ_Y². The squared relative error of the IS estimator satisfies

σ̃²_Y / ρ_Y² = Ẽ_ν[(LY_1)²] / ρ_Y² − 1.   (10)
The ratio of expectations in (10) is ≥ 1, by the Cauchy-Schwarz inequality. Bounding this ratio by a constant independent of B* would prove that the relative error under IS is bounded, but we do not have a proof. Sadowsky [1991] was able to prove that IS based on an optimal exponential twisting found by solving an equation similar to (8) is asymptotically optimal for estimating p in a single GI/GI/m queue with finite buffer. The results of Chang et al. [1994] generalize this to dependent interarrival times, still for a single queue.
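Solving (8) numerically is straightforward because its left-hand side eventually grows faster than the right once the node can be overloaded; the sketch below brackets the positive root of g(θ) = m* ln ρ(θ) − θ c* and bisects, reusing the twisted_matrices helper from the previous sketch (an assumed helper, as before).

import math

def find_theta_star(r01, r10, m_star, c_star, tol=1e-10):
    """Positive root of m* ln(rho(theta)) = theta * c* (requires m* > c* > m* lambda)."""
    def g(theta):
        rho, _, _ = twisted_matrices(1 - r01, r01, r10, 1 - r10, theta)
        return m_star * math.log(rho) - theta * c_star
    hi = 1.0
    while g(hi) <= 0.0:            # expand until the bracket contains the positive root
        hi *= 2.0
        if hi > 1e6:
            raise ValueError("no positive root found: need m* > c*")
    lo = 0.0                       # g(0) = 0 and g < 0 just above 0 under stability
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(mid) <= 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)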
We continue the discussion with intuitive heuristic arguments suggesting that L is likely to be small when Y_1 > 0, and that the variance of LY_1 is also likely to be small. Firstly, since q* is stable without IS and since IS is stopped as soon as the buffer overflows, Y_1 is unlikely to take very large values (it should be possible to prove that its distribution has an exponentially bounded tail under IS). Concerning W(θ*), for the matrices R and the values of m*/c* that correspond to typical examples (see Section 5), one usually has f_0(θ*) < f_1(θ*). Moreover, it is unlikely that fewer sources are ON under IS than without IS, because the ON state has a higher steady-state probability under IS, so N_01 < N_10 is extremely unlikely. Therefore, it seems reasonable to expect that W(θ*) is less than 1 with large probability and has a small variance, bounded independently of B*. (In fact, W(θ*) is bounded independently of B* because |N_01 − N_10| is bounded by m*, but this bound can be very large unless m* is small.) Now it suffices to hope that the large negative values of Δ_t are rare enough to be negligible when Y_1 > 0. This hope appears reasonable for the following reasons. Since the sources are overproducing cells headed towards q* when IS is ON, and the initial state at time 0 follows approximately the limiting distribution π without IS, one should normally expect Q_t ≥ Q_0, and it should be (intuitively) extremely unlikely that Q_t is much smaller than Q_0. (Remember that Q_0 and Q_t are the total number of cells in the upstream nodes, in steady-state (roughly), without IS and under IS, respectively.) Moreover, since the production rate of the sources exceeds the capacity at q*, the server at q* is not expected to remain idle very long, so a large positive value of F_t is very unlikely. Since exp(−θ* B*) is a constant, the above arguments suggest that L should tend to be very small when Y_1 > 0. From another viewpoint, when Y_1 = 0 (no overflow), the total number of cells produced during (0, t] should be slightly less than the total service capacity c* t, because the buffer at q* is empty when IS is turned OFF. This suggests that L should tend to be larger than 1 when Y_1 = 0, which is good news in view of the fact that Ẽ_ν[L] = 1.
As a very crude heuristic, suppose we assume that e^{−θ* Δ_t} Y_1 W(θ*) in (9) is bounded by a constant K_1 independent of B*. This would give

σ̃²_Y ≤ K_1 e^{−θ* B*} ρ_Y,   (11)

in which case IS would provide an approximate variance reduction factor of order e^{θ* B*}/K_1. Again, these arguments are only hand-waving heuristics and the real test is to see how the variance is actually reduced on concrete examples. One may be tempted to modify the IS scheme adaptively (e.g., by stopping IS earlier or later) in order to reduce the variability of the quantity e^{−θ* Δ_t} Y_1 W(θ*). We will return to this in Section 6.
This discussion assumed implicitly that m* is not too large (e.g., for bounding W(θ*)). One may expect the method to eventually break down as m* → ∞ for fixed B*, in view of the fact that the asymptotic overflow frequency for fixed B* as m* → ∞ generally differs from that when B* → ∞ (e.g., [Shwartz and Weiss 1995]). Courcoubetis and Weber [1996] compare these asymptotics and give numerical examples, for fluid source models, where the asymptotic based on large m* gives a better approximation than the one based on large B* to the steady-state probability that the queue exceeds B*. (The asymptotic based on large B* overestimated the probability.) This suggests that the change of measure that we use might perform poorly when m* is large and B* is small. On the other hand, the large m* asymptotic converges to the large B* asymptotic as B* → ∞ for their model. Empirically, we tried examples with m* up to 50 and the method still worked fine. We also observed that taking θ slightly less than θ* in our experiments tended to improve the efficiency when m* is large (see Section 6.1).
What about the variance of the variance estimator, with and without IS? They can be compared by comparing the fourth moments of the two estimators. Using the same crude argument as in (11) above, one can conjecture that the fourth moment under IS is smaller by a factor that grows exponentially with B*, for some constant K_2 independent of B*. We expect not only the estimator itself to be less noisy with IS than without, but also its sample variance to be less noisy, and by a larger factor (at least if B* is large).
We now explain how the A-cycles are simulated to estimate both the numerator
and the denominator in (3), in the IS case. One simulates two versions of each A-
cycle, one with IS and the other without, both starting from the same initial state.
Thus, the A-cycles come in pairs. For the ith A-cycle pair, one first simulates an
A-cycle with IS, which provides an estimation L i Y i of the numerator, where L i and
Y i are the value of the L and the number of cell losses for this cycle. Then, the
state of the system is reset to what it was at the beginning of this A-cycle with IS,
and a second A-cycle is simulated to obtain an estimator X i of the denominator.
The final state of the no-IS A-cycle, which obeys approximately the distribution ν, is then saved and is taken as the initial state for the next pair of A-cycles. After a warmup of n_0 cycles without IS, n pairs of A-cycles are thus simulated and the IS estimator of ρ is

ρ̂ = (Σ_{i=1}^n L_i Y_i) / (Σ_{i=1}^n X_i).
A confidence interval is computed using batch means as explained in Section 3.
Starting the two A-cycles of each pair from the same state means that we must
save or reset the entire state of the system after each cycle. This means copying
how many cells are at each node of the network, the destinations of these cells,
and the state (ON or OFF) of each source. We also memorize/reset the state of
each random number generator, so that the two A-cycles of a pair use common
random numbers. This tends to increase the correlation between L_iY_i and X_i, and to decrease the variance of ρ̂ as a result.
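The overall estimation loop can be sketched as follows: for each pair, an A-cycle is run with the twisted source probabilities to accumulate L_i Y_i, the state is restored and a plain A-cycle accumulates X_i, and the final state of the plain cycle seeds the next pair. The cycle-simulation routines themselves are left abstract; they are assumptions standing in for the full network simulator, which is also responsible for the state copies and the common random numbers.

def estimate_clr_with_is(initial_state, n_warmup, n_pairs, run_cycle_no_is, run_cycle_is):
    """Paired A-cycle estimator: sum(L_i * Y_i) / sum(X_i).

    run_cycle_no_is(state) -> (X, end_state)   # cells reaching q*, final state
    run_cycle_is(state)    -> (L, Y)           # likelihood ratio, cells lost at q*
    Both are assumed to start from a saved copy of 'state'."""
    state = initial_state
    for _ in range(n_warmup):                  # warm-up: no-IS cycles only
        _, state = run_cycle_no_is(state)
    num = den = 0.0
    for _ in range(n_pairs):
        L, Y = run_cycle_is(state)             # IS cycle from the saved state
        X, state = run_cycle_no_is(state)      # plain cycle from the same state
        num += L * Y
        den += X
    return num / den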
5. SIMULATION EXPERIMENTS
5.1 The Setup
For several examples and parameter sets, we ran the simulation first using the standard approach without IS, for C A-cycles, and then with IS for C' pairs of A-cycles. In each case, the values of C and C' were chosen so that the total CPU time was about the same for both IS and no-IS, and these A-cycles were regrouped into b batches. (For sensitivity analysis with respect to b, we tried different values of b ranging from 50 to 3200, for several examples, and found that the variance estimates were practically independent of b, in that range, for the values of C and C' that we use). For j = 1, . . . , b, let X̄_j and Ȳ_j denote the sample means of the X_i and Y_i (or of the X_i and L_iY_i, for IS), respectively, within batch j.
The tables that follow report the value of the CLR estimator ρ̂ and of its variance estimator σ̂² = (S²_Y − 2ρ̂ S_XY + ρ̂² S²_X)/(b X̄²), where X̄ and Ȳ are the sample means, S²_X and S²_Y the sample variances, and S_XY the sample covariance of the X̄_j and Ȳ_j, respectively. The tables also report the relative half-width 2.57 σ̂/ρ̂ of a 99% confidence interval on ρ (under the normality assumption), the CPU time TCPU (in seconds) required to perform the simulation, and the relative efficiency (eff.), defined as ρ̂²/(σ̂² TCPU). These values are all noisy estimates but give a good indication of what happens.
For the cases where no cell loss was observed in all the A-cycles simulated, we put ρ̂ = 0 and the entries for the variance and efficiency are left blank. The
simulation with IS takes more CPU time than no-IS for the same total number of
simulated cells, but the relative e#ciency takes both the variance reduction and
the overhead into account. Beware: Efficiencies and CPU times can be compared within a given table, but not across the tables, because the models are different and the experiments were run on different machines (SUN SparcStations 4, 5, and 20). Within each table, common random numbers were used for the corresponding A-cycles across the different lines of the table.
5.2 CLR Estimation at Level 2
Example 1. In this example we fix all the model parameters except the buffer size B_2 at the target node, which we vary. There are 50 sources feeding the target node q* (i.e., m* = 50), so the average arrival rate at q* is 50/101 ≈ 0.495, while the service rate is 3. With these numbers, we compute θ*; the corresponding change of measure increases the total arrival rate at q* from 0.495 to 14.48.
We chose the numbers of cycles accordingly (note that the IS cycles are much longer than the no-IS cycles on average, and their average length increases with B_2: most of them fill up the buffer before emptying it again, whereas for most of the no-IS cycles the buffer empties after just a few cell arrivals). Table 1 gives the results. Without IS, for the larger buffer sizes not a single cell loss was observed, so the estimates are useless. On the other hand, the relative error of the IS estimators does not increase significantly as a function of B_2, and these estimators can estimate
very small CLRs. The efficiency decreases slowly with B 2 . (The outlier at
B 2 = 768 will be discussed later on.)
Example 2. Same as the preceding example, except that B 2 is now fixed at 512
and we vary the average burst size β 1 . For large β 1 , the CLR is large and easy to estimate,
but not for small β 1 (the other parameters remaining the same). The results are in
Table 2. Without IS, cell losses were observed only for β 1 = 100 and above, and even in that
case IS is more efficient. The total arrival rate with IS decreases with β 1 ,
from 22.5 at the smallest burst size considered down to a smaller value at β 1 = 150.
The squared relative error with IS (not shown in the table) is approximately constant
as a function of β 1 .
Table 1. CLR estimation at level 2 for different buffer sizes
B 2     CLR est.   variance   rel. half-width   TCPU (s)   eff.
no-IS
128     2.8E-5     2.5E-11    45%               2828       0.0113
IS
128     3.0E-5     6.3E-13    7%                1675       0.838
512     2.5E-9     5.4E-20    24%               2593       0.043
768     3.7E-11    5.9E-22    170%              3108       0.001
Table 2. CLR estimation at level 2 for different average burst sizes
β 1     CLR est.   variance   rel. half-width   TCPU (s)   eff.
no-IS
IS
50      2.5E-9     5.4E-20    24%               2593       0.043
100     7.2E-5     3.2E-12    6%                2445       0.659
An important question now arises: How noisy are the variance and efficiency
estimates given in the tables? One way of estimating the distribution of the variance
and efficiency estimators is to bootstrap from the b batch means, as follows. Put
the b pairs (X̄ j , Ȳ j ) in a table. Draw b random pairs from that table,
with replacement, and compute the variance and efficiency estimates that correspond to this
sample of size b. Repeat this N times and compute the empirical distributions of
the N values of the variance and efficiency estimators thus obtained. These empirical distributions are
bootstrap estimators of the distributions of those estimators, and the interval between
the 2.5th and 97.5th percentiles of the empirical distribution is a 95% bootstrap
confidence interval for the variance of the CLR estimator or for the efficiency. Table 3 gives the xth
percentiles Q x of the bootstrap distributions obtained from the results of Example 1,
for x = 2.5, 50, and 97.5.
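A small sketch of this bootstrap, reusing the (assumed) delta-method variance formula from the previous sketch; the resampling of the b pairs of batch means follows the description above:

```python
import numpy as np

rng = np.random.default_rng(12345)

def bootstrap_var_eff(Xb, Yb, cpu_time, n_boot=1000):
    """Bootstrap the variance and efficiency estimators from b batch means.

    Xb, Yb : arrays of the b batch means (denominator / weighted losses).
    Returns the (2.5, 50, 97.5) percentiles of the bootstrapped variance
    and efficiency estimates.
    """
    Xb = np.asarray(Xb, dtype=float)
    Yb = np.asarray(Yb, dtype=float)
    b = len(Xb)
    var_samples, eff_samples = [], []
    for _ in range(n_boot):
        idx = rng.integers(0, b, size=b)          # draw b pairs with replacement
        xs, ys = Xb[idx], Yb[idx]
        mu = ys.mean() / xs.mean()
        s_yy, s_xx = ys.var(ddof=1), xs.var(ddof=1)
        s_xy = np.cov(xs, ys, ddof=1)[0, 1]
        var = (s_yy - 2 * mu * s_xy + mu**2 * s_xx) / (b * xs.mean()**2)
        if var <= 0.0:
            continue                              # skip degenerate resamples
        var_samples.append(var)
        eff_samples.append(mu**2 / (var * cpu_time))
    q = [2.5, 50, 97.5]
    return np.percentile(var_samples, q), np.percentile(eff_samples, q)
```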
Table 3. Bootstrap quantile estimates for Example 1
        variance estimator                     efficiency
B 2     Q 2.5     Q 50      Q 97.5             Q 2.5    Q 50     Q 97.5
128     4.3E-13   6.2E-13   8.8E-13            0.23     0.31     0.42
768     3.6E-25   5.9E-22   1.8E-21            2.4E-4   3.7E-4   3.7E-3
We already pointed out the very low empirical efficiency of the IS estimator with
B 2 = 768 in Table 1. A closer look at the 200 batch means Ȳ j shows that one
of the Ȳ j in that case is several orders of magnitude larger than the rest; all others are less than 10 -9 , except
one which is somewhat larger. It seems that a rare event has happened within that
particular batch. We did not observe such outliers for the other values of B 2 , but
we found some in other examples, although rarely as excessive. The presence of
these outliers indicates that there remains an important tail in the distribution of
the Ȳ j with IS, despite the large reduction in the variance of the CLR estimator. This outlier has an
important effect not only on the variance and efficiency estimators, but also on
the bootstrap distributions, as can be seen from Table 3 (compare the behavior
of the quantiles for B 2 = 768 with those for the other values of B 2 ). To assess
the effect of the outlier, we repeated the bootstrap after removing it from the
sample (i.e., with the 199 remaining pairs), and obtained much smaller variance
quantiles (one of the reported values being 4.5E-24). The effect is significant.
The numbers suggest that for B 2 = 768 the variance is highly overestimated, that
the efficiency is underestimated, and that the bootstrap distribution is more widely
spread than the true distribution. To confirm these suspicions, we made 5 additional
replications of the entire experiment, independently, with IS. The
results, in Table 4, give an idea of the variability. Table 5 provides similar results
for B 2 = 512. One can see that the efficiency estimator is (unfortunately) noisy. On
the other hand, the CLR estimator is (fortunately) much less noisy, and this is reassuring. Another
reassuring empirical observation is that the distribution of the variance estimator is
skewed towards conservatism, in the sense that we observed large overestimation of
the variance (like here) from time to time, but did not observe large underestimation
of the variance. Moreover, the variance estimators tended to be more noisy when
the variance itself was larger. Often, in practice, only crude estimates of the CLR are sought (e.g., up
to a factor of 10 or so), so these variance estimators may suffice. Otherwise, one
must increase the sample size. We warn the reader that in a situation where the
change of measure is not very effective, the CLR estimator might be noisy as well. This is in fact
what happens without IS. With IS, however, this never happened in our examples.
Table 4. Five additional independent replications for B 2 = 768
CLR est.   variance   rel. half-width   TCPU (s)   eff.
1.2E-11    1.1E-24    22%               3132       0.044
1.2E-11    2.1E-24    30%               3100       0.023
1.1E-11    8.0E-25    21%               3108       0.048
1.1E-11    1.8E-24    31%               3119       0.022
9.5E-12    2.9E-25    14%               3100       0.101
Table 5. Five additional independent replications for B 2 = 512
CLR est.   variance   rel. half-width   TCPU (s)   eff.
3.1E-9     1.8E-19    35%               2592       0.020
2.5E-9     2.0E-20    15%               2587       0.115
2.4E-9     1.0E-20    11%               2588       0.223
5.3 CLR Estimation at Level 3
Example 3. The target node q* is now at level 3 and we vary the buffer size B 3
at that node. We assign 6 of the 60 sources to q* . One node at level 2 is fed by 2 of these 6 hot
sources, while no other node at levels 1 and 2 is fed by more than 1 of them. Here,
the total arrival rate at q* is 6/21 ≈ 0.286 without IS and 5.0 with IS.
The results appear in Table 6. IS works nicely while no-IS observes no cell
loss except at the smallest buffer size. With IS, the relative error and the relative
efficiency are almost constant with respect to B 3 .
Example 4. Same as the preceding example, except that B 3 is fixed at 256
and we vary the average burst size β 1 . Table 7 gives the results. While no-IS has
difficulty observing cell losses, IS gives reasonable estimates.
Table 6. CLR estimation at level 3 for different buffer sizes
B 3     CLR est.   variance   rel. half-width   TCPU (s)   eff.
no-IS
128     2.4E-5     1.1E-10    112%              7036       0.002
IS
128     4.1E-5     5.3E-12    14%               5779       0.056
768     2.5E-13    1.7E-28    13%               12930      0.029
Table 7. CLR estimation at level 3 for different average burst sizes
β 1     CLR est.   variance   rel. half-width   TCPU (s)   eff.
no-IS
100     2.1E-5     2.1E-10    178%              1134       0.007
IS
50      6.0E-7     7.2E-16    11%               1042       0.48
100     4.1E-5     6.7E-12    16%               881        0.28
5.4 CLR Estimation at Level 4
Example 5. The target node q* is now at level 4 and we vary the buffer size B 4
at that node. We assign 6 of the 300 sources to q* . They are distributed as in Example 3. Here,
the total arrival rate at q* is 6/41 ≈ 0.146 without IS and 3.692 with IS. The
results are in Table 8 and they resemble what was observed at level 3 (the efficiencies
are smaller because we have a larger network and more sources to simulate). For this
example, we also varied the average burst size β 1 with the buffer size fixed at 512, and the results were qualitatively
similar to those of Table 7.
Table 8. CLR estimation at level 4 for different buffer sizes
B 4     CLR est.   variance   rel. half-width   TCPU (s)   eff.
no-IS
128     1.6E-3     7.5E-8     44%               3580       0.004
IS
128     1.1E-3     4.0E-9     15%               1881       0.15
5.5 Other Variants of the Model
We made several experiments with variants of the model to explore the effectiveness
of the proposed IS strategy in other (sometimes more realistic) situations.
The original model is called variant A. For variant B, the sources are no longer
assigned to fixed destinations; instead, the destination of each cell is chosen randomly,
independently of other cells, uniformly over all destinations. Variant C is similar
except that each burst (i.e., all the cells from a source during a given ON period) has
a random destination. The IS approach of Section 4 did very badly for variant B,
and gave an improvement for variant C only when the CLR was very small. An appropriate IS
strategy for these models should also change the probabilities over the destinations
to increase the traffic towards q* . Variants B and C are not very realistic for ATM
switches, but the next two variants have higher practical interest.
In variant D, each node at level 3 has k buffers, the first one receiving the cells
originating from the first group of sources, the second one taking those from the
second group, and so on. A server at level 3 takes cells
from those buffers according to either a round-robin or longest-queue-first policy.
In variant E, the sources produce two classes of cells: high-priority constant bit
rate (CBR) cells and low-priority variable bit rate (VBR) cells. The VBR sources
are Markov modulated as before, whereas the CBR sources have constant ON and
OFF periods (they are completely deterministic). Each node has two buffers, one
for the CBR cells and one for the VBR cells, and the CBR cells are always served
before the VBR ones.
The IS strategy of Section 4 works fine for variants D and E: It provides
reasonable estimates for values of the CLR that standard simulation cannot handle. We
also observed in our empirical results that the longest-queue-first policy gives a
CLR generally smaller than round robin.
6. REFINING THE IMPORTANCE SAMPLING SCHEME
6.1 Optimizing θ
The IS approach of Section 4 provides a good change of measure, but it is based only
on a heuristic and asymptotic argument, so it does not necessarily give the optimal value of θ
for a given buffer size. Moreover, when choosing θ, the approach does not take
into account the computational costs, which may depend on θ. To evaluate the
sensitivity with respect to θ, we performed additional experiments where θ was
varied around θ* , and the variance and efficiency were estimated. As a general
rule, for the class of examples examined, we found that the optimal θ was around
20% to 25% less than θ* , and increased the efficiency by a factor of 2 or more
compared with θ* , at level 2 or 3 where m* is typically large. At level 1 or 4, where
m* is usually small, the optimal θ tends to be much closer to (and not significantly
better than) θ* . We emphasize that there is noise in these estimated factors, due to
the variance of the efficiency estimators. However, the tendency persisted when we
replicated the experiments. Moreover, the θ that gives the best efficiency also tends
to reduce significantly the variance of the variance estimator. These improvements
are important, so it seems reasonable to use, e.g., a θ about 20% smaller than θ* at
levels 2 and 3, for networks similar to those that we have considered here, and
perhaps try to optimize θ adaptively in a small neighborhood around that value,
during the simulation. Taking a smaller θ is more conservative, in the sense that
it produces a smaller change of measure. Using θ larger than θ* , which is more aggressive,
is very dangerous because the variance increases very fast with θ in that area, and
may even become infinite for finite θ. The next examples illustrate typical behavior
at levels 3 and 4.
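One simple way to act on this recommendation is a pilot-run search over a few candidate values of θ below θ* , keeping the value with the best empirical efficiency. The sketch below assumes a user-supplied routine run_pilot(theta) that simulates a modest number of A-cycle pairs under IS with twisting parameter theta and returns a point estimate, a variance estimate, and the CPU time; the routine name and the candidate grid are illustrative, not from the paper:

```python
def pick_theta(run_pilot, theta_star, fractions=(0.70, 0.75, 0.80, 0.85, 1.00)):
    """Crude pilot-run search for a good twisting parameter near theta*.

    run_pilot(theta) -> (clr_estimate, variance_estimate, cpu_seconds)
    Efficiency is taken as estimate^2 / (variance * cpu_time).
    """
    best_theta, best_eff = None, float("-inf")
    for frac in fractions:
        theta = frac * theta_star
        mu, var, cpu = run_pilot(theta)
        if var <= 0.0 or cpu <= 0.0:
            continue  # degenerate pilot run; skip this candidate
        eff = mu * mu / (var * cpu)
        if eff > best_eff:
            best_theta, best_eff = theta, eff
    return best_theta, best_eff
```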
Example 6. Each source has average arrival rate 1/21, and the node q* (at level 3) is fed by 6 sources, whose
traffic passes through the network as in Example 3.
The results for different values of θ around θ* are in Table 9. Taking a θ about 20% below θ*
improves the empirical efficiency by a factor of approximately 25 compared with θ* .
By examining the data more closely, we found that the efficiency improves because
the smaller θ gives a smaller value of the sample variance of the batch means of the L i Y i ,
which is the dominant term in the variance estimator.
Further replications showed similar results, with the smaller θ registering efficiencies
15 to 60 times higher than θ* . The variance estimator was also much less noisy at
the smaller θ (where the relative half-width was always between 3.6% and 4.2%) than at θ* .
Table 9. Comparing different values of θ (Example 6)
Example 7. The target node q* is now at level 4 and only 2 sources feed it.
Both sources feed the same node at level 3, but different nodes at levels 1 and 2.
In this case, θ* = 0.0394, and we found empirically that taking
θ smaller than θ* brings no significant efficiency improvement. We made similar experiments
with exactly the same data as in Example 5, with a smaller θ, and observed an
efficiency improvement by a factor between 1.5 and 2.
6.2 Defining the A-Cycles Differently
Instead of starting the A-cycles when the buffer at q* becomes empty, one can start
them when the number of cells in the buffer crosses some fixed integer threshold upward.
There is essentially nothing to gain in that direction, however, because
when increasing that threshold the no-IS A-cycles tend to become excessively long (typically,
the buffer at q* remains nearly empty most of the time).
Another idea is to impose a lower bound, say t 0 , on the length of the A-cycles, to
get rid of the extremely short (and wasteful) A-cycles which tend to occur frequently
under both the IS and no-IS setups. The A-cycle then ends at the maximum of
t 0 and the first time when node q* becomes empty. How should t 0 be chosen? We want to
choose it large enough to make sure that most A-cycles under IS see some overflow,
but not too large, so that the A-cycles end at the first return to the empty state
after overflow. According to our arguments in Section 4, if overflow occurs at
time t 1 , then the total production by the twisted sources up to time t 1 should be
approximately equal to the number of cells required to keep the server busy until
t 1 and fill up the buffer at node q* ; that is, m* r̄ t 1 ≈ c* t 1 + B* , where r̄ is the
average production rate of a twisted source, c* is the service rate at q* , and B* is the
buffer size at q* , which gives t 1 ≈ B* / (m* r̄ − c* ). The additional time t 2 to empty the
buffer (with IS turned off) should satisfy c* t 2 ≈ B* + λ 0 t 2 , where λ 0 is the total
arrival rate without IS, i.e., t 2 ≈ B* / (c* − λ 0 ). We want (roughly) t 0 at most t 1 + t 2 .
We suggest taking t 0 somewhere between 20% and 50% of the value of that upper
bound. In our experiments, this always gave an efficiency improvement. Since the
variance associated with the IS cycles is the dominant term in the variance of the CLR estimator,
a good strategy is to choose t 0 just large enough so that most of the IS cycles fill
up the buffer. Taking t 0 too large (close to t 1 + t 2 ) is not a good idea because it
makes us spend too much time on the no-IS cycles without bringing much additional
variance reduction. Beyond a certain point, increasing t 0 eventually decreases the
efficiency.
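The heuristic above reduces to a couple of closed-form expressions; the following small helper (a hypothetical function name) computes t 1 , t 2 , and a candidate t 0 from the buffer size, the arrival rates with and without IS, and the service rate, and reproduces the t 1 ≈ 95, t 2 ≈ 300 figures quoted in Example 8:

```python
def a_cycle_lower_bound(buffer_size, arrival_rate_is, arrival_rate_no_is,
                        service_rate, fraction=0.3):
    """Heuristic lower bound t0 on the A-cycle length (Section 6.2).

    t1: approximate time for the twisted sources to fill the buffer.
    t2: approximate time to empty it once IS is turned off.
    t0 is taken as a fraction (20%-50% suggested) of t1 + t2.
    """
    t1 = buffer_size / (arrival_rate_is - service_rate)
    t2 = buffer_size / (service_rate - arrival_rate_no_is)
    return fraction * (t1 + t2), t1, t2

# Figures of Example 8: buffer 256, IS arrival rate 3.692, nominal rate 6/41,
# service rate 1  ->  t1 ~ 95, t2 ~ 300.
t0, t1, t2 = a_cycle_lower_bound(256, 3.692, 6 / 41, 1.0)
```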
Example 8. We used the same data as in Example 5 (for a buffer size of 256),
with IS. For θ* , we have t 1 ≈ 95 and t 2 ≈ 300. For the smaller θ,
we have a total arrival rate of 1.82 with IS, which gives t 1 ≈ 312 and t 2 ≈ 300.
Table 10 gives the results. With θ* , raising t 0 from 0 to 75 increases
the (empirical) efficiency approximately by a factor of 4. With the smaller θ, raising
t 0 from 0 to 150 improves the (empirical) efficiency by a factor of more than 10.
This gain is related to the rapid increase of X̄ with t 0 , which decreases the variance
of the CLR estimator; the X i are very small when t 0 is small. We made 2 additional replications of this experiment and the
results were similar, although the empirical efficiency for θ* and small t 0 varied a lot,
which suggests that the factor of efficiency improvement from that
setup to the best one is more around 20 to 30 instead of 10. The variance
estimator is also (empirically) much less noisy with the larger values of t 0 .
Table 10. Imposing lower bounds on the A-cycle lengths, for Example 8
t 0    CLR est.   variance   rel. half-width   X̄      Ȳ      TCPU (s)   eff.
50     4.8E-5     8.8E-13    5.0%              8.1    1.25   7089       0.37
100    4.8E-5     5.0E-13    3.8%              15.8   2.43   10523      0.43
200    4.9E-5     7.6E-13    4.5%              30.7   3.25   13890      0.23
50     4.8E-5     6.6E-13    4.4%              8.1    1.24   4646       0.75
100    4.8E-5     3.0E-13    2.9%              15.7   2.42   7886       0.98
200    4.8E-5     2.5E-13    2.6%              30.7   3.24   12645      0.75
Example 9. Six sources feed the target node q* , as in Example 3, which gives
an average arrival rate of 6/21 ≈ 0.286 to that node. We ran simulations for different
values of t 0 , both with the twisted rates r̃ ij associated with θ* (a total
arrival rate of 5.00, t 1 ≈ 85, and t 2 ≈ 150)
and with a smaller θ (an average twisted source rate of about 0.54, a total arrival rate of 3.26, t 1 ≈ 200,
and t 2 ≈ 150). Using the smaller θ together with a suitably chosen t 0 gave the best empirical
efficiency in this case, about 20 times the empirical efficiency observed with t 0 = 0
and θ* .
6.3 Stopping IS Earlier
Suppose that the target node is not at the first level and that we use IS. When the target buffer at q* overflows and
IS is turned off, there may be several cells already in the network at previous levels,
and this may produce more overflow than necessary. Because of that, it could make
sense to turn off IS earlier, e.g., when the total number of cells in the buffer at q* or at
previous nodes but on their way to q* reaches some threshold N 0 . Beck et al. [1999]
and Dabrowski et al. [1998] use a criterion of this type for turning off IS. Our
experiments with this idea showed no significant improvement compared with the
method which turns off IS when q* overflows. With N 0 smaller than the buffer size, it seems to
reduce the efficiency instead. Here is a typical illustration.
Example 10. Two sources feed the node q* ,
which gives a small arrival rate at q* . When IS is applied the arrival rate
increases to 1.5887. These 2 hot sources feed different nodes at level 2. In Table 11,
Ȳ is the average number of cell losses per cycle with IS, and the case without a finite N 0 corresponds
to turning off IS when q* overflows. Taking N 0 between 520 and 600 appears to be
about as good as our usual method, but N 0 of 510 or less is definitely worse.
Table 11. Different stopping criteria for IS
500 4.72E-9 6.7E-18 19.8% 1.51 0.188 3308 1.1
6.4 Retroactive Manipulations to Control the Overflow
The criterion for turning off IS earlier, considered in the previous subsection, is
rather blind. Remember that all the randomness in our model is in the state
transitions of the sources. It is therefore possible, in principle, to compute at any
given point in time whether or not there will be overflow at q* caused only by the
cells generated so far, and to turn off IS as soon as this happens (this is a stopping
time with respect to the filtration generated by the trajectory of S(·)). In this way,
IS is turned off before the target buffer fills up, but only when overflow is guaranteed
to occur. In practice, this can be implemented by actually running the simulation
until there is overflow, and then turning off IS retroactively right after the time τ
at which all the cells that have reached q* by the moment when the first cell overflows
were already produced by a source. (This τ remains a stopping time in our
model; retroaction would be used only to avoid too much computation at each time
step.) This is complicated to implement and implies significant overhead. Despite
spending a lot of time experimenting with this idea, we were unsuccessful in
improving the efficiency with it.
6.5 Combining IS with Indirect Estimation
Srikant and Whitt [1999] proposed the following indirect estimator of the CLR.
(This approach was presented by Ward Whitt during the keynote address of the
1997 Winter Simulation Conference.) The CLR at node q* satisfies
CLR = 1 − (output rate) / λ = 1 − ρ c* / λ,    (13)
where λ is the total (average) production rate of the m 0 sources
feeding node q* , the output rate is the (average) rate at which cells leave node q* , 1/c* is the service
time at node q* , and ρ is the steady-state fraction of time where the server is busy
at node q* . The second equality follows from the conservation equation stating that the output rate equals ρ c* .
Using (13), the CLR can be estimated indirectly by estimating ρ. Srikant and Whitt [1999]
showed that the indirect estimator brings substantial variance reduction in heavy
traffic situations, especially for queues with several servers and random service
times, but not in light traffic. In our context, the traffic at q* is light, but becomes
heavy when IS is applied, so it was not clear to us a priori whether the indirect estimator
combined with IS could help.
The results of our extensive numerical experiments can be summarized as follows.
For a single queue with several servers, without IS, the indirect estimator reduces
the variance by large factors when the total arrival rate exceeds the service capacity,
and increases the variance by large factors when the total arrival rate is much less
than the service capacity. This is true even for constant service times and single-server
queues, but fewer servers or less variability in the service times favor the
direct estimator. A larger buffer at q* tends to accentuate the factor of variance
reduction or variance increase. When the indirect estimator was combined with IS,
we observed a variance increase instead of a variance reduction, even if the total
arrival rate after IS was larger than the service rate. An intuitive explanation seems
to be that because IS is turned off as soon as the buffer overflows, the condition
favoring the indirect estimator (sustained overloading at q* ) does not hold for a
large enough fraction of the time.
7. FUNCTIONAL ESTIMATION
So far we have considered the problem of estimating the CLR for fixed values of the
model parameters. But in real life one is often interested in a wide range of values
of the r ij 's and of the buffer sizes. We now examine how the CLR can be estimated
in functional form, as a function of the matrix R, from a single simulation, and also
as a function of B* by re-using certain portions of the simulation.
Let R and R̃ be as before, where R̃ is the twisted version of R determined as in
Section 4, but suppose that we now want to estimate the CLR for R replaced
by another matrix R', for several R' in some neighborhood of R, by simulating pairs of A-cycles
with R̃ and R only. This can be achieved as follows. One simulates pairs of A-
cycles and computes X i , Y i , and the likelihood ratio L i for each pair just as before.
Afterwards, the estimators L i Y i and X i of the numerator and the denominator are
multiplied by the likelihood ratios
∏_{k,l} (r'_{kl} / r_{kl})^{N'_{kl}}   and   ∏_{k,l} (r'_{kl} / r_{kl})^{N''_{kl}},
respectively, where N'_{kl} and N''_{kl} are the total number of transitions of the sources
from state k to state l during the A-cycle with IS and without IS, respectively. The
functional estimator of the CLR at R' is then the ratio of the sum of the reweighted
L i Y i to the sum of the reweighted X i over all the pairs,
and it can be evaluated a posteriori for as many different matrices R' as desired, as
long as R' is not too far away from R. The additional overhead during the simulation
amounts only to storing the values of the N'_{kl} and N''_{kl} , together with those of X i and
Y i , for all the pairs of A-cycles. This type of functional estimator based on a
likelihood ratio is discussed in a more general context by Rubinstein and Shapiro
[1993] and L'Ecuyer [1993], for example.
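A sketch of this reweighting, with hypothetical field names for the stored per-pair data (the transition counts and the original estimators); the log-domain accumulation is only a numerical precaution, not part of the method description:

```python
import math

def functional_clr(pairs, r_base, r_new):
    """Functional (likelihood-ratio) CLR estimator for a new transition
    matrix r_new, reusing A-cycle pairs simulated under the base matrix
    r_base (no-IS cycle) and its twisted version (IS cycle).

    pairs: iterable of dicts with keys
        'lr_y'   : L_i * Y_i from the IS cycle,
        'x'      : X_i from the no-IS cycle,
        'n_is'   : {(k, l): transition count} in the IS cycle,
        'n_nois' : {(k, l): transition count} in the no-IS cycle.
    r_base, r_new: {(k, l): transition probability}.
    """
    def reweight(counts):
        # product over (k, l) of (r'_kl / r_kl) ** N_kl, computed in log space
        logw = sum(n * math.log(r_new[kl] / r_base[kl]) for kl, n in counts.items())
        return math.exp(logw)

    num = sum(p["lr_y"] * reweight(p["n_is"]) for p in pairs)
    den = sum(p["x"] * reweight(p["n_nois"]) for p in pairs)
    return num / den
```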
Example 11. We give an example of functional estimation at level 4. We assign
the hot sources to the node q* as before, take the base matrix
R, run the simulation as usual, and then compute two functional estimators.
For the first one, β 1 is fixed and the CLR is estimated as a function of β 0 , whereas for
the second one, β 0 is fixed and the CLR is estimated as a function of β 1 .
Table 12 gives a partial view of the results for the first estimator.
The relative half-widths of the pointwise 99% confidence intervals remain reasonable
for a good range of values of β 0 and β 1 . If one is interested in a wider region,
that region can be partitioned into a few subintervals and a different R̃ can be used
for each subinterval.
For the estimation of the CLR as a function of B* , one cannot use the likelihood ratio
approach, because B* is not a parameter of a probability distribution in the
model. However, observe that when an A-cycle is simulated, the sample path of
the system is independent of B* as long as there is no overflow at q* . Therefore,
when estimating the CLR for several large values of B* , the initial part of the simulation
(until overflow occurs) does not have to be repeated for each value. One can start
with a single simulation (or sample path) and create a new subpath (or branch)
each time the number of cells at q* exceeds one of the buffer sizes of interest. If
one is interested in N distinct values of B* , one eventually ends up with N parallel
simulations, but a lot of work is saved by starting these parallel simulations only
when needed. This type of approach is studied in more generality in L'Ecuyer and
Vázquez-Abad [1997]. In our experiments with this method, the savings in CPU
time were typically around 50%.
Table 12. Functional estimation at level 4, for fixed β 1
β 0    CLR est.   variance   rel. half-width
500    2.4E-6     6.5E-14    27%
588    9.8E-7     4.0E-15    17%
714    3.5E-7     1.9E-16    10%
800    2.0E-7     4.3E-17    7%
909    1.1E-7     1.0E-17    7%
The development of Section 4 suggests an approximately linear relationship between
the logarithm of the CLR and B* , at least asymptotically. Our empirical experiments confirm that
this linear model fits very well for large enough B* . We can therefore
recommend, for estimating the CLR as a function of B* , to perform simulations at 4
or 5 large values of B* only, and to fit a linear model to the observations (B* , log of the CLR estimate) by
least squares regression.
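A minimal sketch of that regression, fitting log(CLR) against the buffer size by least squares and returning a predictor for other buffer sizes; the usage line plugs in the IS point estimates of Table 1 purely as an illustration:

```python
import numpy as np

def fit_log_linear_clr(buffer_sizes, clr_estimates):
    """Fit log(CLR) = a + b * B by least squares and return a predictor."""
    B = np.asarray(buffer_sizes, dtype=float)
    y = np.log(np.asarray(clr_estimates, dtype=float))
    b, a = np.polyfit(B, y, 1)          # slope, intercept
    return lambda B_new: np.exp(a + b * np.asarray(B_new, dtype=float))

# Illustration with the level-2 IS results of Table 1 (buffer sizes 128, 512, 768);
# the fitted line can then be evaluated at other buffer sizes.
predict = fit_log_linear_clr([128, 512, 768], [3.0e-5, 2.5e-9, 3.7e-11])
```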
8. CONCLUSION
We have discussed how to implement IS for estimating the CLR in an ATM switch
model, by a direct adaptation of the approach proposed in Chang et al. [1994]. Our
extensive empirical experiments with large networks indicate that the method is
viable, at least for the type of model and examples that we have considered. IS
improves the CLR estimator and also improves, by a larger factor, the corresponding
variance estimator (although this variance estimator often remains noisy even with
IS, in which case the natural way of improving the reliability of the confidence
intervals would be to increase the sample size). We have also explored heuristic
refinements of the IS scheme. Some gave improvements with respect to the basic
scheme, others did not. Those that improved the efficiency of the CLR estimator,
namely taking θ slightly less than θ* and imposing a lower bound on the A-cycle
length, also (empirically) reduced the noise of the variance estimator.
This IS methodology can be adapted to other variants of the model, in addition
to those considered in Section 5.5, e.g., by replacing the geometric sojourn time
distributions in the ON and OFF states by other distributions with an exponential
tail. Statistical studies of traffic traces suggest that heavy-tailed distributions,
with infinite variance, are more appropriate, although some people still argue that
exponential-tailed distributions do the job well enough in many cases. The IS
scheme considered in this paper does not appear easily adaptable to heavy-tailed
distributions. Finding an effective way of applying IS for such distributions remains
a challenging problem.
--R
A unified approach to fast teller queues and ATM.
Accelerated simulation of a leaky bucket controller.
A Guide to Simulation (Second Edition).
Estimation du taux de perte de réseaux ATM via la simulation et le changement de mesure.
Sample path large deviations and intree networks.
Fast simulation of packet loss rates in a shared buffer communications switch.
Accelerated simulation of ATM switching fabrics.
Decoupling bandwidths for networks: A decomposition approach to resource management.
Fast simulation of networks of queues with effective and decoupling bandwidths.
Monte Carlo: Concepts, Algorithms, and Applications.
Quality of Service in ATM Networks.
Multilevel splitting for estimating rare event probabilities.
Importance sampling for stochastic simulations.
Management Science
Fast simulation of rare events in queueing and reliability models.
ACM Transactions on Modeling and Computer Simulation
What are the implications of long-range dependence for VBR video traffic engineering?
Modeling video traffic.
Simulation Modeling and Analysis (Third Edition).
Two approaches for estimating the gradient in a functional form.
Importance sampling for large ATM-type queueing networks
On the self-similar nature of Ethernet traffic (extended version).
Application of the M/Pareto process to modeling broadband traffic streams.
Fast simulation of steady-state availability in non-Markovian highly dependable systems.
A quick simulation method for excessive backlogs in networks of queues.
Discrete Event Systems: Sensitivity Analysis and Stochastic Optimization by the Score Function Method.
Large deviations and e
On the optimality and stability of exponential twisting in Monte Carlo estimation.
On Monte Carlo estimation of large deviation probabilities.
Importance sampling for the simulation of highly reliable markovian systems.
Large Deviations for Performance Analysis.
Variance reduction in simulations of loss models.
--TR
A guide to simulation (2nd ed.)
Importance sampling for stochastic simulations
On the self-similar nature of Ethernet traffic (extended version)
Importance sampling for the simulation of highly reliable Markovian systems
Efficiency improvement and variance reduction
Effective bandwidth and fast simulation of ATM intree networks
Fast simulation of rare events in queueing and reliability models
through high-variability
Fast simulation of packet loss rates in a shared buffer communications switch
What are the implications of long-range dependence for VBR-video traffic engineering?
Importance sampling for large ATM-type queueing networks
Two approaches for estimating the gradient in functional form
Fast simulation of networks of queues with effective and decoupling bandwidths
Simulation Modeling and Analysis
Quality of Service in ATM Networks
Functional Estimation with Respect to a Threshold Parameter via Dynamic Split-and-Merge
Multilevel Splitting for Estimating Rare Event Probabilities
Variance Reduction in Simulations of Loss Models
Application of the M/Pareto Process to Modeling Broadband Traffic Streams
--CTR
P. T. de Boer , D. P. Kroese , R. Y. Rubinstein, Rare event simulation and combinatorial optimization using cross entropy: estimating buffer overflows in three stages using cross-entropy, Proceedings of the 34th conference on Winter simulation: exploring new frontiers, December 08-11, 2002, San Diego, California
Rick Siow Mong Goh , Ian Li-Jin Thng, Twol-amalgamated priority queues, Journal of Experimental Algorithmics (JEA), v.9 n.es, 2004
Victor F. Nicola , Tatiana S. Zaburnenko, Efficient heuristics for the simulation of population overflow in series and parallel queues, Proceedings of the 1st international conference on Performance evaluation methodolgies and tools, October 11-13, 2006, Pisa, Italy
Sandeep Juneja , Victor Nicola, Efficient simulation of buffer overflow probabilities in jackson networks with feedback, ACM Transactions on Modeling and Computer Simulation (TOMACS), v.15 n.4, p.281-315, October 2005
Victor F. Nicola, Tatiana S. Zaburnenko, Efficient importance sampling heuristics for the simulation of population overflow in Jackson networks, ACM Transactions on Modeling and Computer Simulation (TOMACS), v.17 n.2, p.10-es, April 2007
keywords: variance reduction; importance sampling; ATM; rare events
379727
Creating trading networks of digital archives.
Digital archives can best survive failures if they have made several copies of their collections at remote sites. In this paper, we discuss how autonomous sites can cooperate to provide preservation by trading data. We examine the decisions that an archive must make when forming trading networks, such as the amount of storage space to provide and the best number of partner sites. We also deal with the fact that some sites may be more reliable than others. Experimental results from a data trading simulator illustrate which policies are most reliable. Our techniques focus on preserving the ``bits'' of digital collections; other services that focus on other archiving concerns (such as preserving meaningful metadata) can be built on top of the system we describe here.
1. INTRODUCTION
Digital materials are vulnerable to a number of different
kinds of failures, including decay of the digital media, loss
due to hackers and viruses, accidental deletions, natural dis-
asters, and bankruptcy of the institution holding the collec-
tion. Archives can protect digital materials by making several
copies, and then recover from losses using the surviving
copies. Copies of materials should be made at different, autonomous
archives to protect data from organization-wide
failures such as bankruptcy. Moreover, cooperating archives
can spread the cost of preservation over several institutions,
while ensuring that all archives achieve high reliability. Several
projects [4, 12, 24, 10] have proposed making multiple
copies of data collections, and then repeatedly checking
This material is based upon work supported by the National
Science Foundation under Award 9811992.
those copies for errors, replacing corrupted materials with
pristine versions.
A key question for a digital archive participating in a replication
scheme is how to select remote sites to hold copies of
collections. The archivist must balance the desire for high
reliability with factors such as the cost of storage resources
and political alliances between institutions. To meet these
goals, we propose that archives conduct peer-to-peer (P2P)
data trading: archives replicate their collections by contacting
other sites and proposing trades. For example, if archive
A has a collection of images it wishes to preserve, it can
request that archive B store a copy of the collection. In re-
turn, archive A will agree to store digital materials owned by
archive B, such as a set of digital journals. Because archive
A may want to make several copies of its image collection,
it should form a trading network of several remote sites, all
of which will cooperate to provide preservation.
In previous work [5], we have studied the basic steps involved
in trading and the alternatives for executing these
steps. For example, in one step a local site selects a trading
partner from among all of the archive sites. This requires
the local site to choose some strategy for picking the best
partner. In another step, the local site asks the partner
to advertise the amount of free space it is willing to trade.
Then, the local site can determine if the partner will trade
enough space to store the local site's collections. We summarize
our conclusions from this previous study for these
and other issues in Section 2.2 below.
In this paper, we discuss how a digital archive can use
and extend these basic trading building blocks to provide
preservation services. Archives must take into consideration
real-world issues that impact the decisions they make
while trading. For example, an archive may have budgetary
constraints that limit the amount of storage it can provide.
Storage resources cost more than just the expense of buying
disk space. In particular, an archive must also provide
working servers, administrators to maintain those machines,
network access to the servers, and so on. Here, we study how
the amount of storage a site provides impacts its ability to
trade and the number of copies it is able to make.
Another issue that archives must confront is that they
may choose trading partners for a number of reasons beyond
simply achieving the highest reliability. For example,
the libraries of a particular state university system may be
directed to cooperate by the university's board of regents.
We call such a grouping of sites a trading cluster. The cluster
may be large enough to serve the needs of its member
sites, or sites may need to seek binary inter-cluster links
with other archives to expand their trading networks. We
examine the ideal cluster size as well as the number of inter-cluster
links that must be formed to compensate for a too-small
trading cluster.
A site also may have to deal with trading partners that are
more or less reliable than itself. For example, a very reliable
site must decide whether to trade with all archives or only
with those that also have high reliability. We examine these
issues to determine how sites can make the best decisions in
the face of varying site reliabilities.
Other researchers have examined using redundancy to protect
against failures in systems such as RAID [21], replicated
file systems [8], and so on. Our work is similar to these systems
in that we use replication, we balance resource allocation
and high reliability, and we attempt to ensure high
data availability.
Unlike these previous systems, our data trading scheme
is focused on respecting the differences between individual
digital archives, even as these archives cooperate to achieve
reliability. Thus, a primary concern of ours is site auton-
omy. Archivists should be able to decide who they trade
with, what types of collections they store and how much
storage they provide. Such local decisions are not as important
in a system such as RAID, in which a central controller
makes all of the decisions. Archives also may have differing
reliability goals, such that one archive is willing to expend
more resources and expects correspondingly higher reliability
in return. It may therefore be important to consider
different policies for high and low reliability sites, such that
both kinds of sites can protect their data. Similarly, different
archives may experience different rates of failure, and
an archive may wish to take these failure rates into account
when replicating collections. An array of similar components
(such as RAID) does not face this issue. Finally, an archivist
has unique concerns that are not addressed in traditional
systems. It is often important to establish the provenance
of collections, and this task is difficult if the collections are
moved from site to site frequently or without the archivist's
control. An archivist may also wish to keep collections con-
tiguous, so that they can be served to users as a complete
unit. Our trading mechanism is flexible enough to address
all of these concerns, from autonomy to contiguous collec-
tions, while still providing a great deal of protection from
failures.
In this paper, we examine how a digital archive can preserve
its collections by forming and participating in P2P
trading networks. In particular, we make several contributions
- We present a trading mechanism that can be used
by an archive to reliably replicate data. This mechanism
is tuned to provide the maximum reliability for
the archive's collections, and can be extended if necessary
in consideration of individual archivists' needs
and goals.
- We identify how to configure an archive for trading by
examining the amount of storage that the site should
provide and the number of copies of collections a site
should try to make.
- We examine the impact of trading with remote partners
chosen for political reasons, as opposed to trading
with all archive sites. We also discuss the optimal trading
network size, and examine when an archivist may
wish to seek out additional trading partners.
- We discuss how an archive might trade with sites that
have different site reliabilities, or rates of failure, by
adjusting its trading policies to take these reliabilities
into account. We also discuss the importance of accurately
estimating the reliabilities of other sites.
In order to evaluate each of these issues, we have used a simulator
that conducts simulated trading sessions and reports
the resulting reliability. Our concern is primarily in selecting
remote sites for storing copies of archived collections.
Once trades have been made and collections are distributed,
archivists can use other existing systems to detect and recover
from failures, enforce security, manage metadata, and
so on. Other projects have examined these issues in more
detail [4, 22, 17, 23, 19]. It is also possible to enhance our
basic techniques to deal with digital objects which change
over time, or trades with sites that provide a specialized
service (such as storage for a fee). In ongoing work, we are
extending our model to provide negotiation for access services
(such as search) in addition to storage services. We are
also extending our model to deal with trades of other com-
modities, such as money or processing power, in addition to
storage space.
This paper is organized as follows. In Section 2 we discuss
the basic trading mechanism, as well as extensions to the
basic mechanism for trading networks of digital archives.
Section 3 presents evaluations of alternative trading policies
using simulation results. Section 4 discusses related work,
and in Section 5 we present our conclusions.
2. DATA TRADING
Data trading is a mechanism for replicating data to protect
it from failures. In this section, we summarize the techniques
used in data trading. We also discuss the extensions
and enhancements to data trading that are needed to use the
mechanism for digital archives. A full discussion of the basic
data trading algorithm, as well as analysis of the tradeoffs
involved in tuning the algorithm, is presented elsewhere [5].
2.1 Archival services
Our model of a digital archiving service contains the following
concepts:
Archive site: an autonomous provider of an archival storage
service. A site will cooperate with other autonomous
sites that are under the control of different organizations to
achieve data replication. The focus of this paper is the decisions
made by a particular archive site; we refer to this site
as the local site.
Digital collection: a set of related digital material that
is managed by an archive site. Examples include issues of
a digital journal, geographic information service data, or
a collection of technical reports. Although collections may
consist of components such as individual documents, we consider
the collection to be a single unit for the purposes of
replication. Here, we assume that all collections are equally
important and require the same effort to preserve.
Archival storage: storage systems used to store digital
collections. Some of the storage, called the public storage,
is dedicated to remote sites that have concluded trades with
the local site, and is used to store collections owned by the
remote sites. An archive site must decide how much public
Figure 1: Reliability example.
storage P total to provide. Here, we assume that a site uses a
storage factor F , such that if the site has N bytes of archived
data, it purchases F × N total storage space. The site uses
N bytes of this space to store its own collections, and has
the remaining (F − 1) × N bytes of space to trade away.
Archiving clients: users that deposit collections into the
archive, and retrieve archived data. When a client deposits
a collection at an archive site, that site is said to "own" the
collection, and takes primary responsibility for protecting it.
Trading network : a local site must connect to remote sites
and propose trades. In the general case, any site can connect
to any other site. In a digital archiving domain, it may be
more desirable to select a set of "trusted" sites to trade with.
This issue is discussed in more detail below.
Automation: The archive should operate as automatically
as possible, while allowing librarians or archivists to oversee
its operation and adjust its configuration. Thus, an archiving
site may automatically replicate copies of a digital col-
lection, but would do so according to the desired goals and
constraints important to the administrators.
These concepts are used to study the properties of a trading
network primarily concerned with protecting the data
itself from loss. While we do not consider other archival
concerns (such as preserving access or establishing chain of
custody) for simplicity, our model can be extended to deal
with such concerns.
Each archive site can fail (lose data), and we model the
possibility of failures as a site reliability: a number which
indicates the probability that the site does not experience
data loss. Although site reliabilities may change over time,
here we assume for simplicity that reliabilities are constant.
Given site reliabilities and a placement of copies of collections
at these sites, we can calculate two values:
- global data reliability: the probability that no collection
owned by any site is lost.
- local data reliability: the probability that no collection
owned by a particular site is lost.
Thus, global data reliability measures the success of the
trading mechanism as a whole, while local data reliability
measures the success of decisions made by an individual site
participating in data trading. For example, consider Figure
1. This figure shows three sites, each of which owns one
collection (shown boxed), while storing copies of collections
owned by other sites. Let us assume that the site reliability
of each site is 0.9, that is, each site has a ten percent chance
of experiencing data loss. In one possible scenario, sites B
and C fail but site A does not, while in another scenario,
none of the sites fail. We can calculate global data reliability
by examining all possible scenarios of sites failing or
surviving; in this case, there are eight possibilities. For each
scenario, we assign the score "0" if at least one collection is
lost, or "1" if no data is lost. Thus, in the scenario where
sites B and C fail, collection 2 is lost and we assign the score
"0". We then weight each score by the probability of the sce-
nario; the situation where B and C fail but A does not will
occur with probability 0.1 × 0.1 × 0.9 = 0.009. Finally, we
sum the weighted scores to find the expected probability of
data loss. The distribution of collections shown in Figure 1
has a global reliability of 0.981, indicating that there is less
than a two percent chance of data loss.
We can calculate local data reliability in much the same
way, except that we only consider the collections owned by
a particular site when assigning scores. For example, if we
wish to calculate the local reliability of site A, we examine
all possible scenarios, and assign a score of "0" if collection 1
is lost, or "1" if collection 1 is preserved. In this way, we can
calculate the local data reliability of site A and site B to be
0.99, while the local data reliability of site C is 0.999. Site
C enjoys higher data reliability because it has made more
copies of its collection.
We can interpret local and global data reliabilities as the
probability that data will not be lost within a particular
interval, say, one year. Then, we can calculate the expected
number of years before data is lost, known as the mean time
to failure (MTTF). An increase in reliability from 0.99 to
0.999 actually represents an increase in the MTTF from 100
years to 1000 years. Because MTTF better illustrates the
results of a particular policy by giving an indication of how
long data will be protected, we report our simulation results
in Section 3 in terms of MTTF.
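The scenario-enumeration calculation described above is easy to express in code. The sketch below assumes a particular placement of the three collections that is consistent with the reliabilities quoted for Figure 1 (the figure itself is not reproduced here), so the placement should be read as illustrative:

```python
from itertools import product

def data_reliability(site_reliability, stored_at, collections_of_interest):
    """Probability that none of the collections of interest are lost,
    by enumerating every combination of site failures (Figure 1 style).

    site_reliability        : {site: probability the site does NOT fail}
    stored_at               : {collection: set of sites holding a copy}
    collections_of_interest : all collections for global reliability, or
                              just those owned by one site for its local
                              reliability.
    """
    sites = list(site_reliability)
    reliability = 0.0
    for outcome in product([True, False], repeat=len(sites)):  # True = survives
        survivors = {s for s, ok in zip(sites, outcome) if ok}
        prob = 1.0
        for s, ok in zip(sites, outcome):
            prob *= site_reliability[s] if ok else (1.0 - site_reliability[s])
        if all(stored_at[c] & survivors for c in collections_of_interest):
            reliability += prob   # no collection of interest lost in this scenario
    return reliability

# Assumed placement consistent with the text: collection 1 (owned by A) at A and B,
# collection 2 (owned by B) at B and C, collection 3 (owned by C) at all three sites.
p = {s: 0.9 for s in "ABC"}
stored = {1: {"A", "B"}, 2: {"B", "C"}, 3: {"A", "B", "C"}}
print(data_reliability(p, stored, [1, 2, 3]))   # global reliability: 0.981
print(data_reliability(p, stored, [3]))         # local reliability of site C: 0.999
# If the reliability is interpreted per year, MTTF ~ 1 / (1 - reliability) years.
```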
In this paper we are primarily concerned about evaluating
the choices made by individual sites, and preserving the
autonomy of those sites. Therefore, we will examine data
trading from the perspective of local data MTTF. In previous
work [5] we have assumed that all sites have the same
probability of failure, but here we consider the possibility
that different sites have different reliabilities.
2.1.1 The trading network
There are two reasons why a local site may choose a particular
remote site as a P2P trading partner. First, the remote
site may have a reputation for high reliability. Second,
there may be political or social factors that bring several autonomous
archives together. An archive must make trades
that take both reliability and politics into account.
We refer to the set of potential trading partners for a local
site as that site's trading network. In our previous work we
have assumed that a site's trading network includes all other
archive sites. However, a local site may participate in one
or more clusters, or sites that have agreed to form partnerships
for political, social or economic reasons. For example,
all of the University of California libraries may join together
in one cluster. A local site may also have individual inter-cluster
links for political or reliability reasons. If an archive
at say MIT is well known for high reliability, one of the University
of California libraries may form a partnership with
MIT in addition to the California cluster. Once a site has
found trading partners, it can continue to consider politics
and reliability when proposing trades. In Section 2.2 we
discuss how a site can use site reliabilities to select sites for
individual trades.
There are two challenges that face a site when it is constructing
a trading network. The first challenge is deciding
how many sites should be in the network, and what inter-cluster
partnerships to form. The second challenge in constructing
a trading network is estimating the site reliabilities
of other sites. One possible method is to examine the past
behavior of the site. Sites with many failures are likely to
have more failures in the future, and are assigned a lower site
reliability than sites that rarely fail. Another method is to
examine components of the archive's storage mechanism [6].
Sites that use disks that are known to be reliable or security
measures that have successfully protected against hackers
should be given a higher site reliability. A third possibility
is to use the reputation of the site or institution hosting the
site. Thus, even the perceived reliability of a site can be
influenced by political or social factors.
We evaluate the ideal size for trading clusters, and give
guidelines for how many inter-cluster partnerships should
be formed in Section 3. We also examine the impact of site
reliability estimates in that section.
2.2 Conducting trades
When a client deposits a collection at an archive site, the
site should automatically replicate this collection to other
sites in the trading network. This is done by contacting
these sites and proposing a trade. For example, if site A is
given a collection of digital journals, site A will then contact
other sites and propose to give away some of its local archival
storage to a site willing to store a copy of the journals.
We have developed a series of steps for conducting trades
in previous work [5]. These steps are summarized in the
DEED TRADING algorithm shown in Figure 2. This is a
distributed algorithm, run by each site individually without
requiring central coordination. A deed represents the right
of a local site to use space at a remote site. Deeds can be
used to store collections, kept for future use, transferred to
other sites that need them, or split into smaller deeds. When
a local site wants to replicate a collection, it requests from a
remote site a deed large enough to store the collection. If the
remote site accepts, the local site compensates the remote
site with a deed to the local site's space. In the simplest
case, the deed that the local site gives to the remote site is
equal to the deed that the remote site gives to the local site.
There are other possibilities; see below.
Several details of the DEED TRADING algorithm can be
tuned to provide the highest reliability:
!S?: The trading strategy !S? dictates the order in
which other sites in the trading network will be contacted
and offered trades. The best strategy is for a site to trade
again with the same archives it has traded with before. This
is called the clustering strategy, because a site tries to cluster
its collections in the fewest number of remote sites. If
there are several sites that have been traded with before, the
local site selects the remote site holding the largest number
of the local site's collections. If there is still a tie, or if there
are no previous partners, the local site chooses a remote site
randomly. For the special case where sites have small storage
factors (e.g., 2), the best fit strategy works best. Under
best fit, the remote site with the smallest advertised free
space is chosen. In [5] we examine several other strategies,
such as worst fit, where the site with the most advertised
space is preferred. If different sites have different relia-
bilities, as we assume in this paper, it is possible to adjust
the strategy to reflect those reliabilities; see below.
!A?: A site must decide how much of its storage space
to offer for trades. The best advertising policy !A? is the
data-proportional policy, where a site advertises some multiple
y of the total amount of data N owned by the site. If the
amount of remotely owned data stored so far is Pused , and
the amount of free public space is Pfree , then the advertised
amount is:
MIN(N × y − Pused , Pfree)
Thus, the amount of advertised space is the total amount of
"available" public space minus the amount of public space
used so far, except that a site cannot advertise more public
space than it has free. Our experiments show that the best
setting for y is F − 1, where F is the site's archival
storage factor (see Section 2.1).
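As a small illustration, the data-proportional advertisement can be computed as follows (the clamp at zero is an added guard, not part of the stated formula):

```python
def advertised_space(owned_data, public_used, public_free, y):
    """Data-proportional advertising policy: MIN(N*y - Pused, Pfree),
    clamped at zero (added guard for an already-exceeded quota)."""
    return max(0.0, min(owned_data * y - public_used, public_free))

# With the recommended y = F - 1, a site with N bytes of its own data and
# storage factor F advertises at most its unused public space.
```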
!U?: If a local site has a deed for a remote site, it can
use that deed to make a copy of any collections that fit in
the deed but do not already exist at the remote site. A
site must decide when to use a deed that it holds to make
more copies of collections. The aggressive deed use policy,
which provides the highest reliability, dictates that a site will
use its deeds to replicate as many collections as possible, in
order of rareness. Thus, a site holding a deed will use it
to replicate its "rarest" collection (the collection with the
fewest copies) first. If some of the deed is left over, the site
will make a copy of the next rarest collection, and so on.
These collections are replicated even if they have already
met the replication goal !G?.
!R?: If a site is unable to make !G? copies of a collection
CL , it can try to trade again in the future to replicate
the collection. The active retries policy says that a site will
not wait to be contacted by other sites to make copies of CL ,
but instead will run DEED TRADING again after some interval
to replicate CL . A site must choose an appropriate
event to trigger the retry; for example, the site may wait
one week before trying again.
DEED TRADING also uses the following policies, which
are investigated in this paper:
!G?: A site tries to make !G? copies of a collection.
Once this target is met, the site does not have to make any
more trades. Appropriate values of !G? are discussed in
Section 3.
!D?: The deed that L gives to R may or may not be
the same size as the deed that R gives to L. In our previous
work, we have assumed that the two deeds were of equal
size. Here, we investigate the possibility that the deed size
is influenced by the site's reliability. This issue is discussed
below.
2.2.1 Adapting trading policies for differing site reli-
abilities
We can extend the basic trading framework presented
in [5] (summarized above) to allow a local site to use the estimated
reliabilities of its partners in order to make good trading
decisions. There are two aspects of DEED TRADING
that could be modified based on site reliabilities: the trading
strategy !S?, and the deed size policy !D?.
One way to change the trading strategy !S? is to look
only at site reliabilities when making trades. In the highest
reliability strategy, a site seeks to trade with partners that
have the best reliability. The idea is to make trades that
will best protect the local site's collections. In contrast, the
lowest reliability strategy seeks out sites with the worst reli-
ability. Although each trade may be less beneficial, the low
reliability sites may be more desperate to trade than high
reliability sites, meaning that the local site can make more
copies of its collections. Finally, the closest reliability strategy
seeks to find the sites with reliability closest to the local
site's; the local site must then estimate its own reliability.
I. The local site L repeats the following until it has made !G? copies of collection CL , or until all sites in the trading
network have been contacted and offered trades:
1. Select a proposed deed size DL (initially the size of the collection CL to be replicated).
2. Select a remote site R in the trading network according to the trading strategy !S?.
3. If L has a deed for R then:
(a) If the deed is large enough to store CL , then use the deed to make a copy of CL at R. Return to step I.
(b) Otherwise, set DL = size(CL) − size(existing deed).
4. Contact R and ask it to advertise its free space !A?R .
5. If !A?R ! DL then:
(a) Contact sites holding deeds for R. Give those sites deeds for local space (at L) in return for the deeds for R.
Add these deeds to the existing deed L holds for R. Adjust DL downward by the total amount of the newly
acquired deeds.
(b) If L cannot obtain enough deeds this way, then it cannot trade with R, and returns to step I.
6. R selects a deed size DR according to the deed size policy !D?.
7. If L's advertised free space !A?L < DR , the trade cannot be completed. Return to step I.
8. The trade is executed, with L acquiring a deed of size DL for R's space, and R acquiring a deed of size DR for
L's space.
9. L uses its deeds for R's space to store a copy of CL .
II. If the goal of !G? copies for CL is not met, L can try this process again at some point in the future, according to the
retry policy !R?.
III. At any time a site may use a deed that it possesses to replicate its collections, according to its deed use policy !U?.
Figure
2: The DEED TRADING algorithm.
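To make the control flow of Figure 2 concrete, here is a simplified sketch of the trading loop; the data layout is hypothetical, and several details of the figure (the deed transfers from third-party sites in step 5(a), the deed-size policy !D?, the retry policy !R?, and the aggressive deed use of step III) are omitted, with equal-size deeds assumed.

```python
def deed_trading(local, remote_sites, collection_size, goal):
    """Simplified sketch of the DEED_TRADING loop of Figure 2.

    local        : dict with 'advertised', 'copies', and 'deeds' ({site: size})
    remote_sites : list of dicts with 'name', 'advertised', 'stores' (a set),
                   already ordered by the trading strategy !S?
    Returns the names of the sites where a new copy was placed.
    """
    placed = []
    for remote in remote_sites:
        if local["copies"] + len(placed) >= goal:
            break                                     # replication goal !G? met
        held = local["deeds"].get(remote["name"], 0)
        needed = max(0, collection_size - held)       # reuse an existing deed first
        if needed > remote["advertised"]:
            continue                                  # step 5: partner cannot trade
        if needed > 0 and local["advertised"] < needed:
            continue                                  # step 7: cannot pay in return
        remote["advertised"] -= needed                # step 8: exchange equal deeds
        local["advertised"] -= needed
        local["deeds"][remote["name"]] = 0            # deed consumed in step 9
        remote["stores"].add("collection")            # step 9: place the copy
        placed.append(remote["name"])
    return placed
```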
Another way to change the trading strategy is to use site
reliabilities in combination with other factors. In the clustering
strategy, the local site chooses the remote site holding
the most copies of collections owned by the local site. In the
weighted clustering strategy, the local site weights the number
of collections by the reliability of the site. For example,
site A (reliability 0.5) might hold three collections while site
B (reliability 0.9) might hold two collections. We consider
the partnership value of site A to be 0.5 × 3 = 1.5, while
the partnership value of site B is 0.9 × 2 = 1.8, so site
B is chosen. Other strategies could be weighted in a similar
manner. In the case of best fit and worst fit, we can multiply
the advertised space by the site's reliability, and use the
weighted value in the best fit or worst fit calculations. In
this way, we are calculating the "expected" amount of space
at the remote site based on the probability that the space
will actually be available.
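A minimal sketch of these weightings, with argument names of our own choosing, makes the computation explicit:

```python
def weighted_partnership_value(reliability, collections_held):
    """Weighted clustering: value of a partner = reliability x our collections it holds."""
    return reliability * collections_held

def expected_space(reliability, advertised_space):
    """Weighted best/worst fit use this 'expected' space instead of the raw advertisement."""
    return reliability * advertised_space

# Example from the text: site A (0.5, 3 collections) scores 1.5, site B (0.9, 2) scores 1.8,
# so the weighted clustering strategy picks B.
best = max([("A", 0.5, 3), ("B", 0.9, 2)], key=lambda s: weighted_partnership_value(s[1], s[2]))
print(best[0])   # B
```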
The deed size policy !D? can use reliabilities to encourage
a "fair" trade between sites. Under the (previously
studied) same size policy, the local site and remote site exchange
deeds that are the same size. However, if the reliabilities
of the two sites differ, then a deed for the more
reliable site may be considered "more valuable," and the
less reliable site will have to give a larger deed to com-
pensate. We can denote the site reliability of site i as P i ,
and the size of the deed that the site gives in trade as D i .
Then, we can calculate the reliability-weighted value of the deed as P i × D i . The weighted size policy dictates that the reliability-weighted values of the exchanged deeds must be equal, e.g. if the local site L trades with the remote site R then PL × DL = PR × DR . The local site chooses a deed size DL based on the collection it wants to replicate, so the size of the deed that the remote site must give in return is DR = (PL × DL)/PR .
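A two-line helper makes this exchange rate explicit; assuming the reliabilities are known, the remote deed size follows directly from PL × DL = PR × DR.

```python
def weighted_deed_size(P_L, D_L, P_R):
    """Deed size the remote site R must give so the reliability-weighted values match."""
    return (P_L * D_L) / P_R

# A deed from a 0.95-reliable site is worth more: a 0.6-reliable partner must offer more space.
print(weighted_deed_size(0.95, 100.0, 0.6))   # ~158.3 units
```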
A local site must be able to estimate the site reliability
of its trading partners (and possibly itself) in order to make
decisions which take reliability into account. We can denote
site i's estimate of site j's reliability as P i;j . In an ideal situation, each site could calculate reliabilities exactly, such that P i;j = P j . However, it is difficult to predict which
sites will fail, and thus reliability estimates may be inaccu-
rate. A local site can use information about a remote site's
reputation, any previous failures, and the reliability of the
storage components to estimate the reliability. Thus, it is
likely that sites which are in fact highly reliable are known
to be reliable, while more failure prone sites are known to
to be less reliable. In other words, P i;j ≈ P j .
In Section 3.3 we examine the reliability resulting from
trading strategies that account for reliability and the impact
of the same size and weighted size policies. We also examine
the effects of inaccurately estimating site reliabilities.
3. RESULTS
3.1 The data trading simulator
In order to evaluate the decisions that a local site must
make when trading, we have developed a simulation system.
This system conducts a series of simulated trades, and the
resulting local data reliabilities are then calculated. Table 1
lists the key variables in the simulation and the initial base
values we used; these variables are described below.
The simulator generates a trading scenario, which contains
a set of sites, each of which has a quantity of archival
storage space as well as a number of collections "owned" by
Variable               Description                  Base values
S                      Number of sites              2 to 15
F                      Site storage factor          2 to 7
PMIN , PMAX            Site reliability P i         0.5 (or 0.8) to 0.99
Pest                   P i estimate interval        0 to 0.4
CperSMIN , CperSMAX    Collections per site
CsizeMIN , CsizeMAX    Collection size
Ctot                   Total data at a site         CtotMIN to CtotMAX
!G?                    Replication goal             2-15 copies
!S?                    Trading strategy             9 strategies tried
!D?                    Deed size policy             same size and weighted size
Table 1: Simulation variables.
the site. The number of sites S is specified as an input to the
simulation. The number of collections assigned to a site is
randomly chosen between CperSMIN and CperSMAX , and
the collections assigned to a site all have different, randomly
chosen sizes between CsizeMIN and CsizeMAX . The sum
of the sizes of all of the collections assigned to a site is the
total data size Ctot of that site, and ranges from CtotMIN
to CtotMAX . The values we chose for these variables represent
a highly diverse trading network with small and large
collections and sites with small or large amounts of data.
Thus, it is not the absolute values but instead the range of
values that are important.
The archival storage space assigned to the site is the storage
factor F of the site multiplied by the Ctot at the site.
In our experiments, the values of F at different sites are
highly correlated (even though the total amount of space
differs from site to site). By making all sites have the same
F , we can clearly identify trends that depend on the ratio
of storage space to data. Therefore, we might test the reliability
that results from a particular policy when all sites
use F = 2. In this case, one site might have 400 GB of data and 800 GB of space, while another site might have 900 GB of data and 1800 GB of space. The scenario also contains a
random order in which collections are created and archived.
The simulation considers each collection in this order, and
the "owning" site replicates the collection. A site is considered
"born" when the first of its collections is archived. A
site does not have advance knowledge about the creation of
other sites or collections. Our results represent 200 different
scenarios for each experiment.
We model site failures by specifying a value P i , the probability that site i will not fail. This value reflects not only the
reliability of the hardware that stores data, but also other
factors such as bankruptcy, viruses, hackers, users who accidentally
delete data, and so on. In our experiments, we
consider the situation where all sites are relatively reliable
(e.g. 0.8 ≤ P i ≤ 0.99) as well as the case where some sites are quite unreliable (e.g. 0.5 ≤ P i ≤ 0.99). To consider site
reliability estimates, we assume that site i's estimate P i;j of
site j's reliability is randomly chosen in the range P j ± Pest .
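The setup just described is easy to reproduce. The sketch below generates one toy trading scenario with the same ingredients (collection sizes, storage factor, reliabilities and noisy reliability estimates); the parameter values and names are ours and not the exact base values of Table 1.

```python
import random

def make_scenario(S=15, F=4, P_range=(0.5, 0.99), P_est=0.1,
                  collections_per_site=(1, 5), collection_size=(10, 200)):
    sites = []
    for i in range(S):
        sizes = [random.uniform(*collection_size)
                 for _ in range(random.randint(*collections_per_site))]
        Ctot = sum(sizes)
        sites.append({"id": i,
                      "collections": sizes,
                      "Ctot": Ctot,
                      "space": F * Ctot,                    # storage factor times owned data
                      "P": random.uniform(*P_range)})       # probability the site does not fail
    # Each site holds a noisy estimate of every other site's reliability (P_j +/- Pest).
    for s in sites:
        s["P_est"] = {t["id"]: min(0.999, max(0.0, t["P"] + random.uniform(-P_est, P_est)))
                      for t in sites if t is not s}
    return sites

scenario = make_scenario()
print(len(scenario), scenario[0]["space"] / scenario[0]["Ctot"])   # 15 sites, factor 4
```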
Figure 3: The trading goal and storage capacity.
3.2 Local configuration issues
An archive site should have enough space to store the
collections deposited by local clients. In order to participate
in data trading, a site also needs extra public storage space
that it trades away. We call the ratio of total space to locally
owned collections the storage factor F . In this section we
examine the best value of F , which indicates the appropriate
amount of extra storage a site must provide.
A related issue is the number of copies of collections that
a site will attempt to make. If more copies are made, higher
reliability results. However, remote sites must have enough
storage to hold all of the copies, and the local site must have
enough public storage space to trade away to make these
copies. In other words, the goal !G? number of copies is
related to the storage factor F .
To examine the relationship between !G? and F , we
tested a situation where 15 archive sites replicate their col-
lections; each site had a reliability of 0.9. We varied F in
the range 2 - F - 6 and tested goals from 2 to 15 copies.
The results are shown in Figure 3. Note that the vertical axis in this figure has a logarithmic scale, and that there are separate data series for each value of the storage factor F. As expected, providing more storage increases the local reliability. The best reliability (11,000 years MTTF) is obtained when F = 6 and sites try to make five copies. (We are mainly concerned with
finding the policy that has the highest reliability, regardless
of the actual magnitude of the MTTF value.) Trying to
make more copies results in decreased reliability because
there is not enough space to make more than five copies of
every site's collections. If one site tries to make too many
copies, this site uses up much of the available space in the
trading network, resulting in decreased reliability for other
sites.
Sites may wish to purchase less space than six times the amount of data for economic reasons. Our results show that with F = 4, sites can achieve 2,000 years MTTF, and with an even smaller storage factor they can still achieve 360 years MTTF if the goal is three copies. Therefore, while buying a lot
of space can provide very high reliability, archives can still
protect their data for hundreds of years with a more modest
investment.
Figure 4: Trading strategies.
3.3 Trading policies that consider reliability
Archive sites can use reliability information about other
sites to make trading decisions (Section 2.2). First, we examined
trading strategies by running simulations where each
site had different reliabilities; site reliabilities were randomly
chosen in the range 0.5 ≤ P i ≤ 0.99. In this experiment, there were 15 sites, each with the same storage factor and a target !G? of three copies. We also assumed (for the moment) that each site was able to predict site reliabilities accurately, so that P i;j = P j . The results are shown in Figure
4. (For clarity, not all strategies are shown; the omitted
strategies are bounded by those in the figure.) Recall that
the clustering strategy is to trade again with previous trading
partners, the closest reliable strategy is to trade with
sites of reliability close to that of the local site, and the
least reliable strategy is to prefer the least reliable site. The results indicate that the clustering strategy is best for sites with relatively low reliability, but that sites with P i ≥ 0.8 are better off using the closest reliability strategy. For example, a high reliability site achieves a local data MTTF of 540 years using closest reliability, versus 110 years MTTF resulting
from clustering. These results assume that all sites are
using the same strategy. We ran another experiment where the high reliability sites (P i ≥ 0.8) used one strategy, but
the lower reliability sites used another. These results (not
shown) confirm that it is always best for the high reliability
sites to use the closest reliable strategy, and for the low reliability
sites to use clustering. We ran similar experiments with 0.8 ≤ P i ≤ 0.99 and reached the same conclusions, although the range of high reliability sites that should use closest reliability was P i ≥ 0.9.
High reliability sites clearly benefit by trading among them-
selves, so that every trade they initiate places a copy of a
collection at a very reliable site. If low reliability sites were
to try to trade only among themselves, they would lose reliability
by excluding the benefits of trading with high reliability
sites. If low reliability sites were to try to trade
preferentially with the high reliability sites (as in the highest
reliability strategy), they would quickly find the high
reliability sites overloaded. Therefore, the best strategy is
to make as many trades as possible in a way that is neutral
to the remote sites' reliability, and this is what the clustering
strategy does. The high reliability sites will not seek out low
reliability sites to make trades, but will accept trade offers
made by those sites.
Figure 5: The deed size policy.
In order to use strategies that depend on site reliabilities,
a site must be able to estimate the reliabilities of itself and
its trading partners. We examined the importance of accuracy
in these estimates by allowing the probability estimate interval Pest to vary. The site reliability P i of each site is selected at random from the range 0.5 ≤ P i ≤ 0.99, and sites with P i ≥ 0.8 used closest reliability while other sites used
clustering. Each local site i's estimate of the remote site
j's reliability was randomly chosen in the range P j ± Pest .
The results (not shown) indicate that the best reliability results
in the ideal case: when the estimates are completely
accurate. As long as sites are able to make estimates that
are within seven percent of the true value, their local data
reliability is quite close to the ideal case. However, as the error
increases beyond seven percent, the local data reliability drops. For example, when estimates are inaccurate by a large enough margin, archives using closest reliability can only achieve a local MTTF of 200 years, versus 500 in the ideal case. If sites
can estimate a site reliability close to the true value, they can
usually separate high reliability archives from low reliability
archives, and select the high reliability sites for trading. If
estimates are very inaccurate (e.g. by 25 percent or more) very high reliability sites (e.g. P i ≥ 0.94) achieve better reliability using the clustering strategy. However, moderately reliable sites (0.8 ≤ P i ≤ 0.94) still achieve better MTTF
with the closest reliability strategy.
Another policy that can take site reliabilities into account
is the deed size policy !D?. We have compared the
weighted size policy with the same size policy in an experiment with 15 sites, where 0.5 ≤ P i ≤ 0.99, the storage factor F = 4, and the target !G? = 3. The results are shown in Figure 5. (In this experiment, the high reliability sites, P i ≥ 0.8, used the closest reliability strategy, and other sites
used clustering.) The figure indicates that the weighted size
policy, which considers deeds from reliable sites to be more
valuable, is good for high reliability sites (P i ≥ 0.8).
Figure 6: The impact of estimating site reliabilities.
For example, a high reliability site achieves an MTTF of roughly 240 years using the weighted size policy, a 14 percent increase over the same size policy MTTF of 210 years. In contrast, low
reliability sites are hurt by the weighted size policy, with
as much as a 50 percent decrease in MTTF (from 25 years to 12 years) for the least reliable sites. High reliability sites are the beneficiaries of the weighted size policy because they receive more space in trades, and the most reliable sites can demand
the most space from other sites. These results indicate that
it may be better for low reliability sites to avoid paying the
high penalties of the weighted size policy by trading only
with other low reliability sites. However, the results (not
shown) of another experiment we conducted indicate that
it is still better for low reliability sites to try to trade with
high reliability archives, even when the weighted size policy
is used. If the low reliability sites ignore the high reliability
sites by using closest reliability instead of clustering, they
experience an average decrease in local data MTTF of 15
percent (from 16 years to 14 years).
Once again, we have examined the effect of estimating re-
liabilities. Figure 6 shows the impact on local data MTTF
versus the accuracy of the estimates. In this experiment, each site estimated its partners' reliabilities randomly in the range P j ± Pest such that a larger Pest resulted in a
larger average error (shown on the horizontal axis in Figure
6). These results show that high reliability sites suffer
when estimates are innacurate, while low reliability sites
benefit. This is because a low reliability site can be mistaken
for a high reliability site, and thus can get larger deeds from
its trading partners. Similarly, high reliability sites can be
mistakenly judged to have less reliability, and must accept
correspondingly smaller deeds. Nonetheless, most high reliability sites still achieve higher MTTF under the weighted size policy than under the same size policy, even when estimates are substantially wrong on average.
In summary, if some archives are more reliable than others:
• Highly reliable sites should trade among themselves. However, if site reliability estimates are off by 25 percent or more, then the clustering strategy is better.
• Less reliable sites should continue to use clustering.
• Highly reliable sites can use the weighted size policy to extract larger deeds from low reliability sites.
• Less reliable sites should try to trade using the same size policy, but should continue to trade with highly reliable sites even if the weighted size policy is used.
Figure 7: The impact of cluster size.
3.4 The trading network
In this section, we investigate the ideal trading network
size. Specifically, we examine the effects of clusters, or
groupings of sites that cooperate for political or social rea-
sons. If the cluster is not large enough to serve a site's
trading needs, the site will have to seek inter-cluster partnerships
to expand the trading network. Note that in previous
sections, we assumed a local site could potentially trade
with any remote site. Even with the clustering strategy, any
site was eligible to become a trading partner. In this section
we consider the case where clusters are pre-ordained.
In order to determine the ideal cluster size, we ran a simulation
in which 15 archive sites were divided into N clusters, N = 1, 2, ..., 7. In this experiment, each cluster is fully isolated: there are no inter-cluster links. Thus, when N = 1 all sites trade with each other, but when N = 3 there are three clusters of five sites, and sites trade only within a cluster. We examined the replication goal !G? = 3 as well as !G? = 5. The results are shown in Figure 7. When space is tight, a cluster of about 5 sites provides the best reliability (with a MTTF of 630 years). In contrast, when there is more space, seven sites is the best cluster size, with a MTTF of 26,000
years. In both cases, larger clusters are actually detrimen-
tal, decreasing the local data reliability of the member sites.
Large clusters mean that a member site must trade with
many other archives, and this can cause some sites to become
overloaded; thus their public storage becomes filled
up. When this happens, the overloaded sites are less able
to make trades, and their reliability suffers. Therefore it is
not necessary or even desirable to form very large clusters
in order to achieve reliability.
If sites participate in trading clusters that are smaller than
the ideal size, they can seek inter-cluster partnerships to
enhance reliability.
Figure 8: Inter-cluster partnerships, !G? = 3.
Figure 9: Inter-cluster partnerships, !G? = 5.
We have simulated a situation where 12
sites were divided into small clusters, and each site randomly
chose partners outside of its own cluster. Figure 8 shows
the results for !G? = 3: local data reliability
is plotted against the number of inter-cluster partnerships
per site. The results show that smaller clusters must seek
out many inter-cluster partnerships to achieve the highest
reliability. Thus, sites in clusters of three or fewer archives
must find roughly seven partners in other clusters, while
clusters with four sites should find roughly five additional
partners. Even sites in relatively large clusters (e.g. with six
sites) can benefit by seeking four inter-cluster partnerships.
Seeking too many inter-cluster partners can hurt reliability.
A local site may try to find partners outside the cluster, but
unless the partners are fully integrated into the cluster, then
the local site must field all of the partner's trading requests,
and quickly becomes overloaded. Similarly, when !G? = 5, inter-cluster partnerships are beneficial. Our results, shown in Figure 9, indicate that for clusters of less than five sites,
six or seven inter-cluster partnerships are needed to achieve
the best reliability.
In summary:
• Sites in clusters of about five archives (when space is tight) or seven archives (when there is more space) achieve the highest reliability.
• Sites in smaller clusters can seek inter-cluster partnerships to improve their reliability.
• If a cluster is too large or if a site has too many inter-cluster partners, reliability can suffer.
4. RELATED WORK
The problems inherent in archiving data are well known
in the digital library community [11]. Researchers have confronted
issues such as maintaining collection metadata [23,
17], dealing with format obsolescence [25, 19, 14], or enforcing
security policies [22]. These efforts complement attempts
to simply "preserve the bits" as exemplified by projects like
SAV [4], Intermemory [12], LOCKSS [24], or OceanStore [10].
The work we present here can be used to replicate collections
in order to best preserve the bits, and can be augmented if
necessary (e.g. with a metadata management scheme.)
Many existing data management systems use replication
to provide fault tolerance. However, these systems tend to
focus on access performance and load balancing [7, 26, 27],
whereas we are primarily concerned about reliability. Sites
using our clustering strategy attempt to emulate mirrored
disks [2]. In contrast, database systems tend to prefer a
strategy called chained declustering [15], which trades some
reliability for better load balancing after a failure [18]. Digital
archives, which are primarily concerned with preserva-
tion, prefer the more reliable mirrored disks; hence, they
use the clustering strategy. Moreover, we are concerned
with placing archived data that is not likely to change, and
therefore are not as concerned as previous researchers with
the ability to correctly update distributed replicates [1, 13].
Thus, while a distributed transaction protocol could be added
if necessary, efficient or correct updates are less important
than preserving the data.
Other systems (such as Coda [16] or Andrew [9]) use replication
in the form of caching: data is moved to the users
to improve availability. Then, if the network partitions, the
data is still readable. Our goal is to place data so that it is
most reliably stored, perhaps sacrificing short term availability
(during network partitions) for long term preservation.
Specifically, Andrew and Coda eject data from caches when
it is no longer needed. Our scheme assumes that data is
never ejected.
The problem of optimally allocating data objects given
space constraints is well known in computer science. Distributed
bin packing problems [20] and the File Allocation
Problem [3] are known to be NP-hard. Trading provides a
flexible and efficient way of achieving high reliability, without
the difficulties of finding an optimal configuration.
5. CONCLUSIONS
In this paper, we have examined how archives can use
and extend peer-to-peer data trading algorithms to serve
their data preservation needs. This provides a reliable storage
layer that can be enhanced with other services (such
as format migration or authenticity verification) to create a
complete archiving solution. In order to trade effectively, a
site must make certain policy decisions. We have provided
guidelines for selecting the amount of storage a local site
must provide. We have presented and evaluated trading
policies that exploit site reliability estimates, significantly
improving reliability. In particular, we have shown that high
reliability sites should trade amongst themselves, while low
reliability sites should try to trade their collections using the
clustering strategy. Finally, we have examined the impact
of trading clusters shaped by political and social concerns,
and how many extra trading partners a member of such a
cluster must find to achieve the highest reliability.
Acknowledgements
The authors would like to thank the anonymous reviewers
for their helpful comments.
6. REFERENCES
--R
A fault tolerant replicated storage system.
Transaction monitoring in Encompass
Multiple file allocation in a multiple computer system.
Implementing a reliable digital object archive.
Peer to peer data trading to preserve information.
Modeling archival repositories for digital libraries.
Data allocation in a dynamically reconfigurable environment.
Replication in the Harp file system.
Andrew: A distributed personal computing environment.
OceanStore: An architecture for global-scale persistent storage
Preserving digital information: Report of the Task Force on Archiving of Digital Information
Towards an archival intermemory.
The dangers of replication and a solution.
Digital Rosetta Stone: A conceptual model for maintaining long-term access to digital documents
Chained declustering: A new availability strategy for multiprocessor database machines.
disconnected operation in the coda file system.
An event-aware model for metadata interoperability
Distributed virtual disks.
Information preservation in ARIADNE.
Knapsack Problems: Algorithms and Computer Implementations.
A case for redundant arrays of inexpensive disks (RAID).
Permanent web publishing.
Ensuring the longevity of digital documents.
An adaptive data replication algorithm.
--TR
Andrew: a distributed personal computing environment
Knapsack problems: algorithms and computer implementations
Replication in the harp file system
Cluster-based file replication in large-scale distributed systems
Disconnected operation in the Coda File System
The dangers of replication and a solution
Petal
An adaptive data replication algorithm
OceanStore
Data Allocation in a Dynamically Reconfigurable Environment
Chained Declustering
A Fault Tolerant Replicated Storage System
Implementing a Reliable Digital Object Archive
Policy-Carrying, Policy-Enforcing Digital Objects
Modeling Archival Repositories for Digital Libraries
An Event-Aware Model for Metadata Interoperability
Towards an Archival Intermemory
--CTR
Bruce R. Barkstrom , Melinda Finch , Michelle Ferebee , Calvin Mackey, Adapting digital libraries to continual evolution, Proceedings of the 2nd ACM/IEEE-CS joint conference on Digital libraries, July 14-18, 2002, Portland, Oregon, USA
Mayank Bawa , Brian F. Cooper , Arturo Crespo , Neil Daswani , Prasanna Ganesan , Hector Garcia-Molina , Sepandar Kamvar , Sergio Marti , Mario Schlosser , Qi Sun , Patrick Vinograd , Beverly Yang, Peer-to-peer research at Stanford, ACM SIGMOD Record, v.32 n.3, September
Brian F. Cooper , Mayank Bawa , Neil Daswani , Sergio Marti , Hector Garcia-Molina, Authenticity and availability in PIPE networks, Future Generation Computer Systems, v.21 n.3, p.391-400, 1 March 2005
Brian F. Cooper , Hector Garcia-Molina, Peer-to-Peer Data Preservation through Storage Auctions, IEEE Transactions on Parallel and Distributed Systems, v.16 n.3, p.246-257, March 2005
Brian F. Cooper , Hector Garcia-Molina, Peer-to-peer data trading to preserve information, ACM Transactions on Information Systems (TOIS), v.20 n.2, p.133-170, April 2002 | digital archiving;replication;data trading;preservation;fault tolerance |
379790 | Parallel Implementation of a Central Decomposition Method for Solving Large-Scale Planning Problems. | We use a decomposition approach to solve three types of realistic problems: block-angular linear programs arising in energy planning, Markov decision problems arising in production planning and multicommodity network problems arising in capacity planning for survivable telecommunication networks. Decomposition is an algorithmic device that breaks down computations into several independent subproblems. It is thus ideally suited to parallel implementation. To achieve robustness and greater reliability in the performance of the decomposition algorithm, we use the Analytic Center Cutting Plane Method (ACCPM) to handle the master program. We run the algorithm on two different parallel computing platforms: a network of PC's running under Linux and a genuine parallel machine, the IBM SP2. The approach is well adapted for this coarse grain parallelism and the results display good speed-up's for the classes of problems we have treated. | Introduction
Despite all recent progresses in optimization algorithms and in hardware, model builders
keep on generating larger and larger optimization models that challenge the best existing
technology. This is particularly true in the area of planning problems where the quest for
more realism in the description of the systems dynamics or the uncertainty, naturally yields
huge models. One way to increase the computation power is to resort to parallel compu-
tations. However, there are serious obstacles to parallel implementations of optimization
algorithms. First, genuine parallel machines are still very expensive; most users don't have
access to such high technology equipment. Secondly, adapting optimization softwares to
parallel computations is often quite complicated; again, this is a real issue for most practitioners
In this paper we want to demonstrate that for some classes of problems, there exist solutions
that do not require sophisticate hardware, nor major adaptations of existing software. Yet,
they achieve interesting speed-up's that allow to solve problems that would be intractable
on standard sequential machines with regular optimization software. We used three key
ingredients. The first idea is to use decomposition to break the initial problem into many
independent smaller size problems that can be simultaneously solved on different processors.
The second idea is to use clusters of dedicated PC's to get a loose parallel environment.
Finally, to overcome possible instability in the decomposition process, we base the decomposition
scheme on the analytic center cutting plane method. Our second goal in this paper is
to demonstrate that the parallel implementation of the central decomposition scheme scales
well.
To illustrate the case, we consider three classes of realistic problems. Some problems in those
classes are so large that they do not seem solvable by a direct sequential approach, even on
the most powerful workstations. The first class of problems pertains to energy planning: the
problems are formulated as block-angular linear programs which are solved via Lagrangian
relaxation. The other class deals with the planning of excess capacity to achieve survivability
in telecommunication networks. The capacity problems are formulated as structured linear
programs that are solved by Benders decomposition scheme. Finally, we considered few
more block-angular linear programming problems which arise in the area of Markov decision
problems. Even though the problem sizes are not excessive, the problems are numerically
challenging and thus make up interesting test problems.
To achieve efficiency with parallel computations, one needs to organize computations so as to
have all processors simultaneously busy. In a decomposition approach, computations alternate
between the master problem and the subproblems. Thus, unless one could implement
parallel computations for the master problem itself, all processors but one are idle when
the code deals with the master program. Consequently, a necessary condition for efficient
implementation is to work on problems where the master program is comparatively small
with respect to the subproblems. The applications we present in this paper are of that type.
The idea to use parallel computation via decomposition is quite natural and is certainly not
new [7, 11]. However, there are not so many reports of successful implementations, probably
because of the bad reputation of standard decomposition schemes. Dantzig-Wolfe decomposition
[9], or its dual counterpart Benders decomposition [6] are reputed to be unstable. Both
schemes can be viewed as a special implementation of the classical Kelley-Cheney-Goldstein
[26, 8] cutting plane algorithm. Though Kelley's cutting plane method often performs well, it
is also known to be very slow and even to fail on some instances. Nemirovskii and Yudin [35]
devised an illuminating example based on a resilient oracle that confirms the possibly slow
convergence of Kelley's method. To remedy this flaw, various schemes have been proposed in
the literature: some are based on a regularized scheme [38, 27, 29]; others use central query
points such as Levin's center of gravity, the center of the maximum inscribed ellipsoid, the
volumetric center and, lastly, the analytic center.
In this paper we use the last method. The use of the analytic center was suggested by
Huard [23] in his celebrated linearized method of centers. Many years later the idea surfaced
again: Renegar's [36] polynomial path-following algorithm for LP (linear programming). Sonnevend [42]
suggested to use analytic centers in connection with general convex programming. Several
authors realized the potential of this concept in relation with the cutting plane approach to
nondifferentiable optimization [43, 17] or mixed integer programming [33]. The idea has been
refined and implemented in a sophisticate code [20], named accpm (analytic center cutting
plane method). Its current implementation is based on the projective algorithm [25, 15].
Under extensive testing, accpm has proved to be powerful and robust [3, 16]. We consider
it a promising candidate for a parallel implementation.
In this paper, we address the parallel treatment of subproblems. In our computations we
used two very different parallel machines: a cluster of 16 Pentium Pro PC's and the IBM SP2
of the University of Geneva with 8 available nodes for scientific parallel experimentations.
The first system is particularly attractive, due to its very low cost and its ease of installation.
The second system is a commercial machine optimized for parallel computations.
The paper is organized as follows. In Section 2 we briefly introduce three decomposable appli-
cations: energy planning problems and Markov decision models that are both well adapted to
telecommunications network survivability problems that suit
Benders decomposition. In Section 3, we address the issues of decomposition approaches
specialized to these two classes of problems. In Section 4, we give a unified view of cutting
plane methods to solve decomposed problem and two specialized algorithms: the classical
Kelley-Cheney-Goldstein [26, 8] cutting plane method and the analytic center cutting plane
method. In Section 5, we address the basic issues in the parallel implementation of the
decomposition scheme when applied to the three applications presented earlier. In Section
6.1, we present our numerical results. Finally, in Section 7 we give our conclusions.
2 Application problems
2.1 Energy planning and CO 2 abatement
MARKAL (MARKet ALlocation) is a standard energy systems model. It has been developed
under the aegis of the International Energy Agency and is currently implemented in more
than 40 countries [4]. MARKAL is a multi-period linear programming model. The objective
function in the LP is the discounted sum, over the horizon considered (usually between
years), of investment, operating and maintenance costs of all technologies, plus the cost
of energy imports. The minimization of this objective function, subject to the constraints
describing the energy system gives an optimal investment plan for the selected technologies.
An extension consisted to link existing MARKAL models of several countries (Belgium, Great
Britain, Germany, Switzerland) under the joint constraint of a maximum CO 2 emission in
accordance with the recommendation of the Environment and Development Conferences in
Rio (1992) and in Kyoto (1997). This coupling of models provides a basis for discussing the
impact of a tax on pollution emittant and also fair transfers between countries [4]. The individual
MARKAL models are linear programming problems with 5,000 to 10,000 constraints
and up to 20,000 variables. Linking few such models creates large block-structured linear
programs. The algebraic formulation of this class of problems takes the form
maximize   Σ_{i=1}^p ⟨c_i, x_i⟩
subject to  Σ_{i=1}^p A_i x_i ≤ a,     (1)
            B_i x_i ≤ b_i,  x_i ≥ 0,  i = 1, ..., p,
where x_i ∈ R^{n_i} and a, b_i, c_i, A_i, B_i are vectors and matrices of appropriate dimensions. The matrices A_i relate to the
common resource: the CO 2 emission. The matrices B i are the country MARKAL models.
The structure of this global problem is named primal block-angular. It is made up of several
independent blocks of constraints, and of a small set of constraints that couple all variables.
Similar problems have also been studied in [34] where MESSAGE models were used. These
linear programs were even larger as a single problem has a size of over 40,000 rows and 73,000
variables. The coupling of 8 such models yields a linear program (of about 330,000 rows and
600,000 variables) that goes beyond the limits of the standard, state-of-the-art software for
workstations.
Multiregional planning problems suit quite well the type of parallelism we want to use.
They are loosely coupled (7 to 20 linking constraints) and have small number (3 to 8) of
subproblems, each of considerable size (up to 40,000 rows). In a sequential implementation
time spent in the master problem remains only a small fraction-about 2%-of the overall
solution time. Unfortunately, subproblems may differ considerably in their sizes and in the
predicted computational effort to solve them.
2.2 A Markov decision model
The second test problem in the class of block-angular linear programs was generated from a
Markov decision problem for systems with slow and fast transition rates. Abbad and Filar [1]
showed how one can reformulate such problems as block-angular problems. Filar and Haurie
[12] applied this approach to manufacturing problems. The particular problems we solved
were communicated to us by Day, Haurie and Moresino. The problems have subproblems
and 17 coupling constraints. The problems we solved are not immense. However, they are
numerically difficult and unstable. The variables are probabilities that sometimes take very
small values, but, in the mean time, those small values have a very large impact on the global
solution. State-of-the-art codes applied in a frontal manner (i.e., without decomposition)
encounter serious difficulties even for smaller instances of these problems. The problems we
solved are not real application problems. They were included in our experiments as being
particularly challenging.
2.3 Survivability in telecommunications networks
Survivability is a critical issue in telecommunications. Individual components of a telecommunications
network are prone to failure. Those events interrupt traffic and cause severe
damage to the customers. A network is survivable if it can cope with failures. This raises
a number of issues regarding the topology of the network, its capacity and the message
rerouting policy in case of a failure. In the present case, we are interested in the planning
of capacity on a network with a fixed topology. This facet of the telecommunications survivability
problem was first presented by Minoux [31] who formulated it as a large linear
programming model under the following assumption: when a failure occurs, the entire traffic
may be changed to restore feasibility through planned excess capacity. However, telecommunications
operators have a strong preference for minimal changes in the traffic assignment. In
[30], Lisser et al. considered the same model with the proviso that reroutings are restricted
to that segment of the traffic that was interrupted by the failure. They solved the optimization
problem via a Lagrangian relaxation. Their approach does not lead to easy parallel
implementation, because the master program is very large and computationally much more
demanding than the subproblems. We shall present in the subsequent sections an alternative
decomposition approach. For a survey on related survivability problems, see [2].
We give now a formal description of the problem. A telecommunications network is an
undirected graph G = (V, E), without loops and parallel edges. In the normal operational
state, the traffic assignment meets regular demands between origin-destination pairs and is
compatible with the capacity of the network. We consider that the initial traffic assignment
in normal condition is given and fixed.
When some component of the network-edge or node-fails, the traffic through this component
is interrupted: it must be rerouted through alternative routes in the network. We
consider that at most one component can fail at a time. We denote by S the set of failure
states. Clearly, the cardinality of S cannot exceed 2 jEj. For a state s 2 S, the operational
part of the network consists of a subnetwork, denoted G(s), of G made of the working
components. When the state s originates from the failure of the node v 2 V , then G(s) is
the subgraph G n v of G; when it originates from the failure of the arc a 2 E, then G(s) is
the subgraph G n a of G. For each state, one defines, from the initial traffic assignment, a
set of new demands k 2 R s between pairs of nodes. Those conditional demands are created
by the traffic implicated in the failures; they are described, for each k 2 R s , by a vector
d^k associated with a single origin/destination pair.
Let c be the vector of unused capacity in the normal operational state.
This unused capacity can be utilized to route the conditional demands. If necessary, more
capacity y_{ij} ≥ 0 can be installed on edge [i, j] at a given positive unitary cost q_{ij}. The
objective of the capacity planning problem consists in finding the least capacity investment
cost that allows the rerouting of all conditional demands k 2 R s for each state s 2 S.
The standard flow model applies to directed graphs. In that case, the mass-balance equations
establish an algebraic relation between in-flow and out-flow. However, telecommunications
networks are essentially undirected graphs: messages can travel on a link [i; j] in either
directions. To retrieve the usual flow formulation on directed graphs, we replace each arc
[i; j] by a pair of opposite arcs (i, j) and (j, i), but we must remember that the capacity
2 This is to account for the possible case of some edge with no assigned traffic.
usage of an arc is the sum of the flows on the two opposite arcs. This is the basis for our
problem formulation.
Given an arbitrary orientation on the graph G, we define for each s 2 S the node-arc adjacency
matrix N s of G(s). Then, the matrix (N s ; \GammaN s ) is the adjacency matrix of the graph
formed of the two graphs G(s) and the one opposite to it. Let x k
ij be the flow component of
commodity k on the arc (i, j). The mass balance equations involve the flow vectors x^k and the demand vectors d^k , with the appropriate numbers of components. With these definitions, one can
state our problem as follows:
min   Σ_{[i,j]∈E} q_{ij} y_{ij}     (2)
s.t.  Σ_{k∈R_s} (x^k_{ij} + x^k_{ji}) ≤ c_{ij} + y_{ij},   [i, j] ∈ E(s),  s ∈ S,     (2a)
      (N_s, −N_s) x^k = d^k,   k ∈ R_s,  s ∈ S,     (2b)
      x^k ≥ 0,  y ≥ 0.
The inequalities (2a) are capacity constraints which impose that in any operational state
s the flow through an arc [i; j] does not exceed the capacity. Equations (2b) are the node
balance equations for each demand k resulting of an operational state s 2 S.
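To make the meaning of the capacity constraints (2a) concrete, the following short sketch checks them for one failure state, remembering that an undirected edge [i, j] is used by the flows on both directed arcs (i, j) and (j, i). The data layout (dictionaries keyed by edges and arcs) is our own illustrative choice.

```python
def capacity_violations(edges, spare, extra, flows):
    """edges: list of undirected edges (i, j); spare/extra: dicts edge -> capacity (c, y);
    flows: dict commodity -> dict directed arc (i, j) -> flow. Returns violated edges."""
    violated = []
    for (i, j) in edges:
        used = sum(f.get((i, j), 0.0) + f.get((j, i), 0.0) for f in flows.values())
        if used > spare[(i, j)] + extra[(i, j)] + 1e-9:
            violated.append((i, j))
    return violated

edges = [(1, 2), (2, 3)]
spare = {(1, 2): 5.0, (2, 3): 1.0}
extra = {(1, 2): 0.0, (2, 3): 2.0}
flows = {"k1": {(1, 2): 3.0, (3, 2): 2.5}}     # commodity k1 traverses edge [2,3] "backwards"
print(capacity_violations(edges, spare, extra, flows))   # [] : both constraints hold
```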
3 Problem reformulation via decomposition
Decomposition often means two different things. Firstly, decomposition is a mathematical
technique to transform an initial large-scale problem into one of smaller dimension. Secondly,
decomposition refers to the algorithm that is used to solve the transformed problem. In this
section we consider the mathematical transformation only.
The usual transformation techniques in decomposing large-scale optimization problems are
the Lagrangian relaxation and Benders decomposition. To paraphrase Lagrangian relaxation,
let us assume the problem of interest is a maximization one and that the constraints can be
split into two sets: simple constraints and complicating ones. The method goes as follows: the
complicating constraints are relaxed, but the objective is modified to incorporate a penalty
on the relaxed constraint. The penalty parameters are nothing else than dual variables
associated with the relaxed constraints. Solving this problem yields an optimal value that is
a function of those dual variables. Under appropriate convexity and regularity assumptions,
this function is convex and its minimum value coincides with the optimal one of the original
problem. This defines a new problem, equivalent to the original one, but having a variable
space of smaller dimension. In the linear programming case, the convexity and regularity
assumption hold; besides, the method is just the same Dantzig-Wolfe decomposition.
To paraphrase Benders decomposition, let us assume the problem of interest is a minimization
one and that the variables can be split into two sets. By fixing the first set of variables, one
obtains a problem whose optimal solution depends on the variables in the first set only. The
transformed problem consists in minimizing this function. Under appropriate convexity and
regularity assumptions, this function is convex and its minimum value coincides with the
optimal value of the original problem. Again, the minimization of this function defines a
new problem that is equivalent to the original one, but has dimension of the first set of
variables.
In linear programming, the two schemes are dual of one another. If the first set of constraints
(or dually of variables) has small dimension, the transformation yields a new equivalent
problem, much smaller in size, but whose components - objective function and/or constraints
are implicitly defined. That is, these components are given by the optimal values (objective
and variables) of appropriate optimization problems. The next subsections provide detailed
examples of such transformations, and emphasize the similarity of the transformed problem
in the two cases.
Even though the new problem has smaller dimension, it is not likely to be easy. Difficulties
arise first from the nondifferentiability of the objective: derivatives are replaced by subgra-
dients, which do not provide the same quality of approximation. The second difficulty comes
from the computation of the subgradients themselves: they are obtained from the solution
of an auxiliary optimization problem. To achieve efficiency and accuracy, one must resort to
advanced algorithmic tools. Section 4 describes one such tool.
3.1 Lagrangian relaxation of block-angular programs
A natural class of structured problems eligible to decomposition are large-scale block-angular
linear programs of the following form
maximize   Σ_{i=1}^p ⟨c_i, x_i⟩
subject to  Σ_{i=1}^p A_i x_i ≤ a,     (3)
            B_i x_i ≤ b_i,  x_i ≥ 0,  i = 1, ..., p,
where x_i ∈ R^{n_i} and a, b_i, c_i, A_i, B_i are vectors and matrices of appropriate dimensions.
We associate with (3) the partial Lagrangian function
L(x, u) = Σ_{i=1}^p ⟨c_i, x_i⟩ + ⟨u, a − Σ_{i=1}^p A_i x_i⟩,   u ≥ 0.
Assuming problem (3) has an optimal solution, we may, by duality, replace it with
min_{u ≥ 0} L(u),     (4)
where L(u) is the optimal value of
maximize   ⟨a, u⟩ + Σ_{i=1}^p ⟨c_i − A_i^T u, x_i⟩
subject to  B_i x_i ≤ b_i,  x_i ≥ 0,  i = 1, ..., p.     (5)
In the usual parlance, Problem (4) is the master program while Problem (5) is the subprob-
lem. Note that the feasible set of the subproblem is independent of u. Since the objective is
the point-wise maximum of linear functions in u, it is convex in u.
Let us observe that subproblem (5) is separable; in consequence, the partial Lagrangian is
additive:
L(u) = ⟨a, u⟩ + Σ_{i=1}^p L_i(u),     (6)
where, for i = 1, ..., p,
L_i(u) = max { ⟨c_i − A_i^T u, x_i⟩ : B_i x_i ≤ b_i, x_i ≥ 0 }.     (7)
Problem (7) is always feasible, and either has an optimal solution, or is unbounded. Moreover, L_i is a piecewise linear convex function. Its domain is
dom L_i = { u : L_i(u) < +∞ }.
We can now rewrite problem (4) as
min { ⟨a, u⟩ + Σ_{i=1}^p L_i(u) : u ≥ 0, u ∈ ∩_{i=1}^p dom L_i }.
Let us say a word about the oracle. Assume first that u ∈ dom L_i and let x_i(u) be an optimal solution of (7). Then, the optimality cut follows directly from the definition of L_i in (7):
L_i(v) ≥ ⟨c_i − A_i^T v, x_i(u)⟩ = L_i(u) − ⟨A_i x_i(u), v − u⟩,   for all v.
Assume now that u ∉ dom L_i , that is, problem (7) is unbounded. Let d_i(u) be a ray along which ⟨c_i − A_i^T u, d_i(u)⟩ > 0, with B_i d_i(u) ≤ 0 and d_i(u) ≥ 0. We then have the feasibility cut
⟨c_i − A_i^T v, d_i(u)⟩ ≤ 0,
that excludes u as not being part of the domain of L_i .
In practical implementations, it might be useful to work with a relaxation of the subproblems.
More precisely, assume that the subproblems are solved with an ε-precision, ε > 0. That is, we generate a point x̄_i that is feasible for the subproblem (7) and satisfies ⟨c_i − A_i^T u, x̄_i⟩ ≥ L_i(u) − ε. Thus for an arbitrary v, we have the ε-subgradient inequality
L_i(v) ≥ ⟨c_i − A_i^T v, x̄_i⟩ ≥ L_i(u) − ε − ⟨A_i x̄_i, v − u⟩.
The ε-subgradients can be used to define a weaker LP relaxation. This relaxation has been
successfully used [21] in a sequential implementation of the decomposition. We also use it
in the parallel implementation.
The application of Lagrangian relaxation to the two block-angular classes of problems that
we described in Sections 2.1 and 2.2 is obvious. For the chosen problems, the number of
coupling constraints is very small: hence, the variable space of the transformed problem is
also very small.
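As an illustration of how the oracle for (7) can be implemented, the sketch below solves one subproblem with scipy's linprog and returns the value L_i(u) together with the subgradient −A_i x_i(u) that defines the optimality cut. This is our own illustration, not the accpm code, and for simplicity it assumes the subproblem is bounded, so no feasibility cut (ray extraction) is shown.

```python
import numpy as np
from scipy.optimize import linprog

def lagrangian_oracle(c_i, A_i, B_i, b_i, u):
    """Solve L_i(u) = max { <c_i - A_i^T u, x> : B_i x <= b_i, x >= 0 }.
    Returns (value, subgradient), where -A_i x_i(u) is a subgradient of L_i at u."""
    obj = c_i - A_i.T @ u
    res = linprog(-obj, A_ub=B_i, b_ub=b_i,
                  bounds=[(0, None)] * len(c_i), method="highs")
    if res.status == 3:
        raise RuntimeError("subproblem unbounded: a feasibility cut would be needed here")
    x = res.x
    return float(obj @ x), -A_i @ x

# Tiny example: one subproblem with 2 variables, 1 coupling row, 1 local constraint.
c_i = np.array([3.0, 2.0]); A_i = np.array([[1.0, 1.0]])
B_i = np.array([[1.0, 2.0]]); b_i = np.array([4.0])
val, g = lagrangian_oracle(c_i, A_i, B_i, b_i, u=np.array([0.5]))
print(val, g)   # L_i(u) = 10.0 and subgradient [-4.0]
```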
3.2 Benders decomposition
Linear programs addressed in the previous section display a primal block-angular structure in
which a set of complicating constraints links the independent blocks. A similar formulation
can be given in which a subset of variables links the independent blocks. Consider the
problem
minimize   ⟨c_0, x_0⟩ + Σ_{i=1}^p ⟨c_i, x_i⟩
subject to  T_i x_0 + W_i x_i = b_i,   i = 1, ..., p,     (10)
            x_0 ≥ 0,  x_i ≥ 0,   i = 1, ..., p,
where the vectors c_i , b_i and the matrices T_i , W_i are of appropriate dimensions. The constraint matrix of this linear program
displays a dual block-angular structure.
For a given x 0 , let Q(x 0 ) be the optimal value of the subproblem
minimize   Σ_{i=1}^p ⟨c_i, x_i⟩
subject to  W_i x_i = b_i − T_i x_0 ,  x_i ≥ 0,  i = 1, ..., p.     (11)
Then we can replace (10) with
minimize   ⟨c_0, x_0⟩ + Q(x_0)
subject to  x_0 ≥ 0.     (12)
The function Q(x_0) is additive:
Q(x_0) = Σ_{i=1}^p Q_i(x_0),     (13)
where Q_i(x_0) is the value of
minimize   ⟨c_i, x_i⟩
subject to  W_i x_i = b_i − T_i x_0 ,  x_i ≥ 0.     (14)
In other words, the subproblem (11) is separable. It is interesting to consider the dual of
maximize   ⟨b_i − T_i x_0, u_i⟩
subject to  W_i^T u_i ≤ c_i .     (15)
The objective value is defined as the point-wise maximum of linear forms in x 0 : it is thus
convex. Note that the constraints in (15) are independent of x 0 . We can safely assume that
(15) is feasible, otherwise the original problem is either unbounded or infeasible.
Problem (14) may be infeasible for some values of x 0 . Assume first that it is feasible. Then
it has an optimal solution x_i(x_0); let u_i(x_0) be the dual optimal solution. In view of (15), the optimality cut writes
Q_i(y_0) ≥ ⟨b_i − T_i y_0, u_i(x_0)⟩ = Q_i(x_0) − ⟨T_i^T u_i(x_0), y_0 − x_0⟩,   for all y_0.
Assume now that (14) is not feasible. Having assumed that the dual (15) is feasible, then it is unbounded. Let d_i(x_0) be a ray along which the dual objective ⟨b_i − T_i x_0, u_i⟩ tends to +∞; equivalently, W_i^T d_i(x_0) ≤ 0 and ⟨b_i − T_i x_0, d_i(x_0)⟩ > 0. We then have the feasibility cut
⟨b_i − T_i y_0, d_i(x_0)⟩ ≤ 0,
that excludes x_0 as not being part of the feasible domain of (10).
Similarly to the Lagrangian relaxation, the exact optimality requirement may be replaced
with ε-optimality; we then obtain the weaker inequality
Q_i(y_0) ≥ Q_i(x_0) − ε − ⟨T_i^T ū_i, y_0 − x_0⟩,
where ū_i is an ε-optimal dual solution of (15).
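In the same spirit as before, the Benders oracle can be sketched by solving the dual (15) directly, so that a dual solution (or an unbounded status, signalling infeasibility of (14)) is what builds the cut. This is again a hedged illustration with scipy rather than the code used in the paper, and the extraction of an explicit dual ray for the feasibility cut is left out.

```python
import numpy as np
from scipy.optimize import linprog

def benders_oracle(c_i, b_i, T_i, W_i, x0):
    """Solve the dual (15): max { <b_i - T_i x0, u> : W_i^T u <= c_i }, u free.
    If bounded, returns Q_i(x0), the dual solution u, and the cut slope -T_i^T u."""
    rhs = b_i - T_i @ x0
    res = linprog(-rhs, A_ub=W_i.T, b_ub=c_i,
                  bounds=[(None, None)] * len(b_i), method="highs")
    if res.status == 3:
        return None    # dual unbounded -> (14) infeasible -> a feasibility cut is needed
    u = res.x
    return float(rhs @ u), u, -T_i.T @ u

# Toy data: one linking variable x0, a subproblem with 2 variables and 1 equality row.
c_i = np.array([1.0, 2.0]); b_i = np.array([3.0])
T_i = np.array([[1.0]]);    W_i = np.array([[1.0, 1.0]])
print(benders_oracle(c_i, b_i, T_i, W_i, x0=np.array([1.0])))   # Q_i = 2.0, u = [1.0]
```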
4 Cutting plane methods to solve decomposed problems
In the formulation of the transformed problem, the Lagrangian relaxation and Benders decomposition
generate problems that are usually difficult despite the relatively small dimension
of the variable space. The difficulty stems from the fact that the problems are essentially
nondifferentiable. Their solving requires advanced algorithmic tools. In this section we concentrate
on the solution method. More specifically, we present the generic cutting plane
method, and its analytic center variant, to solve the nondifferentiable problem of interest.
Cutting plane methods apply to the canonical convex problem
min { f(x) : x ∈ X ∩ X_0 },     (16)
where X ⊂ R^n is a closed convex set, X_0 ⊂ R^n is a compact convex set, and f : R^n → R is a convex function.
We assume that the set X 0 is defined explicitly, while the function f and the set X are
defined by the following oracle. Given x̄ ∈ X_0 , the oracle answers either one statement:
1. x̄ is in X and there is a support vector d such that f(x) ≥ f(x̄) + ⟨d, x − x̄⟩ for all x;
2. x̄ is not in X and there is a separation vector a such that ⟨a, x − x̄⟩ ≤ 0 for all x ∈ X.
Answers of the first type are named optimality cuts, whereas answers of the second type are
feasibility cuts.
The successive answers of the oracle to a sequence of query points x k 2 X 0 can be used to
define an outer polyhedral approximation of the problem in the epigraph space. Let {x^j}, j ∈ K = {1, ..., k}, be the sequence of query points. The set K is partitioned into K = K_1 ∪ K_2 , where K_1 and K_2 correspond to optimality cuts and feasibility cuts respectively. Let the oracle answer at x^j be d^j (optimality) or a^j (feasibility), and let θ_k = min { f(x^j) : j ∈ K_1 } be the best recorded function value. The polyhedral approximation defines, in the epigraph space (x, z), the set of localization:
F_k = { (x, z) : x ∈ X_0 , z ≤ θ_k ,
        ⟨d^j, x − x^j⟩ ≤ z − f(x^j),  j ∈ K_1 ,
        ⟨a^j, x − x^j⟩ ≤ 0,  j ∈ K_2 }.
Clearly, F k contains the optimal solutions of the initial problem.
The k-th step of the generic cutting plane algorithm is as follows.
1. Pick (x^k, z^k) ∈ F_k .
2. The oracle returns the generic cut ⟨d^k, x − x^k⟩ + δ (f(x^k) − z) ≤ 0, where δ = 1 if the cut is an
optimality one, and 0 otherwise.
3. Update F_{k+1} := F_k ∩ { (x, z) : ⟨d^k, x − x^k⟩ + δ (f(x^k) − z) ≤ 0 }.
Specific cutting plane algorithms differ in the choice of the query point x k . Let us focus
on two strategies. Kelley's strategy consists in selecting (x^k, z^k) ∈ argmin { z : (x, z) ∈ F_k }.
This strategy is possibly the simplest, but by no means the only alternative. For a more
comprehensive discussion on nondifferentiable optimization methods, we refer to [28, 18].
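To fix ideas, the next sketch computes Kelley's query point by minimizing z over the current localization set, with X_0 taken as a simple box; it is a generic illustration of the strategy, not the implementation used in the paper.

```python
import numpy as np
from scipy.optimize import linprog

def kelley_query(points, values, subgrads, lo, hi, best):
    """Minimize z s.t. z >= f(x_j) + <d_j, x - x_j> (optimality cuts),
    lo <= x <= hi, z <= best. Feasibility cuts could be appended as extra rows."""
    n = len(lo)
    # Row j:  d_j . x - z <= d_j . x_j - f(x_j)
    A = np.hstack([np.array(subgrads), -np.ones((len(points), 1))])
    b = np.array([d @ x - f for x, f, d in zip(points, values, subgrads)])
    c = np.zeros(n + 1); c[-1] = 1.0
    bounds = [(l, h) for l, h in zip(lo, hi)] + [(None, best)]
    res = linprog(c, A_ub=A, b_ub=b, bounds=bounds, method="highs")
    return res.x[:-1], res.x[-1]

# One cut of f(x) = x^2 taken at x = 1, on the box [-2, 2]: the next query jumps to the
# boundary, illustrating the instability that motivates central query points.
x, z = kelley_query([np.array([1.0])], [1.0], [np.array([2.0])], lo=[-2.0], hi=[2.0], best=1.0)
print(x, z)   # x = [-2.], z = -5.0
```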
To define the analytic center strategy let us introduce the slacks
s_0 = θ_k − z,   s_j = z − f(x^j) − ⟨d^j, x − x^j⟩ for j ∈ K_1 ,   s_j = −⟨a^j, x − x^j⟩ for j ∈ K_2 ,
and the associated potential
φ(x, z) = F_0(x) − log s_0 − Σ_{j∈K} log s_j ,
where F_0 is the barrier function associated with the set X_0 . The analytic center strategy selects (x^k, z^k) as the minimizer of φ over the interior of F_k .
For a concise summary of the method and its convergence properties, see [18].
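For completeness, here is a bare-bones Newton routine that computes the analytic center of a polyhedron {x : Ax ≤ b} from a strictly interior starting point; in the method above, the same kind of computation is carried out on the localization set F_k with the barrier of X_0 added. This is an unsophisticated sketch for illustration only, not the projective algorithm implemented in accpm.

```python
import numpy as np

def analytic_center(A, b, x0, tol=1e-8, max_iter=50):
    """Minimize phi(x) = -sum(log(b - A x)) by damped Newton steps,
    starting from a strictly feasible x0 (all slacks positive)."""
    x = np.array(x0, dtype=float)
    for _ in range(max_iter):
        s = b - A @ x
        g = A.T @ (1.0 / s)                      # gradient of the barrier
        H = A.T @ ((1.0 / s**2)[:, None] * A)    # Hessian of the barrier
        dx = np.linalg.solve(H, -g)
        if float(-g @ dx) < tol:                 # squared Newton decrement
            break
        t = 1.0
        while np.any(b - A @ (x + t * dx) <= 0):  # damp to stay strictly feasible
            t *= 0.5
        x = x + t * dx
    return x

# The analytic center of the box 0 <= x <= 1 in two dimensions is its midpoint.
A = np.vstack([np.eye(2), -np.eye(2)]); b = np.array([1.0, 1.0, 0.0, 0.0])
print(analytic_center(A, b, x0=[0.2, 0.7]))     # approx [0.5, 0.5]
```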
In practice, it is often the case that the function f is additive and the set X is the intersection
of many sets. As shown in Section 3.1, this is the case in Lagrangian relaxation, when
the Lagrangian dual turns out to be the sum of several functions L i (each one of them
corresponding to a subproblem) and the feasible set X is the intersection of the domains of
these functions. In such a case, the oracle returns separate information for each function L i .
We also deal with the similar situation in Benders decomposition with separable subproblem,
see Section 3.2.
Let us reformulate Problem (16) as
min { Σ_{i=1}^p f_i(x) : x ∈ (∩_{i=1}^p X_i) ∩ X_0 }.
For any x̄ ∈ X_0 the oracle associated with this formulation provides the answers
1. x̄ is in X and there are support vectors d_i such that f_i(x) ≥ f_i(x̄) + ⟨d_i, x − x̄⟩ for all x, i = 1, ..., p;
2. x̄ is not in X and there are separation vectors a_i such that ⟨a_i, x − x̄⟩ ≤ 0 for all x ∈ X_i , for those i with x̄ ∉ X_i .
The above oracle can be viewed as a collection of independent oracles, one for each function
f i and one for each X i . This feature has two beneficial consequences. Firstly, the disaggregate
formulation much improves the performance of Kelley's cutting plane method [24] and
accpm [10]. Secondly, and most importantly in our case, disaggregation naturally allows
parallel computations.
5 Parallel implementation of cutting plane methods
Decomposition is an obvious candidate for parallel computation [7, 11]. All independent
oracles can be examined in parallel to reduce the time needed to gather information from
the oracle. To make sure that this scheme is computationally advantageous, the ratio of
the times spent to solve the consecutive master problems and to solve (sequentially) the
subproblems associated with the oracle should be small. The smaller this ratio, the larger
the part of work can be done in parallel. Such a situation often appears in decomposition of
real-life problems. There are two different cases which may render the oracle expensive:
1. there are relatively few computationally involved independent oracles; or
2. there are many possibly very cheap independent oracles.
Both cases can benefit from parallel computation on a machine with relatively few, but
powerful, parallel processors. Moreover, we expect that decomposition working in any of
the two above-mentioned conditions will be made scalable when implemented on parallel
machine. Roughly speaking, scalability means here that the ratio of computing times between
the sequential algorithm and a parallel algorithm increases linearly with the number of
subproblems (as long as that does not exceed the number of parallel processors available),
see [7, 11].
A related notion used extensively in the performance evaluation of parallel algorithms is the
speed-up. It is the ratio of the solution time when the problem is solved on one processor machine
to that of its solution on k-processor parallel machine. Ideally, for scalable algorithms,
speed-up's keep close to k for k-processor machine.
5.1 The block-angular application problems
The applications presented in Sections 2.1 and 2.2 have relatively small number of sub-problems
varying from 3 to 16. These subproblems may have large size but, unfortunately,
may significantly differ in size among themselves. Consequently, it is not always possible to
achieve a good load balancing in the parallel code.
5.2 Survivability in telecommunications networks
The survivability problem (2) is a block-angular linear program with two types of coupling
elements: the constraints (2a) couple the independent blocks in (2b), while the variable y
couples the jSj blocks of otherwise independent constraints (2a). In [30], the choice was to
apply a Lagrangian relaxation to (2a). Note that those constraints are rather numerous. In
this paper we apply Benders decomposition by fixing y as primary variable. We obtain a
problem of type (12). In this application, the variable space y has small dimension, but, as
we shall see, the subproblems are rather involved.
Since there is no cost associated with the rerouting flows x, the subproblems of type (14)
take one of the two values 0 or +1. Formally, one can write:
s.t.
k2Rs
This problem is essentially a feasibility problem. It is named the compatible multicommodity
flow problem. Although the problem involves relatively few commodities 3 , its dimension is
not small.
Owing to the block angular structure of (18), one can apply a decomposition scheme to solve
it. In our case we introduce an auxiliary problem to handle the first phase
min
s.t.
k2Rs
This problem has an optimal solution that is zero, if L s or takes a positive finite
value if L s To solve this first phase problem, we use a Lagrangian decomposition
scheme by dualizing the constraints (19a). If the optimal solution of (19) is positive, then
the optimal dual variables associated with the dualized constraints can be used to build a
feasibility cut. (For more details on compatible multicommodity flows, see Minoux [32].)
The Lagrangian relaxation of the compatible multicommodity flow problem (19) is relatively
easy to solve, and Kelley's cutting plane scheme performs well.
In contrast, Problem (12) in the y-space is difficult, with a costly oracle. Indeed, we deal
here with a nested (two-level) decomposition scheme. The subproblems of the higher level
decomposition, i.e., problems (19), are themselves decomposed. For this reason the analytic
center cutting plane method is a method of choice to solve it, as it achieves greater stability
and efficiency.
Let us now turn our attention to load balancing among independent processors. The oracle
consists of relatively many independent oracles, each of them an independent compatible
multicommodity flow problem. The number of independent oracles is |S|, a number that
may be rather large (over a hundred elements). The computations can be distributed on the
various computing units. The computational load may well vary from one oracle to the
other, i.e., from one element of S to the other. If we used as many computing units as |S|,
the computing load would be unbalanced and the parallel computation would not be scalable.
However, we are not interested in very large parallel machines, but rather in clusters of workstations
involving at most a few dozen of them. We therefore face the issue of distributing the work evenly
among the computing units.
3 The number of commodities is equal to the number of different origin/destination communications which
are interrupted by the network component failure.
Figure 1: Load balancing (CPU time per computing node).
We proceed as follows. At the beginning, the subproblems are cyclically distributed among the
nodes. When all subproblems have been solved, they are sorted according to the time needed
to solve them. The time spent on each node is computed, as well as the average time t̄ per node.
Tasks are then transferred from the nodes where the time spent is above the average to those
where it is below the average, as illustrated in Figure 1. On each node, the tasks are scanned in
increasing order of completion time and kept on the node as long as the total time of the tasks
kept so far remains below t̄. On a node whose total time is above the average, this procedure
leaves a few tasks over; these jobs are transferred to the nodes for which the time spent is below
average.
Note that, as the decomposition scheme progresses, the relative difficulty of the subproblems
may vary. For this reason, we choose to rebalance the load dynamically at each iteration.
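The rebalancing heuristic just described can be sketched as follows; the task times would come from the previous iteration, and the data structures (dictionaries keyed by node and task identifiers) are merely one possible representation.

def rebalance(assignment, task_time):
    # assignment: node id -> list of task ids; task_time: task id -> seconds
    node_time = {n: sum(task_time[t] for t in tasks) for n, tasks in assignment.items()}
    avg = sum(node_time.values()) / len(assignment)
    surplus = []                                     # tasks taken from overloaded nodes
    for n, tasks in assignment.items():
        if node_time[n] <= avg:
            continue
        kept, acc = [], 0.0
        for t in sorted(tasks, key=lambda t: task_time[t]):   # increasing solve time
            if acc + task_time[t] <= avg:
                kept.append(t); acc += task_time[t]
            else:
                surplus.append(t)
        assignment[n], node_time[n] = kept, acc
    for t in sorted(surplus, key=lambda t: task_time[t], reverse=True):
        n = min(node_time, key=node_time.get)        # currently least-loaded node
        assignment[n].append(t)
        node_time[n] += task_time[t]
    return assignment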
6 Numerical results
6.1 Computational resources
To perform the numerical experiments we used two different parallel machines: a cluster
of PC's linked by fast Ethernet and a parallel supercomputer, the IBM SP2.
The latter is an expensive machine intended to cover the needs of a large institution, whereas
the former is a cheap form of parallelism that benefits from recent advances in fast Ethernet
technology. We briefly describe some salient characteristics of both machines.
Following the idea of the Beowulf project [5, 37], the cluster of 16 Pentium Pro PC's runs
under the LINUX operating system (RedHat 4.2 distribution). The cluster is linked by fast
Ethernet, which allows a 100 Mbit/s transfer rate. Six machines have 384 MB RAM (one of them
is the master) and the remaining 10 machines have 64 MB RAM each. We installed MPICH,
a public domain version of MPI, to handle communications between the processors.
The IBM RS/6000 Scalable POWERparallel System (RS/6000 SP2) at the University of
Geneva has 15 nodes. Each of them is a complete RS/6000 workstation with CPU, disk
and memory that runs its own operating system, AIX 4.1. The processors are standard
RS/6000 processors, running at 77 MHz on one "wide" node and at 66 MHz on the
fourteen "thin" nodes. The nodes are divided into three pools: one dedicated to interactive
jobs, another to input/output-intensive jobs, and a last one to parallel production work.
The last pool was used for the numerical tests; it contains eight processors (all of them
thin nodes). Four of these nodes have 192 MB of RAM, the remaining four nodes
have 128 MB. The SP2 belongs to the message-passing family of machines; access to its
switch is made through a library provided by the manufacturer. Standard libraries
such as PVM [14] or MPI [13, 41] are also available. The PVM and MPI libraries are portable
to other platforms, a definite advantage in our view.
The implementation of accpm corresponds to the one described in [20]. The code combines
three programming languages: C, C++ and FORTRAN 77. On the cluster of PC's we compiled
them with the GNU compilers gcc, g++ and g77, respectively. On the IBM SP2 machine, we
compiled them with the AIX compilers mmxlc, mmxlC and mmxlf, respectively. In both
cases all compilations were done with the default optimization level (the -O option).
6.2 Programming language for parallel computations
As claimed earlier, our goal was to keep our code general and portable to many parallel computing
platforms. Thus, we implemented the parallel communication with the Message Passing
Interface (MPI) library [22, 13, 41]. An important advantage of this parallel communication
model is that it can be implemented on any multiprocessor machine as well as on a cluster of
independent machines loosely coupled by a network of the Ethernet family. Public domain
implementations of MPI are available for the most popular workstations [22]. Moreover, the
MPI standard has been accepted by commercial vendors and is implemented on a majority
of parallel computers (in particular, on IBM's SP2). Consequently, one can develop an implementation
using MPI, test it on a network of workstations or a network of PC's, and then
port it to a sophisticated parallel machine.
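For illustration, the master/worker exchange underlying our scheme can be expressed in a few lines with the Python MPI binding mpi4py (our actual codes are written in C, C++ and FORTRAN 77; the data below are placeholders). The same source runs unchanged on a PC cluster or on the SP2.

from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

if rank == 0:                                  # master: broadcast prices, collect cuts
    prices = [0.0] * 8                         # illustrative dual prices
    for w in range(1, size):
        comm.send(prices, dest=w, tag=1)
    cuts = [comm.recv(source=w, tag=2) for w in range(1, size)]
else:                                          # worker: solve its share of subproblems
    prices = comm.recv(source=0, tag=1)
    cut = {"worker": rank, "value": sum(prices)}   # placeholder for a real oracle call
    comm.send(cut, dest=0, tag=2)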
6.3 Solving the block-angular linear programs
6.3.1 Problem characteristics
We solved a set of realistic block-angular linear programs with the number of subproblems
varying from 3 to 16.
MARKAL is a multi-country, multi-period energy planning model with a global constraint
limiting the emission of CO_2 [4]. ENERGY are energy planning problems developed
at IIASA [34]. There are two variants of these problems, a small and a large one, named
ENERGY-S and ENERGY-L, respectively. A few variants of the large model are considered;
they differ in the number of subproblems, which varies from 2 to 8. The MACHINES problems
are decentralized versions of Markov decision models [12]. We solve one small, two
medium, and one large model of that type.
The problem statistics are collected in Table 1. Columns 2, 3 and 4 specify, for every problem,
the size of the master, i.e., the number of coupling constraints m_0, the number of rows that
are active at the optimum, and the number of GUB rows, which is the same as the number
of subproblems p, respectively. The following columns characterize the subproblems: they
report the total number of rows, columns and nonzero elements in all p subproblems. Several
problems have well-balanced subproblems, the others have not. The last two columns give an
insight into this balance: they specify the minimum and the maximum column size of the
subproblems.
Table 1: Statistics of the block-angular decomposable problems. Columns: Problem; Master (rows, active, GUB); Subproblems (rows, cols, nonz); Balance (min cols, max cols).
Problem      Calls  Cuts   Time (s)
MARKAL         26     78      2530
ENERGY-L4      28    112    112391
MACHINES-S     11    176       172
MACHINES-L      9    144       532
Table 2: Performance of sequential accpm on the block-angular LPs.
6.3.2 Numerical results
We solved the problems to the prescribed relative precision. For all problems considered here,
the master program is very small compared to each subproblem. As a result, the time to
compute analytic centers is negligible. A key element in the performance is the number
of times the master program calls the oracle. We concentrate on that element and neglect
the internal iterations in the master program.
Our first results pertain to a sequential implementation of the decomposition code on one
node of the cluster of PC's. We use the code of [21] with the analytic center cutting plane
method [20]. Table 2 exhibits the results for the problems listed in Table 1. We report the
number of outer iterations, the total number of cuts handled by accpm, and the overall CPU
time in seconds required to reach the optimal solution. The subproblems are solved with the
HOPDM (Higher Order Primal-Dual Method) code [19] and benefit from an interior point
warm start technique.
The problems in the family ENERGY-L are particularly demanding with respect to memory.
Each subproblem needs more than 80 MB of storage. Their solution time with the sequential
code is also considerable.
Table 3: Performance of the parallel decomposition of the block-angular LPs. For each problem: number of subproblems (SubPb) and CPU time with speed-up on 1, 2, 4 and 8 processors.
The parallel decomposition was implemented on the cluster of 16 PC's running under Linux.
All problems could be solved when the appropriate number of processors, 1, 2, 4 or 8,
was used. In Table 3 we report the CPU times in seconds required to solve each problem
and the speed-up with respect to the solution time required by the sequential code. We did
not include results of running a problem with p subproblems on a parallel machine
with more than p processors (we put "nr" to mark that these runs were not performed). Clearly,
with our parallel decomposition approach one cannot expect speed-up's to exceed p.
The results collected in Table 3 show that our parallel decomposition reaches reasonable
speed-up's: from 1.52 to 1.94 when 2 processors are used, about 3 when 4 processors are
used, and about 5 on an 8-processor machine. These results seem more than satisfactory
for the problems we solved. The best speed-up's have been obtained for the ENERGY problems,
which have very well balanced subproblems (cf. Table 1).
The Markov decision problems do not scale as well, due to the discrepancy in the size
and the difficulty of their subproblems. Yet speed-up's of about 5 on an 8-processor
machine seem fairly satisfactory. Although these problems are not very large for today's
LP codes, they are very difficult due to the presence of low-probability transition rates
(very small coefficients in the matrix) with important global consequences. An attempt to
solve the straight, undecomposed formulation breaks down commercial LP solvers. The use
of decomposition spreads the numerical difficulties across the subproblems, and the overall
problem can then easily be solved.
6.4 Survivability of telecommunications networks
6.4.1 Problem characteristics
The tests have been performed with problems of a different nature: eight of them were generated
with a random problem generator (see [40]); the others are disguised real-life problems
(T1, T2, T3). In Table 4 we report their characteristics. For each network, we first give
the number of nodes (Nodes), then the number of arcs (Arcs) and the number of routings
(Routings) 4 . The latter characteristics concern the network in its normal operational
state. We next indicate features related to the failure states: the number of failures
(Failures) 5 and the overall number of conditional demands resulting from the failures
(Cond. Demand). Let us observe that each problem involves a large number of routings.
Table 4: Characteristics of the survivability problems. Columns: basic data (Nodes, Arcs, Routings) and failure data (Failures, Cond. Demand).
To give an idea of the problem dimension, it is possible to reformulate the survivability
problem as a large LP [31]. The size of each problem in a compact LP formulation involving
mass-balance equations can be found in Table 5; the numbers of rows and of columns are
obtained by summing, over all failure states s ∈ S and all reroutings k ∈ R_s, the corresponding
mass-balance constraints and arc-flow variables.
Let us observe that P239 has the largest dimension: it requires more than 1.5 million variables
and about two hundred thousand constraints. Such a large dimension results from the important
number of failures and conditional demands.
4 Recall that the routings are fixed data in the problem.
5 The theoretical number of failures is equal to |S| + |N|. However, some arcs may have no regular flow on
them and their failure has no consequence. We count in Failures only the failures that force a rerouting of
messages.
Problem   Variables   Constraints
P22            3676          1332
P53           92301         16061
Table 5: Problem characteristics.
6.4.2 Numerical results
Just as in the case of the block-angular linear programs, the time spent computing the
analytic center in the master program is small, if not negligible, with respect to
the time spent in the subproblems. The main difference with the previous set of experiments
concerns the number of subproblems: they are far more numerous; they are also smaller,
but more involved, as they are themselves solved via a decomposition scheme. Again, the key
factor in the overall performance is the number of calls to the oracle.
The experiments were conducted on 8 processors 6 of the IBM SP2. As mentioned previously,
solving the subproblems requires solving compatible multicommodity network flow problems, for which
Kelley's cutting plane method performs well. We therefore needed an efficient simplex code to
solve these inner problems. Unfortunately, when the numerical experiments were made, no commercial
simplex code was available under the Linux operating system (this is no longer true).
Consequently, we did not solve these problems on our cluster of PC's.
For each problem, we provide in Table 6 the number of calls to the oracle (each call corresponds
to solving several compatible multiflow problems), the CPU times in seconds for
the oracle (t_o), the master (t_m) and the whole algorithm (t_t), as well as the time speed-up (one
processor compared with 8 processors). Note that the total time t_t includes input/output
operations that add to the oracle and master computing times.
Let us first observe that the number of iterations grows regularly with the number of network
arcs. Note that the time spent in the master of accpm does not exceed 2%; most of the
time is spent in the oracle, as we previously remarked. All problems, even the largest, could
be solved in a reasonable time.
Problems P227 and P239 were not solved sequentially, due to time constraints. Consequently,
speed-up's were not computed for them; in these cases we wrote "nc" in the corresponding
column. For all other problems, we computed the speed-up's. The figures run
from 3.8 for the smaller problems to 7 for the larger ones. Note that P22 and the next smallest
instance are much smaller than the other instances; yet their speed-up's are 3.8 and 5.3, respectively.
We feel that similar ratios could be achieved on larger problems with more processors, giving
hope that much larger problems could be solved in practice.
6 This is the maximum number of processors available for parallel experiments on this machine.
Ideally, the running time of a parallel application should decrease linearly when the number
of processors increases. In Figure 2, we attempt to reveal the influence of the number of
processors on the overall running time for two problems.
Problem   Calls   Cuts   t_o (s)   t_m (s)   t_t (s)   Speed-up
P53         26      41     137.7      0.35      138       4.42
PB5         26     282    1215.9      8.78     1225       5.85
Table 6: Performance of the parallel decomposition on the survivability problems.
Figure 2 concerns the problem P145. In Figure 2 (a) we display the overall running time
as a function of the number of processors used in parallel. We observe a large decrease
in the running time: from more than 15190 seconds to less than 2450 seconds. Figure 2 (b)
translates this into speed-up: the dashed line represents perfect scalability, that is, a linear
decrease of the time with the number of processors, whereas the solid line represents the
observed speed-up. The solid line stays close to the dashed one; the figure thus indicates that
the scalability is rather good on this example. In fact, for 8 processors the speed-up is 6.2.
We have not been able to measure the speed-up for problems larger than P145; this would
have required freezing the parallel machine for a considerable time, which we were not allowed
to do. However, we conjecture that similar-if not better-speed-up's should be expected for
the larger problems. A large number of failures allows more freedom in the load
balancing: it is intuitively quite clear that it should be easier to obtain equal times when
dividing a larger number of tasks.
Figure 2: CPU time and speed-up as a function of the number of processors for problem P145. Panel (a): CPU time versus number of processors; panel (b): observed versus optimal (linear) speed-up.
Conclusions
We have shown in this paper the computational advantages resulting from the parallelisation
of the oracle in a decomposition using central prices. We have applied the code to solve
two different types of problems: one with a small number of computationally expensive independent
subproblems, and one with a large number of relatively cheap subproblems. In both cases
remarkable speed-up's have been obtained; in the latter case, our parallel code reaches scalability.
Two realistic applications, in energy planning and in telecommunications, as well as a challenging
Markov decision problem, have been shown to take advantage of our parallel implementation
of the analytic center cutting plane method.
An interesting alternative is to modify the original accpm method to have the master program
activated any time it is idle and a subproblem has been solved on another processor.
Reciprocally, solution of a subproblem starts as soon as a processor becomes available, similarly
to the idea of "asynchronous" parallel decomposition of [39]. The dual prices to be used
are the most recent output of the master program. In that way one may expect to increase
the efficiency of decomposition and possibly improve the speed-ups of the algorithm.
Acknowledgment
We are grateful to Jeremy Day, Alain Haurie and Francesco Moresino for providing us with the
challenging Markov decision problems arising in manufacturing.
--R
Algorithms for singularly perturbed limiting average Markov control problems
A cutting plane method from analytic centers for stochastic programming
A multinational MARKAL model to study joint implementation of carbon dioxide emission reduction measures
Beowulf: A parallel workstation for scientific computation.
Partitioning procedures for solving mixed-variables programming prob- lems
Parallel an Distributed Computations
Newton's method for convex programming and Tchebycheff approximation
The decomposition algorithm for linear programming
On the comparative behavior of Kel- ley's cutting plane method and the analytic center cutting plane method
Optimal ergodic control of singularly perturbed hybrid stochastic systems
MPI: A message-passing interface standard
PVM: Parallel Virtual Machine - A User's Guide and Tutorial for Networked Parallel Computing
A polynomial Newton method for linear programming
Solving nonlinear multicommodity flow problems by the analytic center cutting plane method
Decomposition and nondifferentiable optimization with the projective algorithm
Interior point methods for nondifferentiable optimiza- tion
HOPDM (version 2.12) - a fast LP solver based on a primal-dual interior point method
Warm start and
User's guide to MPICH
Resolution of mathematical programming with nonlinear constraints by the methods of centers
the impact of formulation on decomposition
A new polynomial-time algorithm for linear programming
The cutting plane method for solving convex programs
A survey of bundle methods for nondifferentiable optimization
in Handbooks in Operations Research and Management Science
New variants of bundle meth- ods
Survivability in telecommunication net- works
Optimum synthesis of a network with non-simultaneous multicommodity flow requirements
Solving combinatorial optimization problems using Karmarkar's algorithm
Informational Complexity and Efficient Methods for Solution of Convex Extremal Problems
based on Newton's method
Beowulf: Harnessing the power of parallelism in a pile-of-PCs
A regularized decomposition method for minimizing a sum of polyhedral functions
Routing and Survivability Optimization Using a Central Cutting Plane Method
MPI: the Complete Reference
"centre"
A potential reduction algorithm allowing column generation
--TR
A new polynomial-time algorithm for linear programming
A regularized decomposition method for minimizing a sum of polyhedral functions
A polynomial-time algorithm, based on Newton''s method, for linear programming
Parallel and distributed computation: numerical methods
Nondifferentiable optimization
Decomposition and nondifferentiable optimization with the projective algorithm
Solving combinatorial optimization problems using Karmakar''s algorithm
Parallel decomposition of multistage stochastic programming problems
Multicommodity network flows
PVM: Parallel virtual machine
A cutting plane method from analytic centers for stochastic programming
New variants of bundle methods
Solving nonlinear multicommodity flow problems by the analytic center cutting plane method
MPI
Warm Start and MYAMPERSAND#949;-Subgradients in a Cutting Plane Scheme
--CTR
Laura Di Giacomo , Giacomo Patrizi, Dynamic Nonlinear Modelization of Operational Supply Chain Systems, Journal of Global Optimization, v.34 n.4, p.503-534, April 2006 | cutting plane method;parallel computation;analytic center;decomposition;real-life problems |
380322 | Efficient Local Search for DAG Scheduling. | AbstractScheduling DAGs to multiprocessors is one of the key issues in high-performance computing. Most realistic scheduling algorithms are heuristic and heuristic algorithms often have room for improvement. The quality of a scheduling algorithm can be effectively improved by a local search. In this paper, we present a fast local search algorithm based on topological ordering. This is a compaction algorithm that can effectively reduce the schedule length produced by any DAG scheduling algorithm. Thus, it can improve the quality of existing DAG scheduling algorithms. This algorithm can quickly determine the optimal search direction. Thus, it is of low complexity and extremely fast. | Introduction
Scheduling computations onto processors is one of the crucial components of a parallel processing
environment. Scheduling can be performed at compile time or at runtime. Scheduling performed
at compile time is called static scheduling; scheduling performed at runtime is called dynamic
scheduling. The flexibility inherent in dynamic scheduling allows adaptation to unforeseen
application requirements at runtime. However, dynamic load balancing suffers from run-time
overhead due to load information transfer among processors, the load-balancing decision-making
process, and the communication delay due to task relocation. Furthermore, most runtime
scheduling algorithms utilize neither the characteristics of the application problem nor global
load information for the load balancing decision. The major advantage of static scheduling is
that the overhead of the scheduling process is incurred at compile time, resulting in a more
efficient execution environment compared to dynamic scheduling. Static scheduling can utilize
knowledge of the problem characteristics to reach a well-balanced load.
We consider static scheduling algorithms that schedule an edge-weighted directed acyclic graph
(DAG), also called a task graph or macro-dataflow graph, to a set of homogeneous processors to
minimize the completion time. Since the static scheduling problem is NP-complete in its general
forms [6], and optimal solutions are known only in restricted cases [3, 5, 7], there has been considerable
research effort in this area, resulting in many heuristic algorithms [19, 24, 4, 25, 20, 2, 14]. In this
paper, instead of suggesting a new scheduling algorithm, we present an algorithm that can improve
the scheduling quality of existing scheduling algorithms by using a fast local search technique.
This algorithm, called TASK (Topological Assignment and Scheduling Kernel), systematically
minimizes a given schedule in a topological order. In each move, the dynamic cost of a node is
used to quickly determine the search direction, and the length of a given schedule can be reduced
effectively.
This paper is organized as follows. In the next section, we review DAG scheduling algorithms.
In Section 3, the local search technique is described. The random local search algorithm is
discussed in Section 4. In Section 5, we propose a new local search algorithm, TASK. Performance
data and comparisons are presented in Section 6. Finally, Section 7 concludes this paper.
Scheduling
A directed acyclic graph (DAG) consists of a set of nodes {n_1, n_2, ..., n_n} connected by a set of
edges, each of which is denoted by e_{i,j}. Each node represents a task, and the weight of node n_i,
w(n_i), is the execution time of the task. Each edge represents a message transferred from one
node to another node, and the weight of edge e_{i,j}, w(e_{i,j}), is equal to the transmission time of
the message. The communication-to-computation ratio (CCR) of a parallel program is defined as
its average communication cost divided by its average computation cost on a given system. In a
DAG, a node that does not have any parent is called an entry node, whereas a node that does not
have any child is called an exit node. A node cannot start execution before it gathers all of the
messages from its parent nodes. In static scheduling, the number of nodes, the number of edges,
the node weights, and the edge weights are assumed to be known before program execution. The
weight of an edge between two nodes assigned to the same processing element (PE) is assumed to be zero.
approaches that can be employed in static scheduling. In the classical approach [13], also called
list scheduling, the basic idea is to make a priority list of nodes, and then assign these nodes
one by one to PEs. In the scheduling process, the node with the highest priority is chosen for
scheduling. The PE that allows the earliest start time is selected to accommodate this node.
Most of the reported scheduling algorithms are based on this concept of employing variations in
the priority assignment methods, such as HLF (Highest level First), LP (Longest Path), LPT
(Longest Processing Time) and CP (Critical Path) [1, 24, 15]. In the following we review some
of contemporary static scheduling algorithms, including MCP, DSC, DLS, and CPN methods.
The Modified Critical Path (MCP) algorithm is based on the as-late-as-possible (ALAP) time
of a node [24]. The ALAP time is defined as T_ALAP(n_i) = T_critical - level(n_i), where T_critical is the
length of the critical path and level(n_i) is the length of the longest path from node n_i to an exit
node, including node n_i [5]. The MCP algorithm was designed to schedule a DAG on a bounded
number of PEs. It sorts the node list in increasing ALAP order. The first node in the list
is scheduled to the PE that allows the earliest start time, considering idle time slots. Then the
node is deleted from the list, and this operation repeats until the list is empty.
The Dominant Sequence Clustering (DSC) algorithm is designed around an attribute of a
task graph called the dominant sequence (DS) [25]. A DS is defined, for a partially scheduled task
graph, as the path with the maximum sum of communication costs and computation costs in the
graph. Nodes on the DS are considered relatively more important than others. The ready
node with the highest priority is scheduled first. Then the priorities of the child nodes of
the scheduled node are updated, and this operation repeats until all nodes are scheduled. A
dynamic cost is used to quickly determine the critical path length; this idea has been incorporated
into our TASK algorithm to reduce its complexity.
The Dynamic Level Scheduling (DLS) algorithm determines node priorities by assigning an
attribute called the dynamic level (DL) to each node at every scheduling step [20]. The DL is the
difference between the static level and the message ready time. DLS computes the DL of each ready
node on all available processors. Suppose DL(n_i, J) is the largest among all pairs of ready nodes and
available processors; then n_i is scheduled to processor J. This process repeats until all nodes are scheduled.
Recently, a new algorithm has been proposed that uses the Critical Path Nodes (CPN) [16].
This algorithm is based on the CPN-dominate priority. If the next CPN is a ready node, it is
put in the CPN-dominate list. For a non-ready CPN, its parent node n_y with the smallest ALAP
time is put in the list if all the parents of n_y are already in the list; otherwise, all the ancestor
nodes of n_y are recursively included in the list before the CPN itself. The first node
in the list is scheduled to the PE that allows the earliest start time. Then the scheduled node
is removed from the list, and this operation repeats until the list is empty. The CPN-dominate
algorithm utilizes two important properties of a DAG, the critical path and the topological order,
and it potentially generates a good schedule.
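All four algorithms reviewed above share the same list-scheduling skeleton: repeatedly pick the ready node with the best priority and place it on the PE that allows its earliest start time. The sketch below shows that skeleton (ignoring insertion into idle time slots, which MCP exploits); the priority function, e.g. the ALAP time or a level value, is supplied by the caller.

import heapq

def list_schedule(nodes, succ, w, c, priority, num_pe):
    # succ: node -> children; w: node weights; c[(u, v)]: edge weights;
    # priority: node -> key, smaller meaning more urgent (e.g. the ALAP time)
    pred = {v: [] for v in nodes}
    for u in nodes:
        for v in succ[u]:
            pred[v].append(u)
    indeg = {v: len(pred[v]) for v in nodes}
    ready = [(priority(v), v) for v in nodes if indeg[v] == 0]
    heapq.heapify(ready)
    finish, pe_of, pe_free = {}, {}, [0.0] * num_pe
    while ready:
        _, v = heapq.heappop(ready)
        def est(p):                       # earliest start time of v on PE p
            arrive = max([finish[u] + (0.0 if pe_of[u] == p else c[(u, v)])
                          for u in pred[v]], default=0.0)
            return max(arrive, pe_free[p])
        p = min(range(num_pe), key=est)
        start = est(p)
        pe_of[v], finish[v] = p, start + w[v]
        pe_free[p] = finish[v]
        for s in succ[v]:                 # newly ready children enter the list
            indeg[s] -= 1
            if indeg[s] == 0:
                heapq.heappush(ready, (priority(s), s))
    return pe_of, finish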
Although these algorithms produce relatively good schedules, they are usually not optimal,
and sometimes the generated schedule is far from optimal. In this paper, we propose a fast local
search algorithm, TASK, to improve the quality of the schedules generated by an initial scheduling
algorithm.
Local search was one of the early techniques for combinatorial optimization, and it has been applied
to solve NP-hard optimization problems [12]. The principle of local search is to refine a given
initial solution point in the solution space by searching through the neighborhood of the solution
point. Recently, a number of efficient heuristics for local search, such as conflict minimization [8, 21],
random selection/assignment [22, 23], and pre- and partial selection/assignment [22, 23], have
been developed.
There are several significant local search solutions to scheduling problems. The SAT1
algorithm was the first local search algorithm developed for the satisfiability problem, in the
late '80s [8, 9, 10, 11]. The scheduling problem is well known as a Max-Satisfiability problem,
and a local search solution to the SAT problem was applied to solve several large-scale industrial
scheduling problems.
Two basic strategies have been used in local search. The first is random search, in
which the local search direction is selected at random. If the initial solution point is improved,
the search moves to the refined solution point; otherwise, another search direction is randomly selected.
The random strategy is simple and effective for some problems, such as the n-queens problem [21].
However, it may not be efficient for other problems, such as microword length minimization [18]
and the DAG scheduling problem.
The second strategy utilizes certain criteria to find a search direction that will most likely
lead to a better solution point. In microword length minimization [18], a compatibility class
is considered only when moving some nodes from the class may reduce the cost function. This
strategy effectively reduces the search space by guiding the search toward a more promising
direction. The local search algorithm presented in this paper uses this strategy. With carefully
selected criteria, a local search for DAG scheduling becomes very efficient and the scheduling
quality can be improved significantly.
4 Random Local Search Algorithm
A number of local search algorithms for scheduling have been presented [16, 17]. A random local
search algorithm for DAG scheduling, named FAST, was given in [16] (see Figure 1). In this
algorithm, a node is randomly picked and then moved to a randomly selected PE. If the schedule
length is reduced, the move is accepted; otherwise, the node is moved back to its original PE.
Each move, successful or not, takes O(e) time to compute the schedule length, where e is the
number of edges in the graph. To reduce the complexity, a constant MAXSTEP is defined to
limit the number of steps, so that only MAXSTEP nodes are inspected. The time taken by
the algorithm is proportional to e * MAXSTEP; MAXSTEP is set to 64 in [16]. Moreover,
randomly selected nodes and PEs may not be able to significantly reduce the length of a given
schedule. Even if MAXSTEP were equal to the number of nodes, leading to a complexity of
O(en), the random search algorithm still cannot provide satisfactory performance.
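The O(e) cost per move comes from recomputing the makespan of the modified schedule, i.e., one longest-path computation over the scheduled DAG (precedence edges plus the execution order on each PE). A sketch of that computation, under the same edge-weight conventions as above, is:

def schedule_length(order_per_pe, succ, w, c, pe_of):
    # order_per_pe: PE -> nodes in execution order; zero-weight pseudo edges
    # enforce the order on each PE; cross-PE edges keep their communication cost
    edges = {}
    for u, children in succ.items():
        for v in children:
            cost = c[(u, v)] if pe_of[u] != pe_of[v] else 0.0
            edges.setdefault(u, []).append((v, cost))
    for seq in order_per_pe.values():
        for u, v in zip(seq, seq[1:]):
            edges.setdefault(u, []).append((v, 0.0))
    indeg = {u: 0 for u in w}
    for u, outs in edges.items():
        for v, _ in outs:
            indeg[v] += 1
    start = {u: 0.0 for u in w}
    queue = [u for u in w if indeg[u] == 0]
    makespan = 0.0
    while queue:                                  # longest path in topological order
        u = queue.pop()
        finish = start[u] + w[u]
        makespan = max(makespan, finish)
        for v, cost in edges.get(u, []):
            start[v] = max(start[v], finish + cost)
            indeg[v] -= 1
            if indeg[v] == 0:
                queue.append(v)
    return makespan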
do {
    pick a node n_i randomly
    pick a PE P randomly
    move n_i to PE P
    if schedule length does not improve
        move n_i back to its original PE
} while (searchstep++ < MAXSTEP)
Figure 1: A random local search algorithm, FAST.
The FAST algorithm has been modified in [17]; the modified version is shown in Figure 2. The major
improvement is that it uses a nested loop with a probabilistic jump. The total number of search steps
is MAXSTEP * MAXCOUNT, and MARGIN is used to reduce the number of steps. In [17], MAXSTEP
is set to 8, MAXCOUNT to 64, and MARGIN to 2. A parallel version of the FAST
algorithm is named FASTEST; a speed-up from 11.93 to 14.45 on 16 PEs has been obtained for
FASTEST [17].
5 Local Search with Topological Ordering for Scheduling
We propose a fast local search algorithm that utilizes topological ordering for effective DAG scheduling.
The algorithm is called TASK (Topological Assignment and Scheduling Kernel). In this
algorithm, the nodes of the DAG are inspected in a topological order. In this order, it is not
required to visit every edge to determine whether the schedule length is reduced, so the time spent
on each move can be reduced drastically and inspecting every node of a large graph becomes
feasible. Also, in this order, we can compact the given schedule systematically.
repeat
    do {
        pick a node n_i randomly
        pick a PE P randomly
        move n_i to PE P
        if schedule length does not improve
            move n_i back to its original PE and increment counter;
        otherwise set counter to 0;
    } while (searchstep++ < MAXSTEP and counter < MARGIN);
    if L(S) < L(S_best) then record S as the best schedule S_best   /* L(S): schedule length of schedule S */
    endif
    Randomly pick a node from the critical path and move it to another processor;
until (searchcount++ > MAXCOUNT);
Figure 2: The modified FAST algorithm.
For a given graph, in order to describe the TASK algorithm succinctly, several terms are defined as follows:
- tlevel(n_i) is the largest sum of communication and computation costs at the top level of node n_i,
i.e., on a path from an entry node to n_i, excluding its own weight w(n_i) [26].
- blevel(n_i) is the largest sum of communication and computation costs at the bottom level of node n_i,
i.e., on a path from n_i to an exit node [26].
- The critical path, CP, is the longest path in a DAG. The length of the critical path of a DAG
is LCP = max_{n_i in V} (tlevel(n_i) + blevel(n_i)), where V is the node set of the graph.
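Both quantities can be computed with one forward and one backward pass over a topological order of the (scheduled) DAG; the sketch below assumes the weight of an edge between two nodes on the same PE has already been set to zero.

def levels(topo, succ, w, c):
    # topo: nodes in topological order; returns tlevel (excluding the node's own
    # weight), blevel (including it), and the critical path length LCP
    tlevel = {v: 0.0 for v in topo}
    for u in topo:                                   # forward pass
        for v in succ.get(u, []):
            tlevel[v] = max(tlevel[v], tlevel[u] + w[u] + c[(u, v)])
    blevel = {v: w[v] for v in topo}
    for u in reversed(topo):                         # backward pass
        for v in succ.get(u, []):
            blevel[u] = max(blevel[u], w[u] + c[(u, v)] + blevel[v])
    lcp = max(tlevel[v] + blevel[v] for v in topo)
    return tlevel, blevel, lcp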
The TASK algorithm is applied to a previously scheduled DAG. For this purpose, a scheduled DAG
is constructed, which contains the scheduling and execution-order information [25]. To enforce the
execution order on each PE, pseudo edges (with zero weights) are inserted, incorporating the initial
schedule into the graph. The above definitions of tlevel, blevel, and the critical path
still apply to the scheduled DAG. We then define a few more terms:
- pe(n_i) is the PE on which node n_i has been scheduled.
- p(n_i) is the predecessor node that has been scheduled immediately before node n_i on PE
pe(n_i). If node n_i is the first node scheduled on the PE, p(n_i) is null.
- s(n_i) is the successor node that has been scheduled immediately after node n_i on PE pe(n_i).
If node n_i is the last node scheduled on the PE, s(n_i) is null.
procedure TASK (DAG, Schedule)
begin
    /* initialization */
    Construct a scheduled DAG;
    for node i := 0 to n - 1 do
        compute blevel(n_i);
    LCP := length of the longest path in the DAG;
    /* search */
    while there are nodes in the DAG to be inspected do
    begin
        pick a ready node n_i with the maximum L(n_i) = tlevel(n_i) + blevel(n_i);
        for each PE k do
            obtain L_k(n_i) by tentatively moving n_i to PE k;
        t := a PE with minimum L_k(n_i), i.e., L_t(n_i) = min_k L_k(n_i);
        if L_t(n_i) >= L(n_i) then          /* no improvement */
            let node n_i stay at PE pe(n_i);
        else begin                          /* there is an improvement */
            move node n_i from PE pe(n_i) to PE t;
            modify the pseudo edges in the DAG;
        end
        propagate tlevel of n_i to its children;
        mark n_i as inspected;
    end
end
Figure 3: TASK: Topological Assignment and Scheduling Kernel, a local search algorithm based
on topological ordering for fast scheduling.
A sketch of the TASK algorithm is shown in Figure 3, and a detailed description is given in
Figure 4. One characteristic of the TASK algorithm is its independence from the
algorithm used to generate the initial schedule. A node is labeled n_i, and its current
PE is pe(n_i). As long as the initial schedule is correct and pe(n_i) is available for every node n_i,
applying the local compaction algorithm guarantees that the
new schedule of the graph is better than or equal to the initial one.
The input of the algorithm is a schedule generated by any heuristic DAG scheduling
algorithm. First, a scheduled DAG is constructed; pseudo edges may be added with zero
communication time, that is, no data are transferred along them. Step 2 computes the blevel
of each node in the scheduled DAG and initializes the tlevel of the entry nodes. All edges are
marked unvisited. The variable next_k points to the next node that has not yet been inspected on PE
k; initially, no node has been inspected, so next_k points to the first node on PE k.
In Step 3, a ready node n_i with the maximum value L(n_i) = tlevel(n_i) + blevel(n_i) is selected
for inspection. Ties are broken by tlevel(n_i); for the same tlevel(n_i), ties are broken randomly. A
node is ready when all its parents have been inspected. In this way, the nodes are inspected in a
topological order. Although other topological orders, such as blevel, tlevel, or the CPN-dominate order,
could be used, tlevel + blevel has been shown to be a good indicator for the order of inspection [24, 25].
To inspect node n_i, the value L(n_i) = tlevel(n_i) + blevel(n_i) is re-calculated in Step 4 for
each PE. To conduct the recalculation for PE k, node n_i is tentatively inserted right in front
of next_k. Here, tlevel(n_i) can change if any of its parent nodes was scheduled to either PE k
or PE pe(n_i); similarly, blevel(n_i) can change if any of its child nodes was initially scheduled
to either PE k or PE pe(n_i). Because the tlevels of its parent nodes are available and the blevels
of its child nodes are unchanged, the value of L(n_i) on every PE can be computed easily. These
values indicate the degree of improvement achievable by a local move. With the new L(n_i) values
recalculated for every PE, node n_i is then moved to the PE that allows the minimum value of L(n_i).
If node n_i has been moved to a new PE t, the corresponding pseudo edges are modified in Step 5.
The tlevel of n_i is propagated to its children, so that when a node becomes ready its tlevel can be
computed. This process continues until every node has been inspected.
The TASK algorithm satisfies the following properties.
Theorem 1. The critical path length LCP will not increase after each step of the TASK algorithm.
Proof: The value L(n_i) of node n_i is determined by the longest path through n_i. Assume
L(n_j) increases as a result of moving node n_i. Then n_i and n_j must lie on a common path
from an entry node to an exit node. Because L(n_j) increases, this path must be the longest
path through n_j, and it determines the value of L(n_j). If this path also determines the value of
L(n_i), then L(n_i) = L(n_j); otherwise, a longer path determines L(n_i) and L(n_i) > L(n_j). In each
step L(n_i) does not increase, so L(n_i) <= LCP and thus L(n_j) <= LCP. Since the L value of every
node is not larger than LCP, LCP will not increase. □
Step 1. Constructing a scheduled DAG:
For each node n_i that is not the last node on its PE, let n_j = s(n_i);
    if there exists no edge e_{i,j}, create a pseudo edge e_{i,j} from n_i to n_j with w(e_{i,j}) = 0
Step 2. Initialization:
For each node n_i
    compute blevel(n_i) by considering pseudo edges
    if it is an entry node, mark n_i as ready and initialize tlevel(n_i) = 0
Mark every e_{i,j} as unvisited
For each PE k
    let next_k point to the first node on the PE
Step 3. Selection:
Pick the ready node n_i with the highest value of L(n_i) = tlevel(n_i) + blevel(n_i);
    ties are broken by tlevel(n_i); for the same tlevel(n_i), ties are broken randomly
Step 4. Inspection:
For each PE k, recompute L_k(n_i) by assuming n_i to be moved to PE k and inserted before next_k
Find a PE t such that L_t(n_i) = min_k L_k(n_i)
Step 5. Compaction:
if t = pe(n_i)                       /* node n_i will stay at PE t */
    let next_t = s(n_i)
else                                 /* move node n_i from PE pe(n_i) to PE t */
    let n_l = p(n_i) and n_m = s(n_i)
    delete edge e_{l,i} if it is a pseudo edge
    delete edge e_{i,m} if it is a pseudo edge
    if no edge e_{l,m} previously exists
        create a pseudo edge e_{l,m} with w(e_{l,m}) = 0 and mark it as visited
    let s(n_l) = n_m and p(n_m) = n_l
    let n_x = p(next_t) and n_y = next_t; delete edge e_{x,y} if it is a pseudo edge
    create a pseudo edge e_{x,i} if no edge e_{x,i} previously exists
    create a pseudo edge e_{i,y} if no edge e_{i,y} previously exists
    let s(n_x) = n_i, p(n_i) = n_x, s(n_i) = n_y, p(n_y) = n_i, and pe(n_i) = t
Step 6. Propagation of tlevel:
For each child node of node n_i, say n_j
    mark edge e_{i,j} as visited
    if all incoming edges of n_j are marked as visited
        mark n_j as ready and compute tlevel(n_j)
Repeat Steps 3-6 until all nodes are inspected
Figure 4: The detailed description of the TASK algorithm.
If n_i is a node on a critical path, reducing its L(n_i) value implies reducing the critical path
length of the entire graph (it may not immediately reduce the critical path length
in the case of parallel critical paths). If n_i is not a node on a critical path, reducing its L(n_i)
value does not reduce the critical path length immediately, but it increases the possibility
of a length reduction in a later step.
In the TASK algorithm, the tlevel and blevel values are reused, so that the complexity of determining
L is reduced. The following theorems explain how the topological order makes this complexity
reduction possible.
Theorem 2. If the nodes of a DAG are inspected in a topological order and each ready node is
appended after the previously inspected nodes of its PE, then the blevel of a node is invariant before it is inspected
and the tlevel of a node is invariant after it is inspected.
Proof: If node n_i has not been inspected, the topological order implies that no descendant of
n_i has been inspected. Therefore, the blevel of n_i is unchanged, since the blevels of all descendants
of n_i are unchanged. Once n_i has been inspected, the topological order implies that all ancestors
of n_i have been inspected. Because a node is always appended after the previously scheduled nodes
of its PE, the tlevel of an inspected node remains unchanged. □
Following a topological order of node inspection, we can localize the effect of edge zeroing on
the L values of the nodes that have not yet been inspected. After each move, only the tlevel of the currently
inspected node is computed, instead of the tlevels and blevels of all nodes. Therefore, the
time spent computing L values is significantly reduced.
Theorem 3. The time complexity of the TASK algorithm is O(e + np), where e is the number
of edges, n is the number of nodes, and p is the number of PEs.
Proof: The insertion of pseudo edges in the first step costs O(n). The second step spends O(e)
time to compute the blevel values. The third step costs O(n) for finding the highest L value. The
main computational cost of the algorithm is in Step 4. Computing the L value of node n_i costs
O(D(n_i)), i.e., inspecting every edge incident to n_i, where D(n_i) is the degree of node n_i. Over all n
steps, this cost is the sum of the D(n_i), which is O(e). To complete the inspection of a node, a target
PE must be selected among the p PEs, resulting in a cost of O(np) overall. Therefore, the total cost
is O(e + np). □
The TASK algorithm shares some concepts with the DSC algorithm [25]. The topological
order is used to avoid repeated calculation of the dynamic critical path, so that the complexity
can be reduced. The task selection criterion tlevel + blevel has been used in the MD [24] and
DSC algorithms; it measures the importance of a node for scheduling and has proven to be an efficient
criterion for node selection. The TASK algorithm differs from the DSC algorithm in many
respects. DSC is an algorithm that schedules a DAG onto an unbounded number of clusters,
whereas TASK is a local search algorithm that improves an existing schedule on a bounded
number of processors. Although both the DSC and TASK algorithms aim to reduce the schedule length,
DSC does so by merging clusters, whereas TASK does so by moving nodes among processors.
In DSC, the merging of clusters is based on the gain from zeroing edges between a node and
its parents. TASK goes one step further by also considering the possible gain from zeroing edges
between the node and its children, which potentially results in a better and more efficient decision.
Example 1.
Assume the DAG shown in Figure 5 has been scheduled to three PEs by a DAG scheduling
algorithm. The schedule is shown in Figure 6(a), in which three pseudo (dashed) edges have been
added to construct a scheduled DAG: one from node n 6
to node n 8
, one from node n 3
to node
, and one from node n 4
to node n 5
(not shown in Figure 6(a)). The schedule length is 14. The
blevel of each node is computed as shown in Table 1. Tables 2 and 3 trace the tlevel
values for each step. In Table 2, \ p
" indicates the node with the largest L value and is to be
inspected in the current step. In Table 3, \*" indicates the original PE and \ p
" the PE where
the node is moved to.
First, there is only one ready node, n 1
, which is a CP node. Its L value on PE 0 is L 0 (n 1
14. Then the L values on other PEs are computed: L 1 (n 1
shown in Table 3. Thus, node n 1
is moved from PE 0 to PE 2, as shown in
Figure
6(b). The LCP of the DAG is reduced to 12. In iterations 2, 3, and 4, moving nodes
, and n 2
does not reduce any L value. In iteration 5, node n 6
is moved from PE 0 to PE 1
as the L value is reduced from 12 to 11, as shown in Figure 6(c). In the following ve iterations,
nodes
do not move.
Figure 5: A DAG for Example 1.
Figure 6: An example of TASK's operations: (a) the initial schedule on three PEs, (b) after moving n_1 to PE 2, (c) after moving n_6 to PE 1 (time lines and the next_k pointers are shown for each PE).
Table 1: The initial blevel value of each node for Example 1.
Table 2: The L values of the ready nodes, used to select the node to be inspected at each iteration.
Table 3: The L values of node n_i on each PE, used to select a PE (entries of the form tlevel + blevel; "*" marks the original PE, "√" the selected PE).
6 Performance study
In this section, we present the performance results of the TASK algorithm and compare TASK
to the random local search algorithm FAST. We performed experiments using synthetic
DAGs as well as real workload generated from a Gaussian elimination program.
We use the same random graph generator as in [17]. The synthetic DAGs are randomly generated
graphs consisting of thousands of nodes. These large DAGs are used to test the scalability
and robustness of the local search algorithms. The DAGs were generated in the
following manner. Given N, the number of nodes in the DAG, we first randomly generated
the height of the DAG from a uniform distribution with mean roughly equal to √N. For
each level, we generated a random number of nodes, also selected from a uniform
distribution with mean roughly equal to √N. Then, we randomly connected the nodes from
the higher levels to the lower levels. The edge weights were also randomly generated. The sizes of
the random DAGs were varied from 1000 to 4000 nodes with an increment of 1000. Three values of the
communication-to-computation ratio (CCR) were selected: 0.1, 1, and 10. The weights of the
nodes and edges were generated randomly so that the average value of CCR corresponded to 0.1,
1, or 10. The reported performance data are averages over two hundred graphs.
We evaluated the performance of these algorithms in two respects: the schedule length generated by
the algorithm and the running time of the algorithm. Tables 4 and 5 show the comparison of the
modified FAST algorithm [17] and the TASK algorithm on 4 PEs and 16 PEs, respectively, where
"CPN" is the CPN-Dominate algorithm, "FAST" the modified FAST algorithm, and "TASK"
the TASK algorithm. The comparison is conducted for different sizes and different CCRs. The
CPN-Dominate algorithm [16] generates the initial schedules. For the schedule length, the value
in the column "CPN" is the length of the initial schedule; the value in the column "+FAST" is for
the initial scheduling plus the random local search algorithm; and the value in the column "+TASK"
is for the initial scheduling plus the TASK algorithm. The column "sd" following each schedule-length
value is its standard deviation. The columns "%" following "+FAST" and "+TASK" give the percentage
of improvement over the initial schedule. The running times of the CPN-Dominate algorithm, the
modified FAST algorithm and the TASK algorithm are also shown in the tables. It can be seen
that TASK is much more effective and faster than FAST: the search order based on the L value is
superior to the random search order. In Table 5, for CCR=10 on 16 PEs, the improvement ratio
drops; in this case, the degree of parallelism that can be exploited is already at its maximum, and
there is not much left to improve. The FAST algorithm is about two orders of magnitude slower
than TASK, partly because its number of search steps is as large as 256. The FASTEST algorithm
running on 16 PEs is faster, but still one order of magnitude slower than TASK.
Table 4: Comparison for synthetic DAGs with CPN as the initial scheduling algorithm (4 PEs). Columns: number of nodes, CCR, schedule lengths (CPN, sd, +FAST, sd, %, +TASK, sd, %) and running times in seconds (CPN, FAST, TASK).
Table 5: Comparison for synthetic DAGs with CPN as the initial scheduling algorithm (16 PEs); same columns as Table 4.
Table 6: Comparison for synthetic DAGs with DSC as the initial scheduling algorithm (4 PEs); same columns, with DSC in place of CPN.
Table 7: Comparison for synthetic DAGs with DSC as the initial scheduling algorithm (16 PEs); same columns as Table 6.
Tables 6 and 7 show the comparison with DSC [25] as the initial scheduling algorithm; the
cluster merging algorithm of [26] maps the clusters to processors. The CPN-Dominate
algorithm generates better schedules for DAGs with smaller CCR, whereas DSC is more efficient
when the CCR is large. For smaller CCR, DSC is not very good, and TASK therefore produces a large
improvement ratio. On the other hand, DSC is particularly suited to large CCR, and TASK is
unable to improve much on its result. In general, less improvement can be obtained by the
TASK algorithm for a better initial schedule, because a good schedule leaves less room for
improvement. The TASK algorithm normally provides uniformly consistent performance; that
is, the schedule produced by TASK does not depend much on the initial schedule.
We also tested the local search algorithms with DAGs generated from a real application,
Gaussian elimination with partial pivoting. The Gaussian elimination program operates on matrices,
and the matrix is partitioned by columns. The finest grain size of this column partitioning
scheme is a single column; however, this fine-grain partition generates too many nodes in the
graph. For example, the fine-grain partition of a 1k x 1k matrix generates a DAG of 525,822 nodes.
To reduce the number of nodes, a medium-grain partition is used. Table 8 lists the number of
nodes for different matrix sizes and grain sizes (numbers of columns). The CCR is between 0.1 and
0.8. These graphs are generated by Hypertool from an annotated sequential Gaussian elimination
program [24]. The comparisons of the FAST algorithm and the TASK algorithm on different
DAGs and different numbers of PEs are shown in Tables 9 and 10, where Table 9 uses CPN as
the initial scheduling algorithm and Table 10 uses DSC. In general, a clustering algorithm such as
DSC performs well when the communication in a DAG is heavy, and it therefore generates better
schedules for Gaussian elimination. TASK performs better than FAST in most cases and is much
faster than FAST.
7 Conclusion and Future Works
A local search is an effective method for solving NP-hard optimization problems, and it can be
applied to improve the quality of existing scheduling algorithms. TASK is a low-complexity, high-performance
local search algorithm for static DAG scheduling. It can quickly reduce the schedule
length produced by any DAG scheduling algorithm. By utilizing the topological order, it is much
faster than the random local search algorithm and produces schedules of much higher quality.
We have demonstrated that TASK is able to drastically reduce the schedule length produced
by well-known algorithms such as DSC and CPN. In future work, a comparison with
the best scheduling algorithms, such as MCP [24], will be conducted. A preliminary comparison
showed only a small improvement, since MCP already produces very good results.
Table 8: The number of nodes for different matrix sizes (1k x 1k, 2k x 2k) and grain sizes for Gaussian elimination; the node counts are 138, 530, 2082, 8258 for the 1k x 1k matrix and 530, 2082, 8258, 32898 for the 2k x 2k matrix.
Table 9: Comparison for Gaussian elimination with CPN as the initial scheduling algorithm. Columns: matrix size, grain size, number of PEs, schedule lengths (CPN, +FAST, %, +TASK, %) and running times in seconds (CPN, FAST, TASK).
Table 10: Comparison for Gaussian elimination with DSC as the initial scheduling algorithm; same columns as Table 9, with DSC in place of CPN.
Acknowledgments
This research was partially supported by NSF Grants CCR-9505300 and CCR-9625784, NSERC
Research Grant OGP0046423, NSERC Strategic Grant MEF0045793, and NSERC Strategic Grant
STR0167029. We would like to thank the anonymous reviewers for their constructive comments.
--R
A comparison of list scheduling for parallel processing systems.
Applications and performance analysis of a compile-time optimization approach for list scheduling algorithms on distributed memory multiprocessors
Scheduling parallel program tasks onto arbitrary target machines.
Task Scheduling in Parallel and Distributed Systems.
Computers and Intractability: A Guide to the Theory of NP-Completeness
Rinnoy Kan.
Parallel algorithms and architectures for very fast search (PhD Thesis).
How to solve Very Large-Scale Satis ability problems
Average time complexities of several local search algorithms for the satis
Local search for satis
Parallel sequencing and assembly line problems.
A comparison of multiprocessor scheduling heuristics.
Grain size determination for parallel processing.
FASTEST: A practical low-complexity algorithm for compile-time assignment of parallel programs to multiprocessors
Microword length minimization in microprogrammed controller synthesis.
Partitioning and Scheduling Parallel Programs for Multiprocessors.
A compile-time scheduling heuristic for interconnection-constrained heterogeneous processor architectures
A programming aid for message-passing systems
DSC: Scheduling parallel tasks on an unbounded number of processors.
PYRROS: Static Task Scheduling and Code Generation for Message-passing Multiprocessors
--TR
--CTR
Savina Bansal , Padam Kumar , Kuldip Singh, An Improved Duplication Strategy for Scheduling Precedence Constrained Graphs in Multiprocessor Systems, IEEE Transactions on Parallel and Distributed Systems, v.14 n.6, p.533-544, June | multiprocessors;fast local search;complexity;quality;DAG scheduling |
380864 | On optimal slicing of parallel programs. | Optimal program slicing determines for a statement S in a program π whether or not S affects a specified set of statements, given that all conditionals in π are interpreted as non-deterministic choices. Only recently, it has been shown that reachability of program points and hence also optimal slicing is undecidable for multi-threaded programs with (parameterless) procedures and synchronization [23]. Here, we sharpen this result by proving that slicing remains undecidable if synchronization is abandoned---although reachability becomes polynomial. Moreover, we show for multi-threaded programs without synchronization, that slicing stays PSPACE-hard when procedure calls are forbidden, and becomes NP-hard for loop-free programs. Since the latter two problems can be solved in PSPACE and NP, respectively, even in presence of synchronization, our new lower bounds are tight. Finally, we show that the above decidability and lower bound properties equally apply to other simple program analysis problems like copy constant propagation and true liveness of variables. This should be contrasted to the problems of strong copy constant propagation and (ordinary) liveness of variables for which polynomial algorithms have been designed [15, 14, 24]. | INTRODUCTION
Static program slicing [27] is an established program reduction
technique that has applications in program understanding,
debugging, and testing [26]. More recently, it has
also been proposed as a technique for ameliorating the state-explosion
problem when formally verifying software or hardware
[13, 10, 4, 18]. The goal of program slicing is to identify
and remove parts of the program that cannot (potentially)
influence certain value(s) at certain program point(s) of interest.
The latter is called the slicing criterion.
There is a vast amount of literature on slicing sequential
languages (see the references in Tip's survey [26]). A crucial
idea found in many variations is to perform slicing by means
of a backwards reachability analysis on a graph modeling
basic dependences between instructions. This approach has
been pioneered by Ottenstein and Ottenstein [21], who proposed
to use a structure called the PDG (Program Dependence
Graph). A PDG captures two kinds of dependences, data dependences
and control dependences. Intuitively, a statement
S is data dependent on another statement T if T updates a
variable that can be referenced by S. For example, if S is
x := e and T is y := f, then S is data dependent on T if
y appears in e and there is a path from T to S in the program
on which no statement updates y. Control dependence
captures which guards (of branching statements or loops)
may determine whether a statement is executed or not. Its
formal definition can be found, e.g., in [26].
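On such a dependence graph, computing a slice amounts to backwards reachability from the criterion. A sketch of that step (assuming the data and control dependence edges have already been built, in whatever representation) is:

def slice_of(criterion, dep_pred):
    # dep_pred: statement -> statements it is data or control dependent on
    result, stack = set(criterion), list(criterion)
    while stack:
        s = stack.pop()
        for t in dep_pred.get(s, []):
            if t not in result:           # reached a new statement of the slice
                result.add(t)
                stack.append(t)
    return result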
The first to consider static slicing of concurrent languages
was J. Cheng [3]. In recent years the interest in this
problem has increased due to the proliferation of concurrent
languages. There has been work in connection with slicing
JAVA-like languages [10, 28], VHDL [13, 4], and Promela
[18], the input language of the Spin model checker. All these
articles have in common that slicing is again approached as a
backwards reachability problem, but on some extended form
of PDG (called a Process Dependence Net [3], a Multithreaded
Dependence Graph [28], etc.). These structures model further
dependences, besides data and control dependences, that
may arise in concurrent programs of the considered kind.
One such dependence is interference dependence [17, 10]. A
statement S is interference dependent on a statement T in
another thread if the two threads may run in parallel and
there is a variable updated by T and referenced by S. This
captures the situation that, in a parallel execution of the two
threads, S may be executed after T in such a way that the
shared variable is not overwritten in between. Interference
dependence may be interpreted as a kind of data dependence
arising from interleaved execution. Other kinds of dependences
represent the data flow induced by message passing
and the control flow induced by synchronization operations.
A program slicing algorithm must be sound : it must not
slice away parts of the program that a#ect the given slicing
criterion. Ideally, a slicer should remove as much of the program
as possible without sacrificing soundness. Weiser [27]
showed already that the problem of determining whether or
not a slice is statement-minimal is undecidable [26, p. 7].
The problem is that it is undecidable whether a condition
found in the program may be true (or false) on some execution
path. Dataflow analysis in general su#ers from this
problem and the common remedy is to ignore conditions altogether
when defining feasible paths. In other words, conditional
branching is interpreted as non-deterministic branch-
ing, a point of view adopted in this paper. We call a slicer
optimal if it determines a statement-minimal slice under this
abstraction.
In the sequential, intraprocedural case (i.e. in single proce-
dures), PDG-based slicing is e#cient and optimal. Optimality
can also be achieved in the sequential, interprocedural
case by solving a context-free reachability problem on the
System Dependency Graph (SDG) of the program in question
[11]. This analysis can be done in polynomial time [26].
For concurrent languages with procedures and synchronization
primitives even reachability is undecidable by a recent
result of Ramalingam [23]. This implies that also optimal
slicing cannot be decidable. In this paper, we consider optimal
slicing for concurrent languages but drop the facility of
synchronization. As a consequence, reachability as well as
reverse reachability become decidable-even polynomial [5,
6, 24]. Our new result is that optimal slicing remains unde-
cidable. We refine this new undecidability result by proving
optimal slicing to be PSPACE-hard in case that there are
no procedure calls, and still NP-hard if also loops are aban-
doned. The latter two lower complexity bounds are optimal,
as they match the corresponding upper bounds.
We conclude that all e#cient slicing algorithms for concurrent
languages are doomed to be sub-optimal (unless
P=PSPACE). Our results are shown under very weak assumptions
on the concurrent language. Intuitively, they exploit
a weakness of interference dependence only. As no
synchronization properties are exploited, our results point
to a more fundamental limitation for slicing concurrent languages
than Ramalingam's and hence are applicable to a
much wider range of concurrency scenarios.
Finally, we consider related program analysis problems,
copy constant propagation and true liveness of variables,
and exhibit similar undecidability and complexity results as
for slicing thereby strengthening recent results [20]. In a certain
sense, this comes as a surprise, as only slightly simpler
analysis questions, namely, strong copy constant propagation
and (ordinary) liveness of variables can be optimally
solved in polynomial time [15, 14, 24].
2. A MOTIVATING EXAMPLE
Before we turn to the technical results, let us discuss a
small example that illustrates that backwards reachability
in the dependence graph can give sub-optimal results when
join
b := a
a
join
b := a
a
(a) CFG-like representation (b) Data and interference dependences
Figure
1: An illustrative example.
slicing parallel programs. Consider the program
a :=
In Fig. 1 (a) a control flow graph-like representation of the
program is shown and in (b) the data and interference de-
pendences. We are interested in slicing w.r.t. variable c at
the write instruction. (We always use write instructions in
this paper to mark the slicing criterion clearly and conve-
niently; this is the only purpose of write instructions here).
Clearly, the instruction a := 1 is backwards reachable in the
dependency graph. But there is no execution of the program
that realizes all dependences in this path and therefore an
optimal slicer must remove a := 1. In order to see this consider
that in an execution b := 0 must be executed either
before or after c := b in the parallel thread. If it is executed
before c := b then it kills the propagation from b := a
to c := b. If it is executed after c := b then the subsequent
statement c := 0 kills the propagation from c := b to
write(c). Our undecidability and hardness results exploit
that propagation can be prohibited in this way by means
of re-initializations. Krinke [17] also mentions that 'inter-
ference dependence is not transitive' and gives an example
that is, however, of a less subtle nature than our example.
He, too, does not consider synchronization operations and
presents an optimal algorithm for the intraprocedural parallel
case. His algorithm is worst-case exponential but he gives
no hardness proof. Our PSPACE-hardness result explains-
by all what we believe about PSPACE-hardness-why he
could not find a polynomial algorithm.
3. PARALLEL PROGRAMS
We consider a prototypic language with shared memory,
atomic assignments and fork/join parallelism. Only assignments
of a very simple form are needed: x := k where k is
either a constant or a variable.
A procedural parallel program comprises a finite set Proc
of procedure names containing a distinguished name Main.
Each procedure name P is associated with a statement #P ,
the corresponding procedure body, constructed according to
the following grammar, in which Q ranges over Proc\{Main}
and x over some given finite set of variables:
We use the syntax procedure P ; #P end to indicate the
association of procedure bodies to procedure names. Note
that procedures do not have parameters.
The specific nature of constants and the domain in which
they are interpreted is immaterial; we only need that 0 and
are two constants representing di#erent values, which-by
abuse of notation-are denoted by 0 and 1 too. In other
words we only need Boolean variables. The atomic statements
of the language are assignment statements x := e that
assign the current value of e to variable x, 'do-nothing' statements
skip, and write statements. Write statements signify
the slicing criterion. A statement of the form Q denotes a
call of procedure Q. The operator ; denotes sequential composition
and # parallel composition. The operator # represents
non-deterministic branching and loop # end stands
for a loop that iterates # an indefinite number of times.
Such construct are chosen in accordance with the common
abstraction from conditions mentioned in the introduction.
We apply the non-deterministic choice operator also to finite
sets of statements; #1 , . , #n} denotes #1 #n . The
ambiguity inherent in this notation is harmless because # is
commutative, associative, and idempotent semantically.
Note that there are no synchronization operations in the
language. The synchronization of start and termination inherent
in fork- and join-parallelism is also not essential for
our results; see Section 7.
Parallelism is understood in an interleaving fashion; assignments
and write statements are assumed to be atomic.
A run of a program is a maximal sequence of atomic statements
that may be executed in this order in an execution
of the program. The program
x, for example, has the three runs #x := 1, x := y, y := x#,
#x := 1, y := x, x := y#, and #y := x, x := 1, x := y#. We denote
the set of runs of program # by Runs(#).
4. INTERPROCEDURAL SLICING
In the remainder of this paper we adopt the following definition
of the (optimal) slicing problem as a decision problem.
An instance comprises a (non-deterministic, parallel) program
#, a slicing criterion C (given by the write-instructions
in the program) and a statement S in #. The problem is to
decide whether S belongs to the optimal slice of # with respect
to C. The slicing problem is parameterized by the
class of programs considered.
Theorem 1. Parallel interprocedural slicing is undecidable
It is well-known that the termination problem for two-
counter machines is undecidable [19]. In the remainder of
this section, we reduce this problem to an interprocedural
slicing problem thereby proving Theorem 1.
4.1 Two-Counter Machines
A two-counter machine has two counter variables c0 and
c1 that can be incremented, decremented, and tested against
zero. It is common to use a combined decrement- and test-
instruction in order to avoid complications with decrementing
a zero counter. The basic idea of our reduction is to
represent the values of the counters by the stack height of
two threads of procedures running in parallel. Incrementing
a counter is represented by calling another procedure in the
corresponding thread, decrementing by returning from the
current procedure, and the test against zero by using di#er-
ent procedures at the first and the other stack levels that
represent the possible moves for zero and non-zero counters,
respectively. It simplifies the argumentation if computation
steps involving the two counters alternate. This can always
be enforced by adding skip-instructions that do nothing except
of transferring control.
Formally, we use the following model. A two-counter machine
M comprises a finite set of (control) states S. S
is partitioned into two sets
{q1 , . , qm}; moves involving counter c0 start from P and
moves involving counter c1 from Q. Execution commences
at a distinguished start state which, w.l.o.G., is p1 . There
is also a distinguished final state, w.l.o.G. pn , at which execution
terminates. Each state s # S except of the final
state pn is associated with an instruction I(s) taken from
the following selection:
. c i :=
. if c
or
. goto s # (skip),
if s # Q. Note that this condition captures that moves
alternate.
Execution of a two-counter machine M is represented by
a transition relation #M on configurations #s, x0 , x1# that
consist of a current state s # S and current values x0 #
0 and x1 # 0 of the counters. Configurations with
pn are called final configurations. We have #s, x0 , x1#M
only if one of the following conditions is
valid for
.
x1-i .
.
.
.
Thus, each non-final configuration has a unique successor
configuration. We denote the reflexive transitive closure of
#M by # M and omit the subscript M if it is clear from
context.
Execution of a two-counter machine commences at the
start state with the counters initialized by zero, i.e. in the
configuration #p1 , 0, 0#. The two-counter machine terminates
if it ever reaches the final state, i.e. if #p1 , 0, 0#
#pn , x0 , x1 # for some x0 , x1 . As far as the halting behavior
is concerned we can assume without loss of generality that
both counters are zero upon termination. This can be ensured
by adding two loops at the final state that iteratively
procedure
loop
else . } #
procedure
loop
else . goto q l }
procedure KillAllP ;
Figure
2: Definition of P0 and P #=0 .
decrement the counters until they become zero. Obviously,
this modification preserves the termination behavior of the
two-counter machine. Note that for the modified machine
the conditions "#p1 , 0, 0#pn , x0 , x1# for some x0 , x1 "
and "#p1 , 0, 0#pn , 0, 0#" are equivalent. We assume in
the following that such loops have been added to the given
machine.
4.2 Constructing a Program
From a two-counter machine as above we construct a parallel
program, #M . For each state pk # P the program uses
a variable xk and for each state q l # Q a variable y l . Intu-
itively, xk holds the value 1 in an execution of the program
this execution corresponds to a run of the two-counter
machine reaching state pk , and similarly for the y l .
The main procedure of #M reads as follows:
procedure Main;
procedure Init ;
We will consider slicing with respect to variable xn at the
write-instruction (slicing criterion). The construction is
done such that the initialization x1 := 1 belongs to the optimal
slice if and only if M terminates. This shows Theorem 1.
The goal of the construction can also be reformulated as follows
because the initialization x1 := 1 is the only occurrence
of the constant 1 in the program and all other assignment
statement only copy values or initialize variables by 0.
terminates if and only if
xn may hold 1 at the write-statement. (1)
The initialization of all variables except x1 by 0 reflects that
p1 is the initial state. For each of the two counters the
program uses two procedures, P0 and P #=0 for counter c0
procedure
loop
l else . } #
procedure
loop
else . goto p l }
procedure
Figure
3: Definition of Q0 and Q #=0 .
and Q0 and Q #=0 for counter c1 . Their definition can be
found in Fig. 2 and 3. We describe P0 and P #=0 in detail in
the following, Q0 and Q #=0 are completely analogous.
Intuitively, P0 and P #=0 mirror transitions of M induced by
counter c0 being =0 and #=0, respectively, hence their name.
Each procedure non-deterministically guesses the next tran-
sition. Such a transition involves two things: first, a state
change and, secondly, an e#ect on the counter value. The
state change from some pk to some q l is represented by copying
xk to y l via an auxiliary variable p and re-initializing xk
by zero as part of KillAllP . The e#ect on the counter value
is represented by how we proceed:
. For transitions that do not change the counter we jump
back to the beginning of the procedure such that other
transitions with the same counter value can be simulated
subsequently. This applies to skip-transitions
and test-decrement transitions for a zero counter, i.e.
test-decrement transitions simulated in P0 .
. For incrementing transitions we call another instance
of P #=0 that simulates the transitions induced by the
incremented counter. A return from this new instance
of P #=0 means that the counter is decremented, i.e. has
the old value. We therefore jump back to the beginning
of the procedure after the return from P #=0 .
. For test-decrement transitions simulated in P #=0 , we
leave the current procedure.
This behavior is described in a structured way by means of
loops and sequential and non-deterministic composition and
is consistent with the representation of the counter value by
the number of instances of P #=0 on the stack.
The problem with achieving (1) is that a procedure may
try to 'cheat': it may execute the code representing a transition
from p i to q j although x i does not hold the value 1. If
this is a decrementing or incrementing transition the coincidence
between counter values and stack heights may then
be destroyed and the value 1 may subsequently be propagated
erroneously. Such cheating may thus invalidate the
'if' direction.
This problem is solved as follows. We ensure by appropriate
re-initialization that all variables are set to 0 if a procedure
tries to cheat. Thus, such executions cannot contribute
to the propagation of the value 1. But re-initializing a set of
variables safely is not trivial in a concurrent environment.
We have only atomic assignments to single variables avail-
able; a variable just set to 0 may well be set to another value
by instructions executed by instances of the procedures Q0
and Q #=0 running in parallel while we are initializing the
other variables. Here our assumption that moves involving
the counters alternate comes into play. Due to this assumption
all copying assignments in Q0 and Q #=0 are of the form
q := y i or x j := q (q is the analog of the auxiliary variable
p). Thus, we can safely assign 0 to the y i in P0 and P #=0 as
they are not the target of a copy instruction in Q0 or Q #=0 .
After we have done so, we can safely assign 0 to q; a copy
instruction q := y i executed by the parallel thread cannot
destroy the value 0 as all y i contain 0 already. After that
we can safely assign 0 to the x i by a similar argument. This
explains the definition of KillAll P .
4.3 Correctness of the Reduction
From the intuition underlying the definition of #M , the
'only if' direction of (1) is rather obvious: If M terminates,
i.e., if it has transitions leading from #p1 , 0, 0# to #pn , 0, 0#,
we can simulate these transitions by a propagating run of
#M . By explaining the definition of KillAllP , we justified the
'if' direction as well. A formal proof can be given along the
lines of the classic Owicki/Gries method for proving partial
correctness of parallel programs [22, 8, 1]. Although this
method is usually presented for programs without procedures
it is sound also for procedural programs. In the Ow-
icki/Gries method, programs are annotated with assertions
that represent properties valid for any execution reaching
the program point at which the assertion is written down.
This annotation is subject to certain rules that guarantee
soundness of the method.
Specifically, we prove that just before the write-instruc-
tion in #M the following assertion is valid:
Validity of this assertion implies the 'if' direction of (1). The
details of this proof are deferred to Appendix A.
Our proof should be compared to undecidability of reachability
in presence of synchronization as proved by Ramalingam
[23], and undecidability of LTL model-checking for
parallel languages (even without synchronization) as proved
by Bouajjani and Habermehl [2]. Both proofs employ two
sequential threads running in parallel. Ramalingam uses
the two recursion stacks of the threads to simulate context-free
grammar derivations of two words whose equality is enforced
by the synchronization facilities of the programming
language. Bouajjani and Habermehl use the two recursion
stacks to simulate two counters (as we do) whose joint operation
then is synchronized through the LTL formula. Thus,
both proofs rely on some kind of "external synchronization"
of the two threads - which is not available in our scenario.
Instead, our undecidability proof works with "internal syn-
chronization" which is provided implicitly by killing of the
circulating value 1 as soon as one thread deviates from the
intended synchronous behavior.
5. INTRAPROCEDURAL SLICING
The undecidability result just presented means that we
cannot expect a program slicer for parallel programs to
be optimal. We therefore must lower our expectation. In
dataflow analysis one often investigates also intraprocedural
problems. These can be viewed as problems for programs
without procedure calls. Here, we find:
Theorem 2. Parallel intraprocedural slicing is PSPACE-complete
In a fork/join parallel program without procedures, the
number of threads potentially running in parallel is bounded
by the size of the program. Therefore, every run of the program
can be simulated by a Turing machine using just a
polynomial amount of space. We conclude that the intraprocedural
optimal parallel slicing problem is in PSPACE.
It remains to show that PSPACE is also a lower bound on
the complexity of an optimal intraprocedural parallel slicer,
i.e. PSPACE-hardness. This is done by a reduction from
the Regular Expression Intersection problem. This
problem is chosen in favor of the better known intersection
problem for finite automata as we are heading for structured
programs and not for flow graphs.
An instance of Regular Expression Intersection is
given by a sequence r1 , . , rn of regular expressions over
some finite alphabet A. The problem is to decide whether
non-empty.
Lemma 1. The Regular Expression Intersection
problem is PSPACE-complete.
In fact, PSPACE-hardness of the Regular Expression
Intersection problem follows by a reduction from the acceptance
problem for linear space bounded Turing machines
along the same lines as in the corresponding proof for finite
automata [16]. The problem remains PSPACE-complete if
we consider expressions without #.
Suppose now that A = {a1 , . , ak}, and we are given n
regular expressions r1 , . , rn . In our reduction we construct
a parallel program that starts n+1 threads #0 , . , #n after
some initialization of the variables used in the program:
procedure Main;
The threads refer to variables x i,a and y i (i # {0, . , n},
a # A). Thread #0 is defined as follows.
The statement KillAll0 that is defined below ensures that all
variables except y0 are re-initialized by 0 irrespective of the
behavior of the other threads as shown below.
For induced by the regular
expression r i . It is given by #
defined by induction on r as follows.
The statement KillAll i re-initializes all variables except y i .
This statement as well as statements KillX j and KillXY j on
which its definition is based are defined as follows.
Again it is not obvious that thread # i can safely re-initialize
the variables because the other threads may arbitrarily in-
terleave. But by exploiting that only copy instructions of
the form y j := x j-1,a and x j,a := y j with j #= i are present
in the other threads this can be done by performing the
re-initializations in the order specified above. 1 Two crucial
properties are exploited for this. First, whenever a := b
is a copying assignments in a parallel thread, variable b is
re-initialized before a. Therefore, execution of a := b after
the re-initialization of b just copies the initialization value 0
from b to a but cannot destroy the initialization of a. Sec-
ondly, in all constant assignments a := k in parallel threads
such that no other values can be generated.
Altogether, the threads are constructed in such a way that
the following is valid.
only if
belongs to the optimal slice. (2)
In the following, we describe the intuition underlying the
construction and at the same time prove (2).
The threads can be considered to form a ring of processes
in which process # i has processes # i-1 as left neighbor and
# i+1 as right neighbor. Each thread # i
a word in L(r i ); thread #0 guesses some word in A # . The
special form of the threads ensures that they can propagate
the initialization value 1 for xn,a 1 if and only if all of them
agree on the guessed word and interleave the corresponding
runs in a disciplined fashion. Obviously, the latter is possible
l be a word in L(r1 ) # L(rn) and
the first letter in alphabet A. In the run induced
by w that successfully propagates the value 1, the
threads circulate the value 1 around the ring of processes in
the variables x i,c i for each letter c i of w. We call this the
propagation game in the following. At the beginning of the
j-th round, process #0 'proposes' the letter c j
by copying the value 1 from the variable xn,c j-1 to x0,c j in
which it was left by the previous round or by the initial-
ization, respectively. For technical reasons this copying is
done via the 'local' variable 2 y0 . Afterwards the processes
successively copy the value from x i-1,c j to
1 Here and in the following, addition and subtraction in subscripts
of variables and processes is understood modulo n+1.
not local to # i in a strict sense. But the
other threads do not use it as target or source of a copying
assignment; they only re-initialize it.
via their 'local' variables y i . From xn,c j
it is copied
by #0 in the next round to x0,c j+1 and so on. After the last
round (j = l) #0 finally copies the value 1 from xn,c l
to x0,a 1
and all processes terminate. Writing-by a little abuse of
(a) for the single run of # i (a) and #0 (a, b) for
the single run of y0 := xn,a ; KillAll 0 ; x 0,b := y0 , we can
summarize above discussion by saying that
is a run of #0 #n that witnesses that the initialization
of xn,a 1 belongs to the optimal slice. This implies the 'only
if' direction of (2).
Next we show that the construction of the threads ensures
that runs that do not follow the propagation game cannot
propagate value 1 to the write-instruction. In particular, if
propagating run exists, which
implies the 'if' direction of (2).
Note first that all runs of # i are composed of pieces of the
(a) and all runs of #0 of pieces of the form #0(a, b)
which is easily shown by induction. A run can now deviate
from the propagation game in two ways. First, it can follow
the rules but terminate in the middle of a round:
Such a run does not propagate the value 1 to the write-
instruction as KillAll i in # i (cm ) re-initializes x0,a 1 .
Secondly, a run might cease following the rules of the
propagation game after some initial (possibly empty) part.
Consider then the first code piece # i (a) or #0(a, b) that is
started in negligence of the propagation game rules. It is
not hard to see that the first statement in this code piece,
respectively, then sets the local
variable y i or y0 to zero. The reason is that the propagation
game ensures that variable x i-1,a or xn,a holds 0 unless
the next statement to be executed according to the rules of
the propagation game comes from # i (a) or some #0(a, b), re-
spectively. The subsequent statement KillAll i or KillAll 0 then
irrevocably re-initializes all the other variables irrespective
of the behavior of the other threads as we have shown above.
Thus such a run also cannot propagate the value 1 to the
write-instruction.
An Owicki/Gries style proof that confirms this fact is contained
in the full paper.
6. SLICING LOOP-FREE PROGRAMS
We may lower our expectation even more, and ban in
addition to procedures also loops from the programs that
we expect to slice optimally. But even then, the problem
remains intractable, unless P=NP.
Theorem 3. Parallel intraprocedural slicing of loop-free
programs is NP-complete.
That the problem is in NP is easy to see. For each statement
in the optimal slice we can guess a run that witnesses
that the statement can a#ect the slicing criterion. This run
can involve each statement in the program at most once as
the program is loop-free. Hence its length and consequently
the time that is necessary for guessing the run is linear in
the size of the given program.
NP-hardness can be proved by specializing the construction
from Section 5 to star-free regular expressions. The
intersection problem for such expressions is NP-complete.
An alternative reduction from the well-known SAT problem
was given in [20]. In contrast to the construction of the
current paper, the reduction there relies only on propagation
along copying assignments but not on "quasi-synchro-
nization" through well-directed re-initialization of variables.
However, this technique does not seem to generalize to the
general intraprocedural and the interprocedural case.
7. EXTENSIONS
7.1 Beyond Fork/Join Parallelism
A weak form of synchronization is inherent in the fork/join
parallelism used in this paper as start and termination of
threads is synchronized. The hardness results in this paper,
however, are not restricted to such settings but can also be
shown without assuming synchronous start and termination.
Therefore, they also apply to languages like JAVA.
The PSPACE-hardness proof in Section 5, for instance,
can be modified as follows. Let c, d be two new distinct
letters and defined as # i
and the initialization and the final write-instruction is moved
to thread #0 . More specifically, #0 is redefined as follows:
loop
(Of course the statements KillX i have to re-initialize also
the new variables x i,c and x i,d .) Essentially this modification
amounts to requiring that the propagation game is
played with a first round for letter c-this ensures a quasi-synchronous
start of the threads-and a final round for letter
d-this ensures a quasi-synchronous termination. Thus,
only if
belongs to the optimal slice of #0 #n .
Similar modifications work for the reductions in Section 4
and 6.
7.2 Further Dataflow Analysis Problems
Our techniques here can be used to obtain similar results
also for other optimal program analysis problems, in par-
ticular, the detection of truly life variables and copy constants
thereby strengthening recent complexity results for
these problems [20].
A variable x is live at a program point p if there is a
run from p to the end of the program on which x is used
before it is overwritten. By referring to [9], Horwitz et. al.
[12] define a variable x as truly live at a program point p if
there is a run from p to the end of the program on which
x is used in a truly life context before being defined, where
a truly live context means: in a predicate, or in a call to a
library routine, or in an expression whose value is assigned
to a truly life variable.
Thus, true liveness can be seen as a refinement of the ordinary
liveness property. For the programs considered in
this paper, the variable initialized in the crucial initialization
statement is truly live at that program point if and only
if that statement belongs to the optimal slice. Therefore, the
lower bounds provided in Theorem 1, 2 and 3 immediately
translate to corresponding bounds also for the truly live variable
problem. Since the upper bounds PSPACE and NP for
intraprocedural and loop-free intraprocedural programs also
can be easily verified, we obtain the same complexity characterizations
as in Theorem 2 and 3. Indeed, these results
are in sharp contrast to the detection of ordinary liveness of
a variable at a program point which has been shown to be
solvable even in polynomial time [15, 5, 24].
Constant propagation is a standard analysis in compil-
ers. It aims at detecting expressions that are guaranteed
to evaluate to the same value in any run of the program,
information that can be exploited e.g. for expression simplification
or branch elimination. Copy constant detection [7,
pp. 660] is a particularly simple variant of this problem in
sequential programs. In this problem only assignment statements
of the simple forms x := c (constant assignment) and
x := y (copying assignment), where c is a constant and x, y
are variables, are considered, a restriction obeyed by all programs
in this paper. Here, we obtain:
Theorem 4. 1. The interprocedural copy constant detection
problem is undecidable for parallel programs.
2. The intraprocedural copy constant detection problem is
PSPACE-complete for parallel programs.
3. The intraprocedural copy constant detection problem is
co-NP-complete for loop-free parallel programs.
Only a small modification is necessary to apply the reductions
in this paper to copy constant detection in parallel
programs: the statement z := 0 # skip must be added just
before each write-statement, where z is the written variable.
Obviously, this statement prohibits z from being a copy constant
of value 1 at the write statement. After this modification
z is a copy constant at the write statement (necessarily
of value 0) i# the write-statement cannot output the value
1. The latter is the case i# the crucial initialization statement
in question does not belong to the optimal slice. This
proves the lower bounds in the above theorem. The upper
bounds are easily achieved by non-deterministic algorithms
that guess paths that witness non-constancy.
Theorem 4 essentially states that optimal detection of
copy constants in parallel programs is intractable. This result
should be contrasted to the detection problem for strong
copy constants. Strong copy constants di#er from (full) copy
constants in that only constant assignments are taken into
account by the analysis. In particular, each variable that is
a strong copy constant at a program point p is also a copy
constant. The detection of strong copy constants turns out
to be a much simpler problem as it can be solved in polynomial
time [14, 24].
8. CONCLUSION
In this paper we have studied the complexity of synchro-
nization-independent program slicing and related dataflow
problems for parallel languages. By means of a reduction
from the halting problem for two-counter machines, we have
shown that the interprocedural problem is undecidable. If
we consider programs without procedure calls (intraproce-
dural problem) the slicing problem becomes decidable but
is still intractable. More specifically, we have shown it to be
PSPACE-hard by means of a reduction from the intersection
problem for regular expressions. Finally, even if we restrict
attention to parallel straight-line programs, the problem remains
NP-hard. These lower bounds are tight as matching
upper bounds are easy to establish.
Previous complexity and undecidability results for data-flow
problems for concurrent languages [25, 23] exploit in
an essential way synchronization primitives of the considered
languages. In contrast our results hold independently of any
synchronization. They only exploit interleaving of atomic
statements and are thus applicable to a much wider class of
concurrent languages.
9.
--R
Verification of Sequential and Concurrent Programs.
Constrained properties
Slicing concurrent programs-a graph-theoretical approach
Program slicing for VHDL.
An Automata-theoretic Approach to Interprocedural Data-flow Analysis
Crafting a Compiler.
Program Verification.
Invariance of approximative semantics with respect to program transformations.
Interprocedural slicing using dependence graphs.
Demand interprocedural dataflow analysis.
Program slicing on VHDL descriptions and its applications.
Parallel constant propagation.
Lower bounds for natural proof systems.
Static slicing of threaded programs.
Issues in slicing PROMELA and its applications to model checking
Computation: Finite and Infinite Machines.
The program dependence graph in a software development environment.
An axiomatic proof technique for parallel programs.
Complexity of analyzing the synchronization structure of concurrent programs.
A survey of program slicing techniques.
Program slicing.
Slicing concurrent Java programs.
--TR
Crafting a compiler
Interprocedural slicing using dependence graphs
Verification of sequential and concurrent programs (2nd ed.)
Static slicing of threaded programs
Efficient algorithms for pre* and post* on interprocedural parallel flow graphs
Context-sensitive synchronization-sensitive analysis is undecidable
Program Verification
Constraint-Based Inter-Procedural Analysis of Parallel Programs
The Complexity of Copy Constant Detection in Parallel Programs
Parallel Constant Propagation
Constrained Properties, Semilinear Systems, and Petri Nets
An Automata-Theoretic Approach to Interprocedural Data-Flow Analysis
Slicing Concurrent Programs - A Graph-Theoretical Approach
A Formal Study of Slicing for Multi-threaded Programs with JVM Concurrency Primitives
Invariance of Approximate Semantics with Respect to Program Transformations
The program dependence graph in a software development environment
Slicing Concurrent Java Programs
--CTR
Jens Krinke, Context-sensitive slicing of concurrent programs, ACM SIGSOFT Software Engineering Notes, v.28 n.5, September
Markus Mller-Olm, Precise interprocedural dependence analysis of parallel programs, Theoretical Computer Science, v.311 n.1-3, p.325-388, 23 January 2004
Javier Esparza, Grammars as processes, Formal and natural computing, Springer-Verlag New York, Inc., New York, NY, 2002
Mangala Gowri Nanda , S. Ramesh, Interprocedural slicing of multithreaded programs with applications to Java, ACM Transactions on Programming Languages and Systems (TOPLAS), v.28 n.6, p.1088-1144, November 2006
Ingo Brckner , Bjrn Metzler , Heike Wehrheim, Optimizing slicing of formal specifications by deductive verification, Nordic Journal of Computing, v.13 n.1, p.22-45, June 2006
Hon F. Li , Juergen Rilling , Dhrubajyoti Goswami, Granularity-Driven Dynamic Predicate Slicing Algorithms for Message Passing Systems, Automated Software Engineering, v.11 n.1, p.63-89, January 2004
Baowen Xu , Ju Qian , Xiaofang Zhang , Zhongqiang Wu , Lin Chen, A brief survey of program slicing, ACM SIGSOFT Software Engineering Notes, v.30 n.2, March 2005 | parallel programs;undecidability;complexity;interprocedural analysis;slicing |
381455 | A polynomial-time approximation scheme for base station positioning in UMTS networks. | We consider the following optimization problem for UMTS networks: For a specified teletraffic demand and possible base station locations, choose positions for base stations such that the construction costs are below a given limit, as much teletraffic as possible is supplied, the ongoing costs are minimal, and the intra-cell interference in the range of each base station is low. We prove that for a particular specification of teletraffic (the so called demand node concept), this problem has a polynomial-time approximation scheme, but cannot have a fully polynomial-time approximation scheme unless | Introduction
1.1 CDMA Networks
During the planning stage for a new cellular network, one of the main tasks for a provider is
to decide where to place the base stations (i.e., the antennas). In order to accomplish this, data
about the expected teletrac demand is needed. A widely accepted way of describing phone trac
quantitatively is by providing so called demand node maps [Tut98]. A demand node represents
the center of an area with a certain teletrac demand. Each node stands for the same amount
of trac load; for the combinatorial properties of the problems we are going to discuss, it makes
no dierence if we simply think of one demand node as one user with a mobile phone. Thus,
densely populated areas will lead a large number of demand nodes while rural areas will lead to
a sparse distribution. Demand node maps are produced in a canonical way from given stochastic
data (a standardization for this is currently discussed by the International Telecommunication
Union [ITU99]).
When a base station is built, its signals reach all users within a certain range: We will say
that the corresponding demand nodes are supplied. For a base station i we will denote the set
of all supplied demand nodes by N i . For older network architectures, building one base station
could have an eect on other base stations. For example, neighbored base stations had to use
frequencies that are suciently far apart (frequency assignment problem). Also, interferences
between dierent base stations had to be considered; it could well be that a demand node is
located near to a base station i, nevertheless it is not supplied (i.e., not in N i ) because it is
disturbed by too strong signals from other stations.
Newer network architectures, as already existing in the United States (the IS-95 network) and
as currently developed in Europe (UMTS networks), rely on the so called code division multiple
access technology (CDMA), see [Rap96, Sect. 8.4.1 and Sect. 10.4]. This means that signals, both
from a base station and from a mobile station, are multiplied by a very large bandwidth signal
? Part of this work has been supported by NORTEL External Research.
called the spreading signal, a certain kind of pseudo-noise code sequence. All mobile users and all
base stations have their own pseudorandom codewords which are approximately orthogonal to all
other codewords. This implies that users in such a CDMA system use the same carrier frequency
and may transmit simultaneously. This leads to very high transfer rates. (In contrast to this, time
slots were reserved for each user in older time division multiple access networks.)
For the network planning stage, this implies the following: Dierent base stations can be considered
as independent from each other, because there are no frequency con
icts and no problems
with interference. On the other hand, the network has a soft capacity limit. Increasing the number
of mobile users supplied by a base station raises the noise
oor within the cell of the station.
While this will lead eventually even to technical limits (we might run out of dierent codewords),
it already much earlier causes a loss with respect to the quality of the transmitted signals.
The above considerations lead to some objectives that have to be aimed at by a provider
when planning a cellular network. We assume the provider has a demand node map of her area,
and furthermore, we assume that she identied possible locations for base stations (e.g., church
towers if the congregation agrees, roofs of industrial buildings if the company gets special phone
discounts, roofs of public buildings, etc. The provider now has to choose a subset of the possible
locations and built base stations there. The choice will be determined by the following:
1. To reach a high coverage of the area, as many of the given demand nodes as possible have
to be supplied. For each covered node, a certain gain will be obtained.
2. Usually, there is a certain maximal budget that the provider is allowed to spend during the
construction.
3. Built base stations have to be supported; hence they will cause ongoing costs.
4. Since we have no interference to take into account, it will cause no problem if a node
is covered by more than one station. (To the contrary, this is sometimes even a wanted eect,
because it allows a so called soft hando between stations as specied in the IS-95 standard
[Rap96, p. 407]; we do not take this into account here.) However, within the cell of each base
station, the background noise will raise if the number of supplied users raises. Higher noise will
lead to lower customer satisfaction and, hence, cause a certain loss.
Summing up all these gains and losses, a certain prot will be reached by the provider. The
goal is to maximize this prot.
1.2 Formal Denition of the Problem
Next, we present a formal denition of the problem described in Subsect. 1.1. Unless stated
otherwise we stipulate that all variables are considered as natural numbers.
Let B denote a set of possible base station locations, and let N denote a set of demand nodes.
In practical circumstances, the following requirements are certainly given:
(R1) The demand nodes and the possible base station locations are located in the Euclidean
plane.
(R2) Every base station has a maximal broadcast range r max (i). For every actually built
base station i we have to determine an actual broadcast range r(i) r max (i). The sets N i
of those demand nodes that are supplied by i are then given by N
dist(i;
.
r
. In practice, D is some constant (currently, approximately
km). Additionally, in practice it is clear that every base station has a certain spatial
extension, and this implies that there is a constant d such that the distance between two
built base stations is at least d.
In the formal denition that follows, we consider even a slightly more general case: Instead of
requiring that D and d are constant, we only assume the following:
(R3') The maximal number of actually built base stations in any circular area with radius D is
bounded by some constant C.
It is clear that if D and d are constant, then we can determine our required constant C, but
not necessarily vice versa. Hence, if we assume (R1), (R2), and (R3'), this subsumes (R3), and
hence we certainly cover all practical cases.
The problem BSPC is now given as follows.
Base Station Positioning in CDMA Networks (BSPC):
Given are a set N of demand nodes and a set B of possible base station locations (both located
in the Euclidean plane), a budget b, a gain e for supplying a demand node, construction
costs c(i), ongoing costs o(i) for supporting a station at location i 2 B, a cost h(i;
the background noise that occurs when in the range of base station i a number of k demand
nodes is supplied, and a maximal broadcast range r max (i) for every base station i 2 B. Let
Bg. We require that the maximal number of actually built base
stations in any circular area with radius D is bounded by the constant C. The aim is to nd
a subset S B of base stations to be built and a broadcast range r(i) r max (i) for every
such that the overall construction costs
i2S c(i) are at most b and the following prot
P is maximal:
number of supplied demand nodes
z }| {
ongoing costs
z }| {
dist(i;
| {z }
cost of background noise
For the following complexity considerations, we dene the size of an instance as the length of
an encoding of all the above components, where numbers are encoded in binary.
1.3 Relation to Previous Work
Design problems for cellular networks have been examined with respect to frequency assignment
problems (see, e.g., [KvHK99]) and with respect to power range assignment (see [CPS00]).
However, the problem of positioning base stations in an optimal way has not been considered
in depth from a complexity-theoretic point of view. While simulation and planning tools (see
[TLT97]) and solutions based on linear programming (see, e.g., [MN99]) are known, not much has
been proven about (non-)solvability and (non-)approximability of arising combinatorial problems.
In [ESW98], optimum non-approximability results for planning problems for telecommunication
networks, where antennas are placed in balloons at a certain height, were given. In [BKK + 99],
an approximation algorithm for the problem of covering a plane area with cells while avoiding
buildings within dangerous range of a station was given. In [GRV00], three of the present authors
examined the design problem for cellular networks that use TDMA technology, e.g., GSM
networks.
However, turning to third generation cellular networks that use CDMA technology, the net-work
planning process is dierent from all the above. Compared to GSM networks, interferences
between dierent base stations no longer play an important role. On the other hand, for each
base station, the background noise within its cell has to be taken into account here. This posed
no problem for older networks. Mainly because of this point, the problem we consider here has
dierent combinatorial properties than the above mentioned problems. We present an approximation
algorithm based on the so called shifting strategy, which was introduced by Hochbaum and
Maass in [HM85]. In this technique, which is applicable to a lot of geometric problems, the plane
is divided in dierent ways, leading to dierent partitions. For each partition, an approximate
global optimum can be obtained by combining (exact) local optima. Then it is proved that for at
least one partition the obtained solution is near to the optimal solution. Our application of the
shifting strategy here is interesting, because even the solutions for particular partitions can only
be obtained in an approximate way, and we need an approximation algorithm for a knapsack-like
problem as a subroutine. Mainly because of this, the algorithm obtained here is more involved
than the one for GSM networks given in [GRV00].
1.4 Paper Organization
In the upcoming Sect. 2 we recall some basic notions. Sect. 3 contains the main technical part of the
paper. There we present an algorithm for BSPC, witnessing that this problem has a polynomial-time
approximation scheme. In Sect. 4 we prove that BSPC cannot have a fully polynomial-time
approximation scheme. Also, we examine a slightly restricted (but more practical) problem and
show that even this problem does not admit an FPTAS. Finally, Sect. 5 concludes.
Preliminaries
Let h(n) 1. We say that an algorithm A is an h(n)-approximation algorithm for a maximization
problem , if the solution produced by A for an input of size n is at least the optimal solution
divided by h(n). We say that an algorithm A is an h(n)-approximation algorithm for a minimization
problem , if the solution produced by A for an input of size n is at most the optimal
solution times h(n). In both cases, h(n) is called the approximation ratio [ACG
Informally, we will say that a problem is well-approximable if it can be approximated up to
any constant; formally: A problem admits a polynomial-time approximation scheme (PTAS
for short) if, for every rational number r > 1, there is a polynomial-time algorithm A r that
r-approximates , see [ACG We will prove that BSPC admits a PTAS (Theorem 1).
For problems that have a PTAS, the question of course remains if there is a uniform approximation
scheme, working for all ratios r with a runtime which is polynomial in the input length and
1=(r 1). This would mean that the problem has a so called fully polynomial-time approximation
scheme
An optimization problem is NP-hard, if every language L 2 NP reduces to via polynomial-time
Turing-reductions, see [ACG be a polynomial; then p denotes the problem
obtained by restricting to only those instances I for which all numbers, that occur as components
of input I, are bounded by p(jIj). An optimization problem is strongly NP-hard if there
is a polynomial p such that the problem p is already NP-hard [ACG
3 An Approximation Scheme for BSPC
Let the given set N of demand nodes in the plane be enclosed in the rectangular area A. We x an
integer l > 0, the so called shifting parameter. This number will later determine the approximation
ratio of our algorithm.
Our algorithm consists of three steps.
Step 1. Fix a division of A into horizontal and vertical strips of width D, where each strip
is left (up) closed and right (down) open. By considering groups of l consecutive horizontal and
vertical strips of width D, we obtain a partition of A into several squares of size (lD) 2 , which form
a grid of width lD; see Fig. 1. Notice that the number of such squares is possibly superpolynomial
in the input size, but in this case almost all squares do not contain any demand node. Therefore,
it suces to consider the small number (bounded by jN j) of non-empty squares, i.e., squares
containing at least one demand node. A list s of these non-empty squares can be eciently
computed, since for each demand node we can determine the square containing this demand node.
In each such square s i we delete all base station locations that are in the border-strips of width
D; thus we obtain a new square t i of the same size but with no base station locations in distance
less than D to the border; see Fig. 2.
Step 2. For each square t i and each number m b
lD
lD
lD lD
Fig. 1. A partition of the plane.
Dots represent base stations.
(we call this m a local budget) we dene a set S(i; m) of
base stations located in t i such that (i) the cost of this set
is m, i.e.,
we build the
base stations in S(i; m) we obtain the maximal prot in t i
possible with a local budget m (we refer to this maximal
prot as P (i; m)). Note that we cannot compute all the sets
m) and prots P (i; m) for 1 i n and m b, since
there are exponentially many. However, a single P (i; m)
can be computed by exhaustive search, since in each t i we
can build at most base stations; this follows by
the simple fact that we can not build more than C base
stations in any circular area with radius D, and t i can be
covered by l 2 such disks of radius D. Since the prot of
building no base station is 0, the values of P (i; m) are all
at least 0. Moreover, it is easy to see that, for each xed i, P (i; m) as a function in m is monotonic
increasing. Now we have to combine these local solutions in a nearly optimal way, which turns out
as a kind of knapsack-problem. We have to distribute the overall budget b to the single squares,
i.e., for each t i we have to nd a local budget m such that the sum of all these local budgets is at
most b and the sum of all local prots, P (i; m), is (almost) maximal.
We show that this knapsack-like problem admits a full polynomial time approximation scheme
for short). This means that on input of some r and a problem instance I, we can compute
in polynomial time in jIj + r an approximated solution with relative error 1=r. The proof is
based on an idea of [IK75] (see also [Pap94, pp. 305]), showing that the knapsack problem admits
an FPTAS.
In the following we describe the way this approximation algorithm works. Let p
n. Note that P max is an upper bound for the
obtainable prot. In the following we only consider prots that are multiples of 2 k where
blog(p max =rn)c. We will see that this is sucient to obtain a good approximative solution. Now
determine all b
(using
binary search this is possible in time polynomial in (n p max =2 k ) log b. This is polynomial in
log b, since p max =2 k 2rn and P (i; ) is a monotone increasing function). The values b i;p
give us complete information about the function P (; ) disregarding the last k bits of the prot.
Let W (j;
lD
lD
lD lD
Fig. 2. Clearing border strips by
deleting all base stations outside the
shaded squares.
for Hence the function
W (j; p) gives us the minimal construction costs we have to
invest into the rst j squares such that the obtained prot
is at least p 2 k (disregarding in each square the last k bits
of the prot). Since P max =2 k 2rn 2 we can determine all
values W (j; p) in time polynomial in n+ r using a dynamic
programming technique. More precisely let W (1;
for
. Choose the
largest
p such that W (n;
p) b. This means that we can
reach a prot
we distribute an overall investment
of W (n;
p) in a suitable way to the squares.
By we denote the local budgets that are spent
in squares in order to obtain this prot
we estimate the relative error of this solution. For this we x an optimal solution
that the sum of all
Assume the contrary, and let ^
. So the sum of the local
budgets b i;^p i
is b and the sum of the ^
i is by assumption greater than
p. This is a contradiction
to the choice of
since
was chosen to be the maximal approximate solution. This proves the
claim.
2:
The rst inequality holds since the represent an optimal solution, the second inequality
follows from the denition of b , the third because of Claim 1, and the fourth follows by
simple arithmetic. This proves Claim 2.
Hence, the approximate solution 2 k
p is at least the optimal solution minus 2 k n. Since the value
p max is a lower bound for the overall optimal solution, we conclude that the approximate solution
has a relative error of at most n 2 k =p max . Since p max =2 k rn, this error is bounded from the
above by 1=r.
Step 3. Clearly there are l 2 dierent ways of partitioning A by a grid of width D into one
of width lD, because each xed partition can be shifted down (to the right) by D, 2D,
(l 1)D before we obtain the same partition again; see Fig. 3.
Denote the resulting shift partitions by P Notice that such a partitioning can be
described by the coordinates of a single point in the plane. Therefore, we can x a partitioning
of A in polynomial time even if the area A is large. If we choose l, we obtain for every
partition P i an approximate solution p i for the knapsack problem which has a relative error 1=l.
Under all these solutions we choose one with maximal prot.
This concludes the description of the algorithm. Next
lD
lD
lD
lD
Fig. 3. A second partition (dashed
lines), obtained by moving the rst
one right by 2D and down by 3D.
we want to show that the relative error of the whole algorithm
is at most 5=l. We x an optimal solution of the given
problem instance I. Let the prot of this solution be
p and
the set of built base stations be S. Now for every base station
be the set of demand nodes supplied by i.
We assume w.l.o.g. that the sets N i are pairwise disjoint. (If
some demand node occurs in more that one of these sets, we
simply delete it arbitrarily from some of the N i in order to
achieve disjointness.) Now we assign to each i 2 S the value
dist(i;
.
If a set R of base stations is not built, then the prot lost
is at most
This holds because
{ the number of demand nodes that are no longer supplied when we do not build base stations
in R is at most
{ the ongoing costs for supporting i is exactly o(i), and
{ the cost of background noise we prevent by not building i is exactly
dist(i;
Note that each x i is non-negative, since otherwise the base stations with negative x i could be
omitted and thus a solution with higher prot could be achieved.
For each partition P_j, let R_j be the set of base stations built in the optimal solution that are
located in a border strip of that partition. We consider the sums C_j of the x_i with i ∈ R_j,
where C_j is an upper bound for the profit we lose in partition j by deleting all base stations in
the border strips.
Since for each base station i ∈ S there exist exactly 4l − 4 partitions where i is on a border
strip, each x_i occurs in exactly 4l − 4 of these C_j. Hence the sum of all C_j is equal to
(4l − 4) times the sum of all x_i, which is at most (4l − 4) p̂,
since it holds that the sum of the x_i is at most p̂. However, since there are l^2 partitions
in total, it follows, using a pigeonhole argument, that there must be one partition j_0 with
C_{j_0} ≤ (4/l) p̂. By building only the base stations of the optimal solution which are not in R_{j_0}
we will achieve a profit of at least (1 − 4/l) p̂. If we consider the new problem instance I_{j_0} in which all
base station locations in R_{j_0} are deleted, then the solution of the knapsack problem for partition P_{j_0}
is the optimal solution for I_{j_0}. We have already seen that we can solve this knapsack problem
with a relative error of at most 1/l. So using the partitioning P_{j_0} we find a solution which has an overall
profit of at least (1 − 1/l)(1 − 4/l) p̂ ≥ (1 − 5/l) p̂.
Hence the profit computed by our algorithm for partition P_{j_0} is an approximation of p̂ with
relative error 5/l. Since we have chosen the largest profit over all partitions, we obtain at least
the value (1 − 5/l) p̂.
Since l can be chosen beforehand, we see that for every approximation ratio > 1 (respectively,
for every relative error > 0) we obtain a corresponding polynomial-time algorithm A for the BSPC
problem. Hence we have proven the following theorem.
Theorem 1. The problem BSPC admits a PTAS.
Hardness Results
The geometric disk cover problem (DC) is defined as follows: Given is a set N of points in the
plane, a radius r > 0, and a budget b ∈ ℕ. The question is if there is a set B of points in the
plane, |B| ≤ b, such that every element from N is covered by a disk with radius r centered in one
of the points in B. (If a point j lies at distance exactly r from some i ∈ B, i.e., on a circle with
radius r around i, we stipulate that it is covered by a disk centered in i.) It is known that DC is
NP-complete [FPT81], even in the strong sense [HM85].
Lemma 2. For C ≥ 12, the problem BSPC is strongly NP-hard.
Proof. First, we show that BSPC is NP-hard by constructing a Turing reduction from DC.
Given is the set N of points in the plane, the radius r, and the budget b. We fix an arbitrary
triangulation T which consists of equilateral triangles with side length √3 · r. (Note that this can
be fixed by two points of the plane.) It is easy to see that the vertices of T (interpreted as centers
of disks) allow a covering of the plane by disks of radius r. Moreover, it can be shown that each
circular area with radius r (respectively, 2r) can be covered by 5 (respectively, 12) disks with
centers in T and radius r. For a given circular area with center i and radius r (respectively, 2r),
a minimal covering set T(i) ⊆ T can be determined in polynomial time.
We set B_0 to be the set of all points i in the plane such that if we draw a
circle with center i and radius D, then at least two elements from N are located on this circle.
Let B be the union of B_0 and all the covering sets T(i), and we set the range D = r.
Let us consider an optimal solution of this instance. We want to see that in any circular area
with radius D there are no more than 12 base stations. Otherwise there would exist
a circular area with center i and radius r that contains more than 12 base stations. Of course
the area that is covered by these base stations is contained in the circular area with center i and
radius 2r. So we would obtain a better solution if we replace these base stations by T(i). This
is a contradiction to our assumption and it follows that in the optimal solution in any circular
area with radius D there are no more than 12 base stations. This shows that for our
reduction the values C ≥ 12 cause no restriction to BSPC.
Now it is clear that in order to obtain maximal profit, we have to supply as many nodes in
N as possible with the given budget. Hence all nodes in N can be covered using at most b disks
of radius r iff all nodes in N can be supplied by building b base stations with range D iff the
maximal profit is |N|.
Besides those numbers already present in the given DC instance, the only non-constant numbers
occurring in a constructed BSPC instance are the coordinates of the points in B, which can
be written down with a number of bits proportional to those needed for coordinates in the input
set N, and the value b, which is certainly at most |B|, hence bounded by a polynomial in the
input length. Therefore, there is a polynomial p such that BSPC_p is NP-hard, and thus, BSPC is
strongly NP-hard. □
Theorem 3. For C ≥ 12, the problem BSPC does not admit an FPTAS.
Proof. The value of an optimal solution of an instance of BSPC_p, for p as in the proof of Lemma 2,
is at most |N|, hence polynomial in the input length. Thus, this problem does not have an FPTAS
by [ACG+99]. □
Hardness of a Restricted Problem
Let us come back to requirement (R3) in Subsect. 1.2. The definition of the problem BSPC uses
(R3'), which is slightly weaker than what is required in (R3). Therefore, the question remains if
the practically relevant BSPC problem, i.e., restricted to requirement (R3) with constant values
for d, D, admits even an FPTAS. In the reduction from disk cover, given in the proof of Lemma 2,
we were easily able to conclude that a constant C, as required in the definition of the general
BSPC problem, exists. However, the reduction does not guarantee that D and d are constant.
Hence, the above reduction does not show that the restricted BSPC problem is hard.
Nevertheless, it can be shown, using a modification of the reduction from 3SAT to DC given
in [FPT81], that the restricted BSPC problem is strongly NP-hard; hence we conclude:
Theorem 4. The problem BSPC, restricted to the case that d and D are constant and D ≥ d,
does not admit an FPTAS, unless P = NP.
Proof. (Sketch.)
Fix d, D, where D ≥ d. Given a 3-CNF formula, we first construct an instance of DC with
loops for all variables in the formula as in [FPT81, Theorem 4, pp. 134f]. Let N be the set of nodes on
these loops. The proof given in that paper shows that all nodes in N can be covered by b disks of
suitable fixed radius d iff the formula is satisfiable, where b is the value N_min determined in the proof from
[FPT81]. Moreover, a set B of possible disk centers can be determined, given the formula, such that, if it
is satisfiable, then all nodes can be covered using b disks centered at some of the points in B. A
careful analysis shows that any two points in B are at distance at least d.
Now, the reduction from 3SAT to restricted BSPC maps a given formula to the BSPC
instance given by N, B, and b as above, whose maximal achievable profit is |N|.
As in Lemma 2, in order to obtain maximal profit, we have to supply as many nodes
in N as possible with the given budget. Hence the formula is satisfiable iff all nodes in N can be supplied
by building b base stations with range r iff the maximal profit is |N|.
This proves that the restricted version of BSPC is NP-complete. That the problem is actually
strongly NP-complete follows from the following facts: Coordinates for points in B and N can be
described (as in [FPT81]) using small integers, the value of b is at most |N|, and all other numbers
r, c, e, o, and h are constant.
That restricted BSPC does not have an FPTAS follows, since even the subset of BSPC used
in the above reduction is strongly NP-hard, and the maximal profit is bounded by |N|. □
5 Conclusion
We examined the complexity of finding an optimal way to build base stations in order to supply
a specified demand of teletraffic in a CDMA network. The meaning of "optimal" here took a
number of effects into account, such as gain from supplied users, construction costs and ongoing
costs for base stations, and higher customer satisfaction if the background noise is low. We proved
that a Euclidean version of the problem, which is still good enough for practical purposes, has
a polynomial-time approximation scheme but no fully polynomial-time approximation scheme.
The sets N_i of demand nodes supplied by a base station i were for our above algorithm just the
sets of demand nodes that are located within a certain distance from station i. One might argue
that this is unrealistic; e.g., it might be that a demand node is located near to a base station,
but nevertheless it is not supplied because there is a high building in between. However, for our
approximation scheme given in Sect. 3, it is only important that we find some number D such
that every N_i is located within a circle of radius D around base station i. The N_i themselves can be
of arbitrary shape, for example taking skyscrapers or other particularities of the landscape into
account. Hence we do not rely on a Euclidean metric for station signals. Our algorithm is still
polynomial time, if for a given station i and a demand node j it can be determined in polynomial
time whether j is supplied by i.
If we look at an incremental version of the base station positioning problem, where as an
additional input we get a network of already existing base stations that we have to extend by
building base stations up to a given budget, it can be observed that a modification of our algorithm
shows that this problem also has a PTAS. The only point that changes in the algorithm in Sect. 3
is the computation of the local profits P(i, m). An interesting topic for further research could be
to examine a "build on demand" problem, where new teletraffic requests arise online, and we have
to update our network continuously.
Acknowledgement. We are very grateful to Phuoc Tran-Gia and Kenji Leibnitz for giving us a
tutorial about CDMA technology. During the discussions in these meetings, the formal definition
of the BSPC problem evolved. We also acknowledge helpful discussions with Klaus W. Wagner.
--R
Complexity and Approximation - Combinatorial Optimization Problems and Their Approximability
Station layouts in the presence of location constraints.
A compendium of NP optimization problems.
The power range assignment problem in radio networks on the plane.
Positioning guards at fixed height above a terrain - an optimum inapproximability result.
Optimal packing and covering in the plane are NP-complete
Computers and Intractability
Approximation schemes for covering and packing problems in image processing and VLSI.
Fast approximation algorithms for the knapsack and sum of subset problems.
Optimal solutions for a frequency assignment problem via tree-decomposition
Optimum positioning of base stations for cellular radio networks.
Computational Complexity.
Communications - Principles and Practice
--TR
Approximation schemes for covering and packing problems in image processing and VLSI
Analysis of a local search heuristic for facility location problems
Fast Approximation Algorithms for the Knapsack and Sum of Subset Problems
Communications
Complexity and Approximation
Station Layouts in the Presence of Location Constraints
The Power Range Assignment Problem in Radio Networks on the Plane
Optimal Solutions for Frequency Assignment Problems via Tree Decomposition
Positioning Guards at Fixed Height Above a Terrain - An Optimum Inapproximability Result
--CTR
Christian Glaer , Steffen Reith , Heribert Vollmer, The complexity of base station positioning in cellular networks, Discrete Applied Mathematics, v.148 n.1, p.1-12,
Larry Raisanen , Roger M. Whitaker , Steve Hurley, A comparison of randomized and evolutionary approaches for optimizing base station site selection, Proceedings of the 2004 ACM symposium on Applied computing, March 14-17, 2004, Nicosia, Cyprus
Roger M. Whitaker , Larry Raisanen , Steve Hurley, The infrastructure efficiency of cellular wireless networks, Computer Networks: The International Journal of Computer and Telecommunications Networking, v.48 n.6, p.941-959, 19 August 2005
Kevin P. Scheibe , Laurence W. Carstensen, Jr. , Terry R. Rakes , Loren Paul Rees, Going the last mile: a spatial decision support system for wireless broadband communications, Decision Support Systems, v.42 n.2, p.557-570, November 2006
Roger M. Whitaker , Steve Hurley, On the optimality of facility location for wireless transmission infrastructure, Computers and Industrial Engineering, v.46 n.1, p.171-191, March 2004
Larry Raisanen , Roger M. Whitaker, Comparison and evaluation of multiple objective genetic algorithms for the antenna placement problem, Mobile Networks and Applications, v.10 n.1-2, p.79-88, February 2005
Hierarchical parallel approach for GSM mobile network design, Journal of Parallel and Distributed Computing, v.66 n.2, p.274-290, February 2006
Filipe F. Mazzini , Geraldo R. Mateus , James Macgregor Smith, Lagrangean based methods for solving large-scale cellular network design problems, Wireless Networks, v.9 n.6, p.659-672, November
Michael R. Fellows, Parameterized complexity: the main ideas and connections to practical computing, Experimental algorithmics: from algorithm design to robust and efficient software, Springer-Verlag New York, Inc., New York, NY, 2002 | approximation algorithms;network design |
381497 | Encoding program executions. | Dynamic analysis is based on collecting data as the program runs. However, raw traces tend to be too voluminous and too unstructured to be used directly for visualization and understanding. We address this problem in two phases: the first phase selects subsets of the data and then compacts it, while the second phase encodes the data in an attempt to infer its structure. Our major compaction/selection techniques include gprof-style N-depth call sequences, selection based on class, compaction based on time intervals, and encoding the whole execution as a directed acyclic graph. Our structure inference techniques include run-length encoding, context-free grammar encoding, and the building of finite state automata. | KEYWORDS
Software understanding, Program tracing, Dynamic program
analysis
Software understanding requires inferring the behavior of
software systems. Static techniques do this by analyzing the
program's code, while dynamic techniques are based on running
the program, collecting data as it runs, and then analyzing
the resultant data. In the dynamic case, there are inherent
tradeoffs concerning the amount of collected data, the types
of analyses that can be performed on them, and the overhead
of collection.
We regard frameworks suitable for dynamic analysis as having
three parts. The bottom layer, closest to the program, is
a suite of tools for gathering information as a program exe-
cutes. The middle layer selects relevant portions of the data,
which it later compacts and analyzes, effectively building a
model of it. The top layer, closest to the user, displays these
models. We are in the process of building such a frame-
work. The first part is described in [17], but we also briefly
review it in section 2. The bulk of this paper discusses our
middle layer tools. We envision the top layer as having at
least the capabilities of our previous tools [15], but with enhanced
abilities for specifying meaningful visualizations on
demand, as the task arises.
The purpose of the middle layer is to build models of the collected
data. There are two reasons to build a model: first, it
is impossible to visualize the raw data, because of the volume
and complexity of it, and second, a model can be used
to automatically check properties of the traces. We want our
models to be useful in at least the following tasks:
Summarizing parts of a program execution. Such summaries
would allow us to show the execution at multiple
levels of detail, helping the user to navigate through it.
Modeling class behavior. We can track the outside calls
to methods invoked on all objects of a specific class. If
we have a suite of programs that use the class correctly,
we can use the abstractions generated by the execution
analysis as a model of how the class should be used.
Modeling library usage. We can view a whole library
as an object and track all external calls to it. This would
allow us to build a model of correct library usage and
test future users of the library against the model.
Detecting unusual events. After we have a model for
the program behavior, we should be able to find events
that do not conform to that model. Such events are often
significant in that they represent exceptional condi-
tions. For example, if a program executes one sequence
of synchronization steps almost all the time, but occasionally
executes an alternate sequence, this denotes a
potential problem.
Checking against existing models. If we already have
a model of aspects of the behavior of a system, say,
from the specifications process, then we should be able
to efficiently check whether the model produced by
the traces is conforming to the original one. We also
see here the opportunity of building models of design
patterns and then verifying that a program implements
them correctly. This is especially useful for behavioral
patterns [8], which are hard to specify otherwise [16].
Performance analysis. We want to be able to annotate
our models with performance information. This is what
existing performance tools are doing, except that they
are using a fixed and limited model. Prof is using a simple
function invocation model, and gprof [9] a digram
model.
Visualization. We want our models to be fit for further
exploration by graphical means. In a visualization en-
vironment, we leave it up to the user to separate data
from noise. Of course, what is noise for a particular
application is useful for another; therefore it is essential
that our models are rich enough to at least hint to what
the user should focus on. On the other hand, we cannot
include everything in a model, because of volume.
The work we describe in this paper provides the framework
necessary for addressing these and other problems using
trace data. The important aspects of this research are
1. A framework that separates the collection, analysis and
encoding of data. The framework provides the flexibil-
ity needed to address the wide range of problems such
as those described above.
2. Specific data selection techniques that address the
above problems.
3. Specific encoding techniques that facilitate different
types of analysis when combined with the various selection
techniques.
The overall significance of the research thus lies in the spe-
cific techniques that are used, in the ability to combine them
in various ways, and in the application of their combination
to a broad range of problems.
The remainder of the paper is organized as follows: Section
2 describes the data that our trace tools currently gener-
ate. The different data compaction and selection approaches
are described in Section 3. Section 4 describes the different
sequence encoding techniques that we utilize in these ap-
proaches. Note that each of these techniques can be used
with many of the approaches described in Section 3. Section
5 then discusses the results of these various encodings, commenting
on their degree of compaction and their accuracy.
We conclude with a discussion of the impact of this work
and what extensions we are currently planning.
The first part of our framework collects data from a running
program. We have a suite of tools that traces both binary executables
and Java programs. The binary tracers are tuned
for C and C++, but there is nothing that would prohibit them
from working with executables produced from, say, a Pascal
or Java native compiler. Table 1 summarizes the tracing
data. Our tracer for binaries, WOLF, patches the executable, inserting
hooks at the entry and exit of each routine. It also
instruments basic blocks to keep count of how many instructions
are executed per function invocation. The entry and
exit hooks are used to generate a trace file for each thread of
the program. For each call a record is emitted including the
called address, the calling address, and the first argument of
the call. The latter is used to determine the object used for
method calls, the size for memory allocation, the thread for
thread-related activities, and the synchronization object for a
synchronization call. For each return, the record includes the
address of the function that was called and the return value.
In addition, the trace files contain periodic records recording
the thread run time, the real time clock, and the number
of instructions executed by the thread. The records of each
thread are output on a separate file. We have a second tracer
under development, which is doing minimal work, emitting
only the called address and first argument for function calls,
and piggybacking the count of intervening function returns
on the next function call.
For Java programs, we have a different tool, TMON [17],
that uses the standard Java profiling interface (JVMPI) to
again produce multiple trace files, one per thread. These
files contain records for each method entry and exit, for each
object allocation and deallocation, for thread creation and
termination, and for monitor (synchronization) activity. The
trace files also contain records about thread run time and real
time.
The reason that we use one trace file per thread in both cases
is that this avoids synchronizing calls that would otherwise
not be synchronized. To get the intermingling effect back,
we use another program, TMERGE, which outputs a unified
trace file, and a dictionary file which maps identifiers (like
function addresses for binaries, or object identifiers for Java)
in the trace file to meaningful symbols.
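A toy sketch of the TMERGE step is shown below (Python). It assumes a parse_record helper that yields (timestamp, record) pairs for each per-thread file; the actual tool only has the periodic time records described above to order by, so this is an idealized view, and the helper name is an assumption for the example.

import heapq

def merge_thread_traces(thread_files, parse_record):
    # parse_record(path) is assumed to yield (timestamp, record) pairs in
    # timestamp order for one per-thread trace file.
    streams = [parse_record(path) for path in thread_files]
    # k-way merge of the per-thread streams into one interleaved trace
    for timestamp, record in heapq.merge(*streams, key=lambda item: item[0]):
        yield timestamp, record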
Even though the data we are recording is fairly high-level,
the amount of data for a large or long-running system can
be immense. The resultant trace data files contain about one
gigabyte of data for every two seconds of C/C++ execution or
ten seconds of JITed Java execution (the minimal tracer reduces
this by a factor of 2.5). This imposes certain requirements
on the efficiency of the tools that have to deal with
the files. Primarily, however, it creates the need for the compaction
and selection techniques that we describe in the next
section.
Our data compaction and selection techniques range from
lossless transformations to aggregation. Our trace data is basically
the dynamic call tree. This tree includes a node for
each routine invocation and an edge from each invocation to
all the invocations that it directly generates. The tree can be
augmented with time information (like run time, real time,
and instructions spent in each call) and memory usage information (like the size and number of
allocations done by each call).
1. Common to Java and C/C++
Function Entry: Function Id; First argument (C) / Object Id (Java)
Function Exit: Function Id; Return value
Object Allocate/Free: Object Id
Thread Start/End
Run Time: Execution time in thread
Real Time: Accumulated real time
2. C/C++ only
Lock (any pthread synchronization object) Create/Destroy/Wait/Test/Unlock: Lock Id
Memory Allocate/Free: Address; Size
Instruction Count: Instructions executed in thread
3. Java Only
Class Load/UnLoad: Class Id
GC Start/Stop: Number of non-garbage objects; Total non-garbage object space; Total object space
Monitor Enter/Exit: Object Id
Monitor Wait: Object Id; Timeout
Table 1: Trace Record Summary
In the following discussion of compaction/selection
schemes, we will be talking about encoding sequences of
calls and other events with strings and combining them
using various encodings. The degree of compaction depends
largely on which particular structures and operations we
choose for that purpose. For example, we can represent
a sequence of calls as a string or as a run length encoded
string; but we can also represent the sequence as an automa-
ton, and then combine nodes that would not be collapsed
for strings, by combining their automata, by, for example,
taking their union. Our different approaches to this are
described in section 4. The effectiveness of the combination
of compaction and choice of representation is described in
section 5.
String Compaction
The simplest way of representing the dynamic call tree is
with a string. For example, if function A calls function B
and then function C, we could represent it as ABC. This is of
course ambiguous, since the same string would result from
A calling B and B calling C without returning. We need
therefore to insert markers for return, and then the two cases
become as follows: AB_C_ and ABC__. Note that each
thread of control generates its own string. Thus the output of
a string encoding of the execution tree is a sequence of encoded
strings, one per thread. For Java, where we have thread names, we allow threads with similar names to be grouped
for encoding purposes. Once we have those strings we can
apply on them the techniques of section 4.
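As a small illustration of this encoding, the following Python sketch turns one thread's enter/exit records into such a string; the event representation is an assumption made only for the example.

def events_to_string(events):
    # events: (kind, name) pairs with kind in {"enter", "exit"} and name a
    # single-letter function name, as in the examples above.
    out = []
    for kind, name in events:
        out.append(name if kind == "enter" else "_")
    return "".join(out)

# A calls B, B returns, A calls C, C returns:
#   [("enter","A"),("enter","B"),("exit","B"),("enter","C"),("exit","C")]
#   -> "AB_C_"
# A calls B, B calls C, then both return -> "ABC__"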
Class Selection
For some applications, like modeling the behavior of a class,
parts of the trace are irrelevant. We are interested in what
constitutes a typical use of a class; for this, we need a number
of applications that use the class correctly. We can trace these
applications, and isolate the calls of methods of a particular
class (or class hierarchy) and group those by object. If we are
interested into the inner workings of the class we can monitor
the methods that an object invokes on itself; if we are only
interested in the external interface we can forego them. For
the grouping of calls per object, we need to track the first
argument of all calls, which for methods is the 'this' pointer
(this is largely system dependent behavior, but seems to be
the norm). The pointer becomes interesting when it's the first
argument of a constructor, and becomes uninteresting again
when it's the first argument of a destructor.
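A minimal sketch of this per-object grouping is given below (Python); the event format and the class_methods/ctor/dtor parameters are assumptions standing in for whatever the trace and dictionary files actually provide.

def per_object_sequences(events, class_methods, ctor, dtor):
    # events: (function, first_arg) records for one program run.  An address
    # becomes interesting when it is the first argument of the constructor
    # and uninteresting again at the destructor; in between we collect the
    # class's method calls invoked on that object.
    live, finished = {}, []
    for func, this in events:
        if func == ctor:
            live[this] = []
        elif func == dtor and this in live:
            finished.append(live.pop(this))
        elif this in live and func in class_methods:
            live[this].append(func)
    return finished + list(live.values())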
Using a similar approach we can provide models of the usage
of a library, package, or other entity. While this is easy
in principle, the difficulty in practice comes at attempting to
define what is meant by a single use of a given library or
entity. The trick we use for objects does not work in the
general case. As an example, consider the C library file in-
terface. A file is accessible to the program through a pointer
to a FILE data structure. A specific address only becomes
interesting when returned by a function (fopen). Also sometimes
it is the first argument to a function (fprintf, fscanf)
and sometimes the fourth (fread, fwrite). What we need is to
somehow specify that the return value of fopen is linked to
the first argument of fprintf and the fourth argument of fread.
On top of the specification difficulty, it is harder to trace the
program, since we might need to emit records with all the
arguments. One approach is to interpose a library that does
exactly this extra step of selecting arguments.
N-level Call Compaction
The standard UNIX prof tool provides useful performance
data by grouping all the calls to a single routine into a
single node and collecting statistics for that node. Gprof accumulates
statistics on calling pairs, but even that has been
found inadequate [20]. Using the trace data, we can provide
a simple n-level generalization of this.
To create an N-level call encoding we look at the call stack
at the start of each call. We then create, for 1 ≤ i ≤ n, the
i-tuple that includes this call and the (i−1) top items that are on
the call stack at that point. For each such tuple, we accumulate
the statistics that are collected from the trace data, computing
both the sums for averaging and the sum of squares
for computing the standard deviation.
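One way to accumulate these tuples is sketched below (Python); it attributes the statistics when a call returns and assumes the per-call cost (e.g. an instruction count) is reported on the exit record, which simplifies the trace format described earlier.

def accumulate_ntuples(events, n, stats):
    # events: (kind, function, cost) with kind in {"enter", "exit"}.
    # For each completed call we update every tuple made of the call and up
    # to n-1 of its callers (a gprof-style generalization).
    # stats[tuple] = (count, sum, sum_of_squares), from which averages and
    # standard deviations can later be derived.
    stack = []
    for kind, func, cost in events:
        if kind == "enter":
            stack.append(func)
        elif stack:                      # exit of the call on top of the stack
            for i in range(1, min(n, len(stack)) + 1):
                key = tuple(stack[-i:])
                count, total, squares = stats.get(key, (0, 0.0, 0.0))
                stats[key] = (count + 1, total + cost, squares + cost * cost)
            stack.pop()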
Interval Compaction
Another way of compacting the data is to consider the program
trace in chunks. We break the execution into a small
set of intervals (for example 1024), and then do a simple
analysis within each of the intervals to highlight what the
system is doing at that point. Currently, we define what the
system is doing in two distinct ways, providing two different
analyses for the intervals. The first one concentrates on
the calls during the interval, combining them per class. For
multithreaded applications, it also keeps track of how much
time is spent waiting within a class. Using this model we
can produce an overview visualization such as that shown in
Figure 1.
Figure 1: Interval Visualization
The second interval analysis looks at allocations on a class
basis. For each interval it records the number and size of allocations
of objects of each class by each thread. This again
can be used to provide a high level visualization of the allocation
behavior of the system or it can be combined with
the call interval analysis to provide additional details of the
behavior of the system.
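A compact sketch of the first (call-based) interval analysis is shown below (Python); the event layout, the assumption that every record carries a timestamp starting at zero, and the function name are simplifications made for the example.

from collections import defaultdict

def interval_call_profile(events, trace_length, buckets=1024):
    # events: (timestamp, thread, klass) call records; trace_length is the
    # total duration of the trace.  Returns, per interval, the number of
    # calls seen for each class (the allocation analysis is analogous).
    profile = [defaultdict(int) for _ in range(buckets)]
    for timestamp, thread, klass in events:
        index = min(int(timestamp * buckets / trace_length), buckets - 1)
        profile[index][klass] += 1
    return profile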
Dag Compaction
One effective way to compact the dynamic call tree is to
transform it into a directed acyclic graph (DAG). The dag is
built by traversing the tree in a postorder fashion and collapsing
nodes with identical strings of calls. Every such string
appears exactly once in the dag. The algorithm to compact the tree is as follows (a sketch in code is given after the list):
1. In a post-order fashion, for all nodes of the tree: Construct
a string consisting of
the function of the node,
an opening parenthesis,
the encoding of the strings of all its children,
maintaining their order,
a closing parenthesis.
2. Compute a signature of that string, by hashing or otherwise.
3. If there is a node in the dag containing the resulting
string, do nothing.
4. Otherwise, create such a node, and create an arc from
it to each of the nodes containing the tree node's chil-
dren's strings, retaining the order.
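The sketch referred to above follows (Python). It interns each node by the pair (function, child ids) rather than building the parenthesized string explicitly, and it leaves out the merging of performance statistics; the node attributes are assumptions made for the example.

def tree_to_dag(root):
    # root and its descendants are assumed to expose .function and .children.
    # (Very deep call trees would need an explicit stack instead of recursion.)
    interned = {}          # signature -> dag node index
    dag = []               # dag node: (function, tuple of child indices)
    def visit(node):       # postorder: children first, then the node itself
        child_ids = tuple(visit(child) for child in node.children)
        signature = (node.function, child_ids)   # stands in for the string
        if signature not in interned:
            interned[signature] = len(dag)
            dag.append(signature)
        return interned[signature]
    return visit(root), dag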
Once again we don't have to keep strings as strings; we can
use some other representation of a sequence, if we have an
operation to replace concatenation, and some way of defining
unambiguously the head of a sequence (what we do here
with parentheses).
While building the dag, we need to merge the performance
statistics that are associated with the individual nodes. We do
this by keeping for each dag node the count of the number
of original nodes, the sums of each statistic from each of the
original nodes, and the sum of the squares of each statistic
for each of the original nodes. This lets us provide averages
as well as standard deviations for each dag node.
The compaction is generally quite effective since much of
an execution is repetitive in nature and the dag allows such
repetitions to be collapsed into a single node.
When dealing with multiple threads, we start with the forest
of their call trees, and we apply the same algorithm to all
the trees simultaneously, but without starting with an empty
dag every time. That is, we construct a single dag that represents
all threads of control. The individual threads are then
represented as root nodes of the resultant graph.
The effectiveness of many of the compaction techniques relies
on the representation of sequences of items. There are
two approaches to such representations. The first is to provide
an exact representation of a sequence, possibly com-
pressed. The second is to provide a lossy representation,
one that represents not only the original sequence but other
sequences, too. One could, for example, detect repetitions in
the sequence and then ignore the counts of them, so that, say,
AAAABABABC would become some A's, some BA's, and
a BC, or, in regular expression notation A*(BA)*BC.
This kind of approximation of sequences is particularly useful
when one is attempting to identify similarities or, as in our
case, when one is attempting to encode a group of related sequences
using a single encoding. Several of the compaction
techniques of the previous section can use this approach. For
example, when compacting the tree into a dag, we can make
nodes in the dag describe whole families of call sequences.
As an example take the case that function A calls function B
in a loop. Under the classical string representation, it would
be mapped to a different dag node depending on the number
of iterations though the loop. Under, say, a run length encoding
scheme that ignores counts, it would be mapped to
the same node. In such cases an approximation is better for
understanding and visualization.
Our framework allows for a variety of sequence encoding
techniques. These include both approximations and exact
encodings. They vary from very simple to more complex.
The correct one to use will depend on the particular com-
paction/selection scheme that is being used and the specific
understanding task.
Run-Length Encoding
The simplest approach that we provide (other than no
encoding) is to find immediate repetitions and replace
them with a count followed by the item being repeated.
Thus, the string ABBBBBBBBBCDDDBCDC is encoded
as A 9:B C 3:D BCDC. This is very fast and often quite
effective in terms of compression. The run-length encoding
algorithm also takes a parameter k indicating the longest repetition
to be expressed exactly. Any repetition of size longer
than k will look the same. Thus, if k = 3, the above sequence
would be encoded as A *:B C 3:D BCDC.
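A few lines of Python are enough to sketch this encoder; the textual output format is chosen to mirror the examples above (singleton symbols are simply emitted, which puts spaces between the trailing characters).

def run_length_encode(sequence, k=None):
    # Collapse immediate repetitions into "count:symbol"; runs longer than
    # the cutoff k are written as "*:symbol" so they all look the same.
    out, i = [], 0
    while i < len(sequence):
        j = i
        while j < len(sequence) and sequence[j] == sequence[i]:
            j += 1
        run = j - i
        if run == 1:
            out.append(sequence[i])
        elif k is not None and run > k:
            out.append("*:" + sequence[i])
        else:
            out.append(str(run) + ":" + sequence[i])
        i = j
    return " ".join(out)

# run_length_encode("ABBBBBBBBBCDDDBCDC")      -> "A 9:B C 3:D B C D C"
# run_length_encode("ABBBBBBBBBCDDDBCDC", k=3) -> "A *:B C 3:D B C D C"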
Grammar-Based Encoding
An alternative to simple run-length encoding that finds immediate
repetitions, it to find all common subsequences and
to encode the string using a grammar where each such sub-sequence
is represented by a rule. This is a natural extension
to the RLE encoding. One such approach is the Sequitur algorithm
[14]. This algorithm builds a context-free grammar
representing the original string. It ensures that:
a) No pair of adjacent symbols appears more than once in
the grammar; and
b) Every rule is used more than once.
The algorithm works by looking at successive digrams and,
whenever a new digram is formed that duplicates an existing
one, adding a new rule. The process needs to be applied recursively
when nonterminals replace digrams and rules need
to be eliminated when their number of uses falls to one.
The standard Sequitur algorithm provides an exact encoding
of a single sequence. Our implementation of the algorithm
provides for encoding groups of sequences. This is done by
building a global table of variables. Each sequence is thenencoded separately. When the sequence encoding is com-
plete, its nonterminals are merged with those in the global
variable table so that each unique right hand side of a variable
only appears once. The final encodings are given in
terms of these global variables.
As an example, consider the string ABBBBBBBBBCDDDBCDC from above.
In the resulting grammar the subsequence BCD has been isolated into its own rule
and the sequence of B's has been encoded as the two rules R1 and R2.
The basic sequitur algorithm does a good job of finding all
common subsequences and of encoding them efficiently. It
produces a grammar that is compact and where each rule is
meaningful. On the other hand, it does not do the best job
of handling repetitions (it needs log n rules to encode a sequence
of n identical symbols as in the encoding of the 9
Bs), it does not take into account balanced strings (strings
with the same number of calls and returns), and it does not
handle alternation.
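The digram-replacement idea can be sketched offline as below (Python). This is not the incremental Sequitur algorithm itself; it is closer to a Re-Pair-style pass that repeatedly factors out the most frequent repeated digram until property (a) holds, but it conveys how rules such as the BCD rule above arise. Rule names and structure are illustrative assumptions only.

def digram_grammar(sequence):
    # Repeatedly replace the most frequent repeated digram with a fresh
    # nonterminal until no digram occurs twice (property (a) above).
    symbols, rules, next_rule = list(sequence), {}, 1
    while True:
        counts = {}
        for a, b in zip(symbols, symbols[1:]):
            counts[(a, b)] = counts.get((a, b), 0) + 1
        if not counts:
            break
        best = max(counts, key=counts.get)
        if counts[best] < 2:
            break
        name = "R%d" % next_rule
        next_rule += 1
        rules[name] = list(best)
        rewritten, i = [], 0
        while i < len(symbols):            # greedy, non-overlapping rewrite
            if i + 1 < len(symbols) and (symbols[i], symbols[i + 1]) == best:
                rewritten.append(name)
                i += 2
            else:
                rewritten.append(symbols[i])
                i += 1
        symbols = rewritten
    return symbols, rules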
We have modified the basic algorithm in two basic ways. The
first is to find immediate repetitions and represent them as
we do in run-length encoding. This is done as a post processing
step. Each rule is processed only once, after all the
nonterminals used in it are processed. Each instance of a
nonterminal in the right hand side of each rule is replaced by
its newly computed expansion, and then the algorithm does a
run-length encoding of the new right hand side. The rules above
are rewritten accordingly. Since R1 and R2 are no longer needed,
this yields a grammar which is a more logical representation of the original sequence
in that the repetition of B is shown explicitly.
This approach can then be combined with the run-length encoding
where more than k repetitions are represented as *:X.
This lets more rules be merged. The representation of the
sequence ABBBBBBBBBCDDDBCDC using Sequitur with
run-length encoding and a cutoff of 3 combines both of these effects.
Our second modification to Sequitur is designed to produce
balanced rules. This is useful for the simple string compaction
of section 3. In his dissertation [14] Nevill-Manning
notes that Sequitur can be restricted to only create rules that
contain more closing symbols than opening ones. This is
done by not considering digrams that are unbalanced. The
resultant grammar is a start, but again it does not produce
only balanced rules. We use a modification of this algorithm
that ensures that all generated rules are balanced.
This is again done through post-processing. We start by restricting
the grammar so that all generated rules are either
completely balanced or the rule starts with an open bracket
(either directly or through a nested rule) and contains more
open brackets than close brackets. This is what Nevill-
Manning's balancing algorithm does. Then we do the run-length
encoding of the rules, as we did in the first modifica-
tion. But any time we encounter an unbalanced non-terminal,
we expand it in place. This makes the rules in the final grammar
longer, and some subsequences appear more than once.
However, the resultant grammar has only balanced rules.
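For the call-string encoding of Section 3, where '_' marks a return, checking whether a rule's terminal expansion is balanced reduces to a small scan, sketched here in Python as an assumed helper.

def is_balanced(expansion):
    # expansion: the terminal symbols a rule expands to, where "_" is a
    # return and every other symbol is a call.  The expansion is balanced
    # when returns never outnumber the calls seen so far and the two counts
    # match at the end.
    depth = 0
    for symbol in expansion:
        depth += -1 if symbol == "_" else 1
        if depth < 0:
            return False
    return depth == 0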
Finite State Automata Encoding
A way to provide both alternation and repetition in the encoding
is to use finite state automata that accept the se-
quence. One can vary the accuracy and precision of the encoding
by using different means of constructing the automa-
ton. At the one extreme, one can build a chain automaton
that accepts only the given sequence. At the other extreme,
one can build a single state automaton that accepts everything
via self arcs for every possible input symbol.
Naturally, neither of these approaches is useful. What we
want from the automaton is a good intuition for what sort of
sequence is possible. In other words, the automaton should
reveal something about the structure of the sequence. When
the automaton represents multiple sequences their collective
structure must be revealed.
There has been significant previous work on inferring finite
state automata from input sequences. Most of this work has
concentrated on the use of positive and negative examples,
i.e. providing a set of inputs that are valid sequences and a
set of inputs that are not. Other previous work has looked at
interactive models where the inference procedure is able to
ask the user whether a particular input is valid or not. Neither
of these approaches is practical for the types of sequences
that we are looking at. In some ways, our sequences are special. In the path ex-
ample, the automata should reflect the internal flow graph
of a single procedure. It should encode loops, conditional
branches, and pure sequential flow. This provides us with
some direction. Other applications, such as using automata
to describe how the methods of a class should be used, are
not as clear cut. However, even here, the way that most
classes are designed to be used should result into a fairly
structured flow diagram.
In order to take advantage of the presumed structure behind
the sequence and to still provide a reasonably high degree
of abstraction while building automata, we have developed a
new algorithm for inferring an automaton from one or more
sequences.
The Basic FSA Construction Algorithm
Our algorithm constructs deterministic FSAs with the property
that any given sequence of three symbols can start at
only one state.
To formalize a little, we will use the definitions from
Hopcroft and Ullman [10]. A deterministic finite state automaton
is defined by:
a set of states Q,
an input alphabet Σ,
an initial state q0 ∈ Q,
a transition function δ: Q × Σ → Q,
and a set of final states F ⊆ Q.
In addition, we define a string of length k as an element of
Σ^k, and we use the extended definition of δ, which allows its
second argument to be a word (instead of a single symbol): δ: Q × Σ* → Q. For our specific purposes,
we define F to have only one element. We achieve that by
extending all our strings with a sequence of length k of the
special symbol $.
We say that a state q has a k-tail t ∈ Σ^k iff there exists
a state q′ such that δ(q, t) = q′, that is, iff starting from
q, it's possible to see the sequence t. We call this relation
Tail(q, t).
Our algorithm maintains two invariants: determinism of the automaton, and the mapping of each
k-tail to a single state.
The basic algorithm maps every k-tail to one state. When
two states have a k-tail in common, they are merged. Merging
two states is actually a recursive process since when two
states are merged, if they have outgoing arcs with the same
symbol, the target states of those arcs must also be merged.
The process can also affect the possible k-tails of some other
state. However, the number of potential additions that need
to be considered as one adds an arc grows as n^k, where n is
the size of the input alphabet. This is prohibitive in building
a large automaton. We therefore ignore such associations
while building the automaton and then we merge states
based on such associations only once, as a post processing
step. The final postprocessing step is classical minimization
of the FSA.
An example of the algorithm's output, when given the sequence
ABBBBBBBBBCDDDBCDC and k = 3, is shown in Figure 2.
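The construction can be sketched as below (Python). It keys states by k-tails, merges states recursively to keep the automaton deterministic, and omits the self-loop handling and the final minimization discussed in the following subsections; it is an illustration of the idea rather than the tool's actual implementation.

def build_ktail_fsa(sequence, k=3):
    seq = list(sequence) + ["$"] * k        # pad so every position has a k-tail
    parent, trans, tails = {}, {}, {}       # union-find, arcs, k-tail -> state

    def find(s):
        while parent[s] != s:
            parent[s] = parent[parent[s]]
            s = parent[s]
        return s

    def union(a, b):                        # merge b into a, keeping determinism
        a, b = find(a), find(b)
        if a == b:
            return
        parent[b] = a
        for sym, target in trans.pop(b, {}).items():
            if sym in trans.setdefault(a, {}):
                union(trans[a][sym], target)
            else:
                trans[a][sym] = target

    def state_for(i):                       # one state per distinct k-tail
        tail = tuple(seq[i:i + k])
        if tail not in tails:
            sid = len(parent)
            parent[sid] = sid
            tails[tail] = sid
        return find(tails[tail])

    for i in range(len(sequence)):
        src, sym, dst = state_for(i), seq[i], state_for(i + 1)
        if sym in trans.setdefault(src, {}):
            union(trans[src][sym], dst)
        else:
            trans[src][sym] = dst

    start, fsa = state_for(0), {}
    for s, arcs in trans.items():           # normalize targets to representatives
        for sym, t in arcs.items():
            fsa.setdefault(find(s), {})[sym] = find(t)
    return start, fsa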
Handling Self Loops
The basic algorithm does reasonably well at finding an appropriate
automaton. It fails, however, to detect repetitions
of length k or less of a single token. Consider the input sequence
ABBBC. The automaton generated for this using the basic
algorithm would consist of seven states each with a single
transition to the next one. (There are seven rather than six
states because the algorithm adds a transition on the end
$). What we actually want to generate is an automaton
with a self-loop that indicates the repetition of B.
This requires two simple modifications to the above algo-
rithm. The first occurs while we are building the automaton.
If the current state already has an incoming arc on the current token,
instead of creating a new state we create a self-loop. This is
sufficient to create self-loops for repetitive input sequences.
Consider the result of applying the modified algorithm to the
example shown in Figure 2. The self-loop for input B in state
S1 is created because of determinism. This is not enough for
the 3 Ds in state S2. The first D takes us to a new state S3.
When we get to the second of the three D's in the sequence,
the new check fires and a self loop is created. This is shown
in Figure 3(a). After the last D the next sequence is BCD
and its associated state S1. However, the self-loop in state
S3 on D would then require that S1 and S3 be merged. The
result is shown in Figure 3(b).
The second modification is a bit more complex. Self-loops
tend to cause spurious state merging. Consider the automaton
in Figure 3(b). Both states S1 and S2 have a potential
successor string of DDD. In general, if a state q has
a self-loop on input X (δ(q, X) = q), then any state q′ that
has a transition into that state with input X (δ(q′, X) = q)
would be merged with it. This is not the behavior we want.
We therefore changed the postprocessing step, so that, in situations
like this the initial instance of X causes a transition
to the state representing the loop and is not part of the loop
itself.
5 Experience
We have used the various encoding approaches with a variety
of program traces from both Java and C++. The programs that
generated the traces include the following:
Figure 3: Examples of self loops ((a) and (b) show the automata built for ABBBBBBBBBCDDDBCDC).
KnightsTour solves the problem of finding a Hamiltonian
path on the graph induced by the knight's moves on a
chessboard. It is written in C++.
Two further programs simulate the motion of a pendulum in a magnetic
field. They are both written in Java.
OnSets is an engine for the on-sets board game. Its core is a
program that builds valid logic formulas out of a set of
characters. It is written in C++.
Decaf is an optimizing compiler for a small subset of Java.
It is written in C++.
ShowMeanings is a webserver whose main function is
finding alternate meanings of words to elaborate web
searches.
The raw trace data ranges in size from one megabyte for a
simple C++ program implementing Knight's tour to twenty
gigabytes for a test of a commercial Java system that handles
web-based requests. We have tried various encodings
on each of these traces. To evaluate the quality of the results,
we have done spot checks on each trace and a more detailed
analysis on the Knight's tour example.
The encodings can be evaluated in two ways. One of the
goals of the various encodings, especially the dag and call
encodings, is to provide a more concise version of the trace
data. In these cases, we can measure the amount of compaction
that the encoding provides. Note that this is not a
good absolute measure. A program that is very repetitive
will have a more condensed encoding than one that is not,
independent of the encoding technique. Note also that we
are giving up information in doing the encodings. The encodings
typically only look at the dynamic call graph. They
ignore memory management and synchronization information
in the trace. Moreover, they also discard information
about individual objects. Based on our experience, however,
they still encode about a quarter of the original trace data.
Finally, note that the raw trace files are already quite dense
since they contain packed binary data while the encoded output
files are text files containing XML data.
The compression that results from the various encodings can
be seen in Table 2.
Figure 2: Sample run of the FSA inducing algorithm
The grammar encodings seem to grow with the size of the
trace, while the FSA encodings grow with the number of
functions in the program. Grammars also tend to grow larger
than the FSAs. However, it would be dangerous to make any
definite assertions about the sizes of the models.
The second way of evaluating the encodings, especially for
the automata-based ones, is to see if the resultant automata
reflect the intuitive behavior of the system. For this pur-
pose, we spot-checked routines in the Knight's tour exam-
ple, classes in our Java program, and the I/O classes of the
standard library, to see what the corresponding automata
looked like. An example is given in Figure 4. In all our spot
checks, the automata that were generated seem to correspond
quite well to what one would expect given the code.
One thing that we evaluated in generating the encodings was
how to determine good values for the k parameter for automata. For
most of the cases we looked at, a value of k below 2 collapsed
too many states while a value of k above 4 did not
make that much difference in the resultant automata. More-
over, in most cases, the automata did not change much between
neighboring values of k in this range. While one can come up with examples
that require an arbitrary k to find the right intuition,
a value of k = 3 seems to be a good compromise.
6 RELATED WORK
Tracing programs can be done in a number of different ways.
Essentially, the program has to emit records about its state.
One can modify the program code to generate the trace data.
This is the approach taken by some data visualization sys-
tems, like Balsa [2] and Polka[18]. This approach works
very well if the goal is something like algorithm animation,
where tracing is actually part of the finished program. A second
approach is to modify the executable. This can be done
either at compile time, like gprof does, or on the finished ex-
ecutable. Apart from our system, EEL [13] takes the same
approach. Lastly, one can run the same executable under a
tracing environment, like the JVM or a modified JVM, like
Jinsight [5].
There has been significant work in encoding the call tree.
Ours is essentially the algorithm of Jerding et al. [11], except
that they incorporate some of the steps we keep for the
encoding phase. Other approaches ([1], [21]) lose some information
in the process. Jerding et al. [11] also focus on class selection. Prof and
Gprof [9] kept statistics on function invocations and caller-callee
pairs respectively. When we try to derive an automaton
based on the usage of classes, we are, effectively, discovering
path expressions [3]. This kind of result is akin to the
work by Ernst and Notkin [6] who are trying to discover data
invariants of a program.
Sequitur has been used to compress basic block trace data by
Larus [12].
Nevill-Manning reports on the work of Gaines [7] that focused
on discovering control flow, albeit with the use of the
absolute value of the program counter. Discovering FSAs
only by positive examples has a long history. In general,
it's impossible to discover the correct minimal FSA. It is
impossible even under the Probably Approximately Correct
model. Our algorithm is closest to that of Cook and Wolf
[4], with the difference that they collapse two nodes iff one
includes all the k-tails of the other one. Since their constraint
is stricter than ours, they end up with bigger FSAs.
The research described in this paper represents a first step
toward a system that will afford a broad basis for understanding
and visualizing the dynamic behavior of large complex sys-
tems. Our current efforts involve extending this basis in a
variety of ways.
In the area of trace data collection, we are working on extensions
to reduce the size of the trace files and the impact
of tracing on program execution. We are working on an interface
for specifying which trace records are interesting, and
outputting only those. We are also incorporating the minimal
tracing of section 2 into the larger framework.
In addition we are implementing a variety of minor extensions
to let our trace collection system be used effectively
with multiple process, distributed systems.
Next, we are working on developing and incorporating additional
selection/compaction techniques. The ones we are
looking at first involve generating sequences of memory
events (e.g. allocations, memory compaction involving moving
of objects by the garbage collector, frees, garbage col-
lections), and generating sequences to reflect the use of a
library or arbitrary program abstraction.
Program Name #Functions Raw Trace Size Dag-Grammar Dag-FSA String-Grammar String-FSA
KnightTour 268 700K 120K 208K 12K 40K
OnSets 542 833M 6M 464K 1.5M 80K
Decaf 5443 2.6G 29M 5M 6.1M 700K
ShowMeanings 1488 21G 82M 623K 34M 110K
Table 2: Compression achieved by the encodings.
Finally, we are working on developing additional encoding
techniques. For the sequence encodings, we are investigating
the use of probabilistic models, like hidden Markov models
to approximate the FSAs. Probabilistic models are very flex-
ible, and it's well known how to train them. They are capable
of various things that an FSA cannot do, like segregation of a
program trace in phases. On the other hand, one has to tune
a lot of parameters for them to work properly, and since the
learning process is an optimization procedure, sometimes the [10]
resulting model gets stuck in local optima which are not very
meaningful.
--R
Exploiting hardware performance counters with flow and context sensitive profiling.
Interesting events.
The specification of process scheduling by path expressions
Discovering models of software processes from event-based data
Visualizing reference patterns for solving memory leaks in java.
Dynamically discovering likely program invariants to support program evolution.
Behaviour/structure transformations under uncertainty
gprof: A call graph execution profiler. Proceedings of the SIGPLAN symposium on compiler construction.
Introduction to Automata Theory, Languages, and Computation.
Visualizing interactions in program executions.
Whole program paths. Proceedings of the ACM SIGPLAN Conference on Programming Language Design and Implementation.
Proceedings of the SIGPLAN'95 Conference on Programming Language Design and Implementation (PLDI).
Inferring Sequential Structure. PhD thesis.
Working with patterns and code. Proceedings of the 33rd Hawaii International Conference on System Sciences.
Generating Java trace data.
Smooth, continuous animation for portraying algorithms and processes.
Figure 4: A case where the FSA algorithm discovers the control flow of KnightSquareInfo::findRestOfTour
Software Visualization: Programming as a
Practical experience of the limitations of Gprof.
--TR
Practical experience of the limitations of Gprof
Visualizing interactions in program executions
Exploiting hardware performance counters with flow and context sensitive profiling
Software visualization in the desert environment
Discovering models of software processes from event-based data
Whole program paths
Dynamically discovering likely program invariants to support program evolution
Generating Java trace data
A portable sampling-based profiler for Java virtual machines
Introduction To Automata Theory, Languages, And Computation
Visualizing Reference Patterns for Solving Memory Leaks in Java
Gprof
Working with Patterns and Code
--CTR
Alessandro Orso , James A. Jones , Mary Jean Harrold , John Stasko, Gammatella: Visualization of Program-Execution Data for Deployed Software, Proceedings of the 26th International Conference on Software Engineering, p.699-700, May 23-28, 2004
James A. Jones , Alessandro Orso , Mary Jean Harrold, GAMMATELLA: visualizing program-execution data for deployed software, Information Visualization, v.3 n.3, p.173-188, September 2004
Steven P. Reiss, An overview of BLOOM, Proceedings of the 2001 ACM SIGPLAN-SIGSOFT workshop on Program analysis for software tools and engineering, p.2-5, June 2001, Snowbird, Utah, United States
Valentin Dallmeier , Christian Lindig , Andreas Zeller, Lightweight bug localization with AMPLE, Proceedings of the sixth international symposium on Automated analysis-driven debugging, p.99-104, September 19-21, 2005, Monterey, California, USA
Steven P. Reiss, Visualizing Java in action, Proceedings of the ACM symposium on Software visualization, June 11-13, 2003, San Diego, California
Ludovic Langevine , Mireille Ducass, A tracer driver for hybrid execution analyses, Proceedings of the sixth international symposium on Automated analysis-driven debugging, p.143-148, September 19-21, 2005, Monterey, California, USA
Davide Lorenzoli , Leonardo Mariani , Mauro Pezz, Inferring state-based behavior models, Proceedings of the 2006 international workshop on Dynamic systems analysis, May 23-23, 2006, Shanghai, China
Alessandro Orso , James Jones , Mary Jean Harrold, Visualization of program-execution data for deployed software, Proceedings of the ACM symposium on Software visualization, June 11-13, 2003, San Diego, California
Stuart Marshall , Kirk Jackson , Craig Anslow , Robert Biddle, Aspects to visualising reusable components, Proceedings of the Asia-Pacific symposium on Information visualisation, p.81-88, January 01, 2003, Adelaide, Australia
Ankit Goel , Abhik Roychoudhury , Tulika Mitra, Compactly representing parallel program executions, ACM SIGPLAN Notices, v.38 n.10, October
Rhodes Brown , Karel Driesen , David Eng , Laurie Hendren , John Jorgensen , Clark Verbrugge , Qin Wang, STEP: a framework for the efficient encoding of general trace data, ACM SIGSOFT Software Engineering Notes, v.28 n.1, January
Tao Wang , Abhik Roychoudhury, Using Compressed Bytecode Traces for Slicing Java Programs, Proceedings of the 26th International Conference on Software Engineering, p.512-521, May 23-28, 2004
Murali Krishna Ramanathan , Ananth Grama , Suresh Jagannathan, Path-Sensitive Inference of Function Precedence Protocols, Proceedings of the 29th International Conference on Software Engineering, p.240-250, May 20-26, 2007
John Whaley , Michael C. Martin , Monica S. Lam, Automatic extraction of object-oriented component interfaces, ACM SIGSOFT Software Engineering Notes, v.27 n.4, July 2002
Ben Liblit , Alex Aiken , Alice X. Zheng , Michael I. Jordan, Bug isolation via remote program sampling, ACM SIGPLAN Notices, v.38 n.5, May
Glenn Ammons , Rastislav Bodk , James R. Larus, Mining specifications, ACM SIGPLAN Notices, v.37 n.1, p.4-16, Jan. 2002
Robert J. Walker , Kevin Viggers, Implementing protocols via declarative event patterns, ACM SIGSOFT Software Engineering Notes, v.29 n.6, November 2004
Sebastian Elbaum , Madeline Hardojo, An empirical study of profiling strategies for released software and their impact on testing activities, ACM SIGSOFT Software Engineering Notes, v.29 n.4, July 2004
Manos Renieris , Shashank Ramaprasad , Steven P. Reiss, Arithmetic program paths, ACM SIGSOFT Software Engineering Notes, v.30 n.5, September 2005
Sebastian Elbaum , Madeline Diep, Profiling Deployed Software: Assessing Strategies and Testing Opportunities, IEEE Transactions on Software Engineering, v.31 n.4, p.312-327, April 2005
David Lo , Siau-Cheng Khoo, SMArTIC: towards building an accurate, robust and scalable specification miner, Proceedings of the 14th ACM SIGSOFT international symposium on Foundations of software engineering, November 05-11, 2006, Portland, Oregon, USA
Jinlin Yang , David Evans , Deepali Bhardwaj , Thirumalesh Bhat , Manuvir Das, Perracotta: mining temporal API rules from imperfect traces, Proceeding of the 28th international conference on Software engineering, May 20-28, 2006, Shanghai, China | program tracing;software understand;dynamic program analysis |
382913 | Role-based authorization constraints specification. | Constraints are an important aspect of role-based access control (RBAC) and are often regarded as one of the principal motivations behind RBAC. Although the importance of constraints in RBAC has been recognized for a long time, they have not received much attention. In this article, we introduce an intuitive formal language for specifying role-based authorization constraints named RCL 2000 including its basic elements, syntax, and semantics. We give soundness and completeness proofs for RCL 2000 relative to a restricted form of first-order predicate logic. Also, we show how previously identified role-based authorization constraints such as separation of duty (SOD) can be expressed in our language. Moreover, we show there are other significant SOD properties that have not been previously identified in the literature. Our work shows that there are many alternate formulations of even the simplest SOD properties, with varying degree of flexibility and assurance. Our language provides us a rigorous foundation for systematic study of role-based authorization constraints. | INTRODUCTION
Role-based access control (RBAC) has emerged as a widely accepted alternative to
classical discretionary and mandatory access controls [Sandhu et al. 1996]. Several
models of RBAC have been published and several commercial implementations are
available. RBAC regulates the access of users to information and system resources
on the basis of activities that users need to execute in the system. It requires
the identification of roles in the system. A role can be defined as a set of actions
and responsibilities associated with a particular working activity. Then, instead
of specifying all the accesses each individual user is allowed, access authorizations
on objects are specified for roles. Since roles in an organization are relatively
persistent with respect to user turnover and task re-assignment, RBAC provides
a powerful mechanism for reducing the complexity, cost, and potential for error
in assigning permissions to users within the organization. Because roles within an
organization typically have overlapping permissions, RBAC models include features
to establish role hierarchies, where a given role can include all of the permissions
of another role. Another fundamental aspect of RBAC is authorization constraints
(also simply called constraints). Although the importance of constraints in RBAC
has been recognized for a long time, they have not received much attention in
the research literature, while role hierarchies have been practiced and discussed at
considerable length.
In this paper our focus is on constraint specifications, i.e., on how constraints can
be expressed. Constraints can be expressed in natural languages, such as English,
or in more formal languages. Natural language specification has the advantage of
ease of comprehension by human beings, but may be prone to ambiguities. Natural
language specifications do not lend themselves to the analysis of properties of the
set of constraints. For example, one may want to check if there are conflicting
constraints in the set of access constraints for an organization. We opted for a formal
language approach to specify constraints. The advantages of a formal approach
include a formal way of reasoning about constraints, a framework for identifying new
types of constraints, a classification scheme for types of constraints (e.g., prohibition
constraints and obligation constraints), and a basis for supporting optimization and
specification techniques on sets of constraints.
To specify these constraints we introduce the specification language RCL 2000
(for Role-based Constraints Language 2000, pronounced R'ickle 2000), which is the
specification language for role-based authorization constraints. In this paper we describe
its basic elements, syntax, and the formal foundation of RCL 2000 including
rigorous soundness and completeness proofs. RCL 2000 is a substantial generalization
of RSL99 [Ahn and Sandhu 1999], which is the earlier version of RCL 2000. It
encompasses obligation constraints in addition to the usual separation of duty and
prohibition constraints. 1
Who would be the user of RCL 2000? The first reaction might be to say the
security officer or the security administrator. However, we feel there is room for a
security policy designer distinct from security administrator. The policy designer
has to understand organizational objectives and articulate major policy decisions
1 A common example of prohibition constraints is separation of duty. We can consider the following
statement as an example of this type of constraints: if a user is assigned to purchasing manager,
he cannot be assigned to accounts payable manager and vice versa. This statement requires that
the same individual cannot be assigned to both roles which are declared mutually exclusive. We
identify another class of constraints called obligation constraints. In [Sandhu 1996], there is a
constraint which requires that certain roles should be simultaneously active in the same session.
There is another constraint which requires a user to have certain combinations of roles in user-role
assignment. We classify such constraints as obligation constraints.
to support these objectives. The security officer or security administrator is more
concerned with day to day operations. Policy in the large is specified by the security
policy designer and the actions of the security administrator should be subject to
this policy. Thus policy in the large might stipulate what is the meaning of conflicting
roles and what roles are in conflict. For example, the meaning of conflicting
roles for a given organization might be that no users other than senior executives
can belong to two conflicting roles. For another organization the meaning might
be that no one, however senior, may belong to two conflicting roles. In another
context we may want both these interpretations to coexist. So we have a notion
of weak conflict (former case) and strong conflict (latter case), applied to different
roles sets. RCL 2000 is also useful for security researchers to think and reason
about role-based authorization constraints.
The rest of this paper is organized as follows. In section 2 we describes the formal
language RCL 2000 including basic elements and syntax. In section 3 we describe
its formal semantics including soundness and completeness proofs. Section 4 shows
the expressive power of RCL 2000. Section 5 concludes this paper.
2. ROLE-BASED CONSTRAINTS LANGUAGE (RCL 2000)
RCL 2000 is defined in context of RBAC96 which is a well-known family of models
for RBAC [Sandhu et al. 1996]. This model has become a widely-cited authoritative
reference and is the basis of a standard currently under development by the National
Institute of Standards and Technology [Sandhu et al. 2000]. Here we use a slightly
augmented form of RBAC96 illustrated in figure 1. We decompose permissions into
operations and objects to enable formulation of certain forms of constraints. Also
in figure 1 we drop the administrative roles of RBAC96 since they are not germane
to RCL 2000.
Intuitively, a user is a human being or an autonomous agent, a role is a job function
or job title within an organization with some associated semantics regarding
the authority and responsibility conferred on a member of the role, and a permission
is an approval of a particular mode of access (operation) to one or more objects in
the system. Roles are organized in a partial order or hierarchy, so that a senior role
inherits permissions from junior roles, but not vice versa. A user can be a member
of many roles and a role can have many users. Similarly, a role can have many
permissions and the same permission can be assigned to many roles. Each session
relates one user to possibly many roles. Intuitively, a user establishes a session (e.g.,
by signing on to the system) during which the user activates some subset of roles
that he or she is a member of. The permissions available to the users are the union
of permissions from all roles activated in that session. Each session is associated
with a single user. This association remains constant for the life of a session. A
user may have multiple sessions open at the same time, each in a different window
on the workstation screen, for instance. Each session may have a different combination
of active roles. The concept of a session equates to the traditional notion of
a subject in access control. A subject is a unit of access control, and a user may
have multiple subjects (or sessions) with different permissions (or roles) active at
the same time. RBAC96 does not define constraints formally.
Constraints are an important aspect of role-based access control and are a powerful
mechanism for laying out higher level organizational policy. The construc-
[Figure 1 (diagram) depicts the augmented RBAC96 model: USERS (U), ROLES (R), OPERATIONS (OP), and OBJECTS (OBJ), related through the user assignment (UA), permission assignment (PA), and role hierarchy (RH) relations, together with the user and roles functions.]
Fig. 1. Basic Elements and System Functions : from RBAC96 Model
tions of [Sandhu 1996; Sandhu and Munawer 1998] clearly demonstrate the strong
connection between constraints and policy in RBAC systems. The importance of
flexible constraints to support emerging applications has been recently discussed
by Jaeger [Jaeger 1999]. Consequently, the specification of constraints needs to be
considered. To date, this topic has not received much formal attention in context of
role-based access control. A notable exception is the work of Giuri and Iglio [Giuri
and Iglio 1996] who defined a formal model for constraints on role-activation. RCL
2000 considers all aspects of role-based constraints, not just those applying to role
activation. Another notable exception is the work of Gligor et al [Gligor et al.
1998] who formalize separation of duty constraints enumerated informally by Simon
and Zurko [Simon and Zurko 1997]. RCL 2000 goes beyond separation of
duty to include obligation constraints [Ahn 2000] such as used in the constructions
of [Sandhu 1996; Osborn et al. 2000] for simulating mandatory and discretionary
access controls in RBAC. 2
One of our central claims is that it is futile to try to enumerate all interesting
and practically useful constraints because there are too many possibilities and vari-
ations. Instead, we should pursue an intuitively simple yet rigorous language for
specifying constraints such as RCL 2000. The expressive power of RCL 2000 is
demonstrated in section 4, where it shown that many constraints previously identified
in the RBAC literature and many new ones can be conveniently formulated
in RCL 2000.
2.1 Basic Components
The basic elements and system functions on which RCL 2000 is based are defined
in figure 2. Figure 1 shows the RBAC96 model which is the context for these defi-
nitions. RCL 2000 has six entity sets called users (U), roles (R), objects (OBJ), oper-
2 Intuitively, Prohibition Constraints are constraints that forbid the RBAC component from doing
(or being) something which it is not allowed to do (or be). Most SOD constraints are included
in this class of constraints. Obligation Constraints, on the other hand, are constraints that force the RBAC
component to do (or be) something.
-U = a set of users, {u_1, ..., u_n}.
-R = a set of roles, {r_1, ..., r_m}.
-OP = a set of operations, {op_1, ..., op_o}.
-OBJ = a set of objects, {obj_1, ..., obj_r}.
-P = OP × OBJ, a set of permissions, {p_1, ..., p_q}.
-S = a set of sessions, {s_1, ..., s_r}.
-RH ⊆ R × R is a partial order on R called the role hierarchy or role dominance relation,
written as ≥.
-UA ⊆ U × R, a many-to-many user-to-role assignment relation.
-PA ⊆ P × R, a many-to-many permission-to-role assignment relation.
-user: S → U, a function mapping each session s_i to the single user.
-user: R → 2^U, a function mapping each role r_i to a set of users.
-roles: a function mapping the sets U, P, and S to a set of roles R.
-roles*: extends roles in the presence of a role hierarchy:
 roles*(u_i) = {r ∈ R | (∃ r' ≥ r) [(u_i, r') ∈ UA]},
 roles*(p_i) = {r ∈ R | (∃ r' ≤ r) [(p_i, r') ∈ PA]},
 roles*(s_i) = {r ∈ R | (∃ r' ≥ r) [r' ∈ roles(s_i)]}.
-sessions: a function mapping each user u_i to a set of sessions.
-permissions: a function mapping each role r_i to a set of permissions.
-permissions*: extends permissions in the presence of a role hierarchy:
 permissions*(r_i) = {p ∈ P | (∃ r' ≤ r_i) [(p, r') ∈ PA]}.
-operations: a function mapping each role r_i and object obj_i to a set of operations.
-object: a function mapping each permission p_i to a set of objects.
Fig. 2. Basic Elements and System Functions : from the RBAC96 Model
[Figure 3 (diagram) shows an example role hierarchy: Employee (E) at the bottom; Engineering Department (ED) above it; Engineer 1 and Engineer 2 (E2) for Project 1 and Project 2; Production Engineer and Quality Engineer roles within each project; Project lead 1 (PL1) and Project lead 2 (PL2); and Director (DIR) at the top.]
Fig. 3. Example of role hierarchies
ations (OP), permissions (P), and sessions (S). These are interpreted as in RBAC96
as discussed above. OBJ and OP are not in RBAC96. OBJ is the set of passive entities that
contain or receive information. An operation in OP is an executable image of a program, which upon
execution causes information flow between objects. A permission in P is an approval of a particular
mode of operation to one or more objects in the system.
The function user gives us the user associated with a session and roles gives
us the roles activated in a session. Both functions do not change during the life of
a session. This is a slight simplification from RBAC96 which does allow roles in a
session to change. RCL 2000 thus builds in the constraint that roles in a session
cannot change.
Hierarchies are a natural means for structuring roles to reflect an organization's
lines of authority and responsibility (see Figure 3). By convention, senior roles
are shown toward the top of this diagram and junior roles toward the bottom.
Mathematically, these hierarchies are partial orders. A partial order is a reflexive,
transitive, and antisymmetric relation, so that if x - y then role x inherits the
permissions of role y , but not vice versa. In figure 3, the junior-most role is that
of Employee. The Engineering Department role is senior to Employee and thereby
inherits all permissions from Employee. The Engineering Department role can
have permissions besides those it inherited. Permission inheritance is transitive,
for example, the Engineer1 role inherits permissions from both the Engineering
Department and Employee roles. Engineer1 and Engineer2 both inherit permissions
from the Engineering Department role, but each will have different permissions
directly assigned to it.
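To make the inheritance rule concrete, the sketch below computes the permissions available to a role by walking down the hierarchy. The Python encoding of the hierarchy and of PA, and the role and permission names, are assumptions made for illustration; they are not taken from the paper.

# A minimal sketch of permission inheritance over a role hierarchy.
# ROLE_JUNIORS and PA are an assumed toy encoding of the RBAC96 relations
# RH and PA, loosely modelled on figure 3.

ROLE_JUNIORS = {                       # role -> immediately junior roles
    "Director": {"ProjectLead1", "ProjectLead2"},
    "ProjectLead1": {"Engineer1"},
    "ProjectLead2": {"Engineer2"},
    "Engineer1": {"EngineeringDept"},
    "Engineer2": {"EngineeringDept"},
    "EngineeringDept": {"Employee"},
    "Employee": set(),
}

PA = {                                 # role -> directly assigned permissions
    "Employee": {"read_bulletin"},
    "EngineeringDept": {"use_lab"},
    "Engineer1": {"edit_project1"},
    "Director": {"approve_budget"},
}

def juniors_closure(role):
    """All roles junior to `role`, including the role itself (r >= r)."""
    seen, stack = {role}, [role]
    while stack:
        for j in ROLE_JUNIORS.get(stack.pop(), set()):
            if j not in seen:
                seen.add(j)
                stack.append(j)
    return seen

def permissions_star(role):
    """Permissions directly assigned to the role or to any junior role."""
    perms = set()
    for r in juniors_closure(role):
        perms |= PA.get(r, set())
    return perms

print(sorted(permissions_star("Engineer1")))
# ['edit_project1', 'read_bulletin', 'use_lab']

Engineer1 ends up with its own permission plus everything inherited through the Engineering Department and Employee roles, mirroring the description of figure 3 in the text.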
The user assignment relation UA is a many-to-many relation between users and
roles. Similarly the permission-assignment relation PA is a many-to-many relation
-CR = a collection of conflicting role sets, {cr_1, ..., cr_s}, where each cr_i ⊆ R.
-CP = a collection of conflicting permission sets, {cp_1, ..., cp_t}, where each cp_i ⊆ P.
-CU = a collection of conflicting user sets, {cu_1, ..., cu_v}, where each cu_i ⊆ U.
Fig. 4. Basic Elements and Non-deterministic Functions: beyond RBAC96 Model
between permissions and roles. Users are authorized to use the permissions of roles
to which they are assigned. This is the essence of RBAC.
The remaining functions defined in figure 2 are built from the sets, relations and
functions discussed above. In particular, note that the roles and user functions
can have different types of arguments so we are overloading these symbols. Also
the definition of roles is carefully formulated to reflect the role inheritance with
respect to users and sessions going downward and with respect to permissions going
upward. In other words a permission in a junior role is available to senior roles,
and activation of a senior role makes available permissions of junior roles. This is a
well-accepted concept in the RBAC literature and is a feature of RBAC96. Using
a single symbol roles simplifies our notation so long as we keep this duality of
inheritance in mind.
Additional elements and system functions used in RCL 2000 are defined in figure
4. The precise meaning of conflicting roles, permissions and users will be specified
as per organizational policy in RCL 2000. For mutually disjoint organizational
roles such as those of purchasing manager and accounts payable manager, the same
individual is generally not permitted to belong to both roles. We defined these
mutually disjoint roles as conflicting roles. We assume that there is a collection CR
of sets of roles which have been defined to conflict.
The concept of conflicting permissions defines conflict in terms of permissions
rather than roles. Thus the permission to issue purchase orders and the permission
to issue payments are conflicting, irrespective of the roles to which they are assigned.
We denote sets of conflicting permissions as CP. As we will see defining conflict
in terms of permissions offers greater assurance than defining it in terms of roles.
Conflict defined in terms of roles allows conflicting permissions to be assigned to the
same role by error (or malice). Conflict defined in terms of permissions eliminates
this possibility. In the real world, conflicting users should also be considered. For
example, for the process of preparing and approving purchase orders, it might be
company policy that if a member of a family prepares a purchase order, no member
of the same family should approve that order.
RCL 2000 has two non-deterministic functions, oneelement and allother. The
oneelement(X) function allows us to get one element x_i from set X. We usually write
oneelement as OE. Multiple occurrences of OE(X) in a single RCL 2000 statement
all select the same element x_i from X. With allother(X) we can get a set by taking
out one element. We usually write allother as AO. These two non-deterministic
functions are related by context, because for any set S, {OE(S)} ∪ AO(S) = S;
at the same time, neither is a deterministic function.
In order to illustrate how to use these two functions to specify role-based con-
straints, we take the requirement of static separation of duty (SOD) property which
is the simplest variation of SOD. For simplicity assume there is no role hierarchy
(otherwise replace roles by roles*).
Requirement: No user can be assigned to two conflicting roles. In other
words, conflicting roles cannot have common users. We can express this
requirement as below.
| roles(OE(U)) ∩ OE(CR) | ≤ 1
OE(CR) means a conflicting role set and the function roles(OE(U)) returns all roles
which are assigned to a single user OE(U). Therefore this statement ensures that
a single user cannot have more than one conflicting role from the specific role
set OE(CR). We can interpret the above expression as saying that if a user has
been assigned to one conflicting role, that user cannot be assigned to any other
conflicting role. We can also specify this property in many different ways using
RCL 2000, such as
user(OE(OE(CR))) ∩ user(AO(OE(CR))) = ∅. A similar expression over the roles
activated in a single session specifies dynamic separation of duties, as opposed to static
separation applied to user-role assignment; dynamic separation applied to all sessions
of a user can be expressed analogously.
A permission-centric formulation of separation of duty is specified as roles(OE(OE(CP)))
∩ roles(AO(OE(CP))) = ∅. The expression roles(OE(OE(CP))) means all roles which
have a conflicting permission from, say, cp_i; roles(AO(OE(CP))) stands for all
roles which have other conflicting permissions from the same conflicting permission
set cp_i. This formulation leaves open the particular roles to which conflicting permissions
are assigned but requires that they be distinct. This is just a sampling of
the expressive power of RCL 2000 to be discussed in section 4.
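The non-deterministic OE terms become checkable by quantifying over every possible choice, which is how the RFOPL translation in section 3 reads them. The sketch below tests the static SOD requirement stated above against a toy state; the UA and CR data and the roles helper are assumptions for illustration only.

# Sketch: checking | roles(OE(U)) ∩ OE(CR) | <= 1 by expanding both OE terms
# into universal quantification over every choice (every user, every
# conflicting role set). UA and CR are assumed toy data.

UA = {                                   # user -> roles assigned to that user
    "alice": {"purchasing_manager", "clerk"},
    "bob":   {"accounts_payable_manager"},
}
CR = [                                   # collection of conflicting role sets
    {"purchasing_manager", "accounts_payable_manager"},
]

def roles(u):
    return UA.get(u, set())

def satisfies_ssod_cr(users, conflicting_sets):
    # Every choice of OE(U) and OE(CR) must satisfy the cardinality bound.
    return all(len(roles(u) & cr) <= 1 for u in users for cr in conflicting_sets)

print(satisfies_ssod_cr(UA.keys(), CR))   # True
UA["bob"].add("purchasing_manager")       # bob now holds two conflicting roles
print(satisfies_ssod_cr(UA.keys(), CR))   # False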
RCL 2000 system functions do not include a time or state variable in their struc-
ture. So we assume that each function considers the current time or state. For
example, the sessions function maps a user u i to a set of current sessions which
are established by user u i . Elimination of time or state from the language simplifies
its formal semantics. RCL 2000 thereby cannot express history or time-based con-
straints. It will need to be extended to incorporate time or state for this purpose.
As a general notational device we have the following convention.
-For any set-valued function f defined on set X, we understand f(X) = ∪_{x ∈ X} f(x).
For example, suppose we want to get all users who are assigned to a set of roles
R = {Employee, Engineer1, Engineer2}. We can express this using the function user(R),
which is equivalent to user(Employee) ∪ user(Engineer1) ∪ user(Engineer2).
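A small helper makes this lifting convention concrete. The dictionary-backed user function and the role names below are illustrative assumptions, not part of RCL 2000.

# Sketch of the convention f(X) = union of f(x) over x in X.
# UA is an assumed toy role-to-users assignment used only for illustration.

UA = {
    "Employee":  {"alice", "bob", "carol"},
    "Engineer1": {"alice"},
    "Engineer2": {"bob"},
}

def user(role):
    """user(r): the set of users assigned to role r."""
    return UA.get(role, set())

def lifted(f, xs):
    """Apply a set-valued function to every element of xs and union the results."""
    return set().union(*(f(x) for x in xs)) if xs else set()

print(sorted(lifted(user, {"Employee", "Engineer1", "Engineer2"})))
# ['alice', 'bob', 'carol']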
2.2 Syntax of RCL 2000
The syntax of RCL 2000 is defined by the syntax diagram and grammar given in
figure 5. The rules take the form of flow diagrams. The possible paths represent
the possible sequence of symbols. Starting at the beginning of a diagram, a path
is followed either by transferring to another diagram if a rectangle is reached or by
reading a basic symbol contained in a circle. Backus Normal Form (BNF) is also
[The syntax diagrams for statement, expression, token, and term are not reproduced here; the recoverable BNF productions are:]
size ::= OE
set ::= U | R | OP | OBJ | P | S | CR | CU | CP | cr | cu | cp
function ::= user | roles | roles* | sessions | permissions | permissions* | operations | object
Fig. 5. Syntax of Language
used to describe the grammar of RCL 2000 as shown in the bottom of figure 5.
The symbols of this form are: ::= meaning "is defined as" and | meaning "or."
Figure
5 shows that RCL 2000 statements consist of an expression possibly followed
by implication (=)) and another expression. Also RCL 2000 statements can be
recursively combined with logical AND operator (-). Each expression consists of a
token followed by a comparison operator and token, size, set, or set with cardinality.
Also token itself can be an expression. Each token can be just a term or a term
with cardinality. Each term consists of functions and sets including set operators.
The sets and system functions described earlier in section 2.1 are allowed in this
syntax. Also, we denote oneelement and allother as OE and AO respectively.
3. FORMAL SEMANTICS OF RCL 2000
In this section we discuss the formal semantics for RCL 2000. We do so by identifying
a restricted form of first order predicate logic called RFOPL which is exactly
equivalent to RCL 2000. Any property written in RCL 2000, called a RCL 2000
expression, can be translated to an equivalent expression in RFOPL and vice versa.
The syntax of RFOPL is described later in this section. The translation algorithm,
namely Reduction, converts a RCL 2000 expression to an equivalent RFOPL ex-
pression. This algorithm is outlined in figure 6. Reduction algorithm eliminates AO
from RCL 2000 expression in the first step. Then we translate OE terms
iteratively introducing universal quantifiers from left to right. If we have nested
OE functions in the RCL 2000 expression, translation will start from innermost OE
terms. This algorithm translates RCL 2000 expression to RFOPL expression in
time O(n), supposing that the number of OE term is n.
For example, the following expression can be converted to RFOPL expression
according to the sequences below.
Example 1
Example 2
The resulting RFOPL expression will have the following general structure.
(1) The RFOPL expression has a (possibly empty) sequence of universal quantifiers
as a left prefix, and these are the only quantifiers it can have. We call this
sequence the quantifier part.
Reduction Algorithm
Input: RCL 2000 expression ; Output: RFOPL expression
Let a Simple-OE term be either OE(set) or OE(function(element)), where
set is an element of {U, R, OP, OBJ, P, S, CR, CU, CP, cr, cu, cp} and
function is an element of {user, roles, roles*, sessions, permissions, permissions*,
operations, object}
1. AO elimination
replace all occurrences of AO(expr) with (expr - {OE(expr)});
2. OE elimination
While There exists Simple-OE term in RCL 2000 expression
choose Simple-OE term;
call reduction procedure;
End
Procedure reduction
case (i) Simple-OE term is OE(set)
create new variable x
put 8x 2 set to right of existing quantifier(s);
replace all occurrences of OE(set) by x ;
case (ii) Simple-OE term is OE(function(element))
create new variable x
put 8x 2 function(element) to right of existing quantifier(s);
replace all occurrences of OE(function(element)) by x ;
End
Fig. 6. Reduction
Construction Algorithm
Input: RFOPL expression ; Output: RCL 2000 expression
1. Construction RCL 2000 expression from RFOPL expression
While There exists a quantifier in RFOPL expression
choose the rightmost quantifier 8 x 2 X;
pick values x and X from the chosen quantifier;
replace all occurrences of x by OE(X);
End
2. Replacement of AO
if there is (expr - {OE(expr)}) in RFOPL expression
replace it with AO(expr);
Fig. 7. Construction
[The syntax diagrams for the term forms allowed in the predicate (function(element), set, and element, combined with an operator) are not reproduced here.]
Fig. 8. Syntax of restricted FOPL expression
(2) The quantifier part will be followed by a predicate separated by a colon (:), i.e.,
universal quantifier part : predicate
(3) The predicate has no free variables or constant symbols. All variables are
declared in the quantifier part, e.g., ∀ r ∈ R, ∀ u ∈ U.
(4) The order of quantifiers is determined by the sequence of OE elimination. In
some cases this order is important so as to reflect the nesting of OE terms in the
RCL 2000 expression. For example, in ∀ cr ∈ CR, ∀ r ∈ cr, ∀ u ∈ …, the set cr,
which is used in the second quantifier, must be declared in a previous quantifier
as an element, such as cr in the first quantifier.
The predicate follows most of the rules in the syntax of RCL 2000 except the term syntax
in figure 5. Figure 8 shows the syntax which the predicate should follow to express a
term.
Because the reduction algorithm has non-deterministic choice for reduction of OE
term, we may have several RFOPL expressions that are translated from a RCL 2000
expression. As we will see in lemma 4 these expressions are logically equivalent, so
it does not matter semantically which one is obtained.
Next, we discuss the algorithm Construction that constructs a RCL 2000 expression
from an RFOPL expression. The algorithm is described in figure 7. This
algorithm repeatedly chooses the rightmost quantifier in RFOPL expression and
constructs the corresponding OE term by eliminating the variable of that quantifier.
After all quantifiers are eliminated the algorithm constructs AO terms according to
the formal definition of AO function. The running time of the algorithm obviously
depends on the number of quantifiers in RFOPL expression.
For example, the following RFOPL expression can be converted to RCL 2000
expression according to the sequence described below.
RFOPL expression:
RCL 2000 expression :
Unlike the reduction algorithm we can observe the following lemma, where C(expr)
denotes the RCL 2000 expression constructed by Construction algorithm.
Lemma 1. C(β) always gives us the same RCL 2000 expression α.
Proof: The construction algorithm always chooses the rightmost quantifier to construct
the RCL 2000 expression from the RFOPL expression. This procedure is deterministic.
Therefore, given RFOPL expression β, we will always get the same RCL 2000 expression α. 2
We introduced two algorithms, namely Reduction and Construction, that can
reduce and construct RCL 2000 expression. Next we show the soundness and
completeness of this relationship between RCL 2000 and RFOPL expressions.
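Before turning to the proofs, a rough executable sketch may help fix intuitions. It reduces a textual RCL 2000 expression to a quantified RFOPL-like form and then reconstructs it, checking the round-trip property that Theorem 1 below establishes. Handling expressions as strings, the variable-naming scheme, and the omission of AO elimination are simplifications assumed by this sketch; the paper's algorithms operate on the grammar of figure 5.

import re

# Rough string-based sketch of OE elimination (fig. 6, step 2) and of the
# Construction algorithm (fig. 7). A simple OE term is OE(set) or
# OE(function(element)); the negative lookahead skips an outer OE whose
# argument is itself an OE term, so inner terms are reduced first.
SIMPLE_OE = re.compile(r"OE\((?!OE\()(\w+(?:\(\w+\))?)\)")

def reduce_to_rfopl(expr):
    """Eliminate OE terms, emitting one universal quantifier per distinct term."""
    quantifiers, counter = [], 0
    while True:
        m = SIMPLE_OE.search(expr)
        if m is None:
            break
        counter += 1
        var = f"x{counter}"
        quantifiers.append(f"forall {var} in {m.group(1)}")
        expr = expr.replace(m.group(0), var)   # all occurrences, same variable
    return (", ".join(quantifiers) + ": " + expr) if quantifiers else expr

def construct_from_rfopl(rfopl):
    """Invert the reduction: drop the rightmost quantifier, reintroduce OE."""
    if ": " not in rfopl:
        return rfopl
    prefix, body = rfopl.split(": ", 1)
    quantifiers = prefix.split(", ")
    while quantifiers:
        _, var, _, domain = quantifiers.pop().split(" ", 3)   # rightmost first
        body = body.replace(var, f"OE({domain})")
    return body

expr = "| roles(OE(U)) intersect OE(CR) | <= 1"
rfopl = reduce_to_rfopl(expr)
print(rfopl)   # forall x1 in U, forall x2 in CR: | roles(x1) intersect x2 | <= 1
assert construct_from_rfopl(rfopl) == expr    # the round trip of Theorem 1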
3.1 Soundness Theorem
Let us define the expressions generated during reduction and construction as intermediate
expressions, collectively called IE. These expressions have a mixed form of
RCL 2000 and RFOPL expressions, that is, they contain quantifiers as well as OE
terms. Note that RCL 2000 and RFOPL expressions are also intermediate expressions.
In order to show the soundness of RCL 2000, we introduce the following lemma.
Lemma 2. If the intermediate expression γ is derived from RCL 2000 expression
α by the reduction algorithm in k iterations, then the construction algorithm applied to γ
will terminate in exactly k iterations.
Proof: It is obvious that γ has k quantifiers because the reduction algorithm
generates exactly one quantifier for each iteration. Now the construction algorithm
eliminates exactly one quantifier per iteration, and will therefore terminate in k
iterations. 2
This leads to the following theorem, where R(expr) denotes the RFOPL expression
translated by the Reduction algorithm. We define all occurrences of the same OE term
in an intermediate expression as a distinct OE term.
Theorem 1. Given RCL 2000 expression α, α can be translated into RFOPL
expression β. Also α can be reconstructed from β. That is, C(R(α)) = α.
Proof: Let us define C_n as n iterations of the construction algorithm, and R_n as n
iterations of the reduction algorithm. We will prove the stronger result that C_n(R_n(α)) = α
by induction on the number of iterations in reduction R (or, C under the result
of lemma 2).
Basis: If the number of iterations n is 0, the theorem follows trivially.
Inductive Hypothesis: We assume that if n=k, this theorem is true.
Inductive Step: Consider the intermediate expression γ translated by the reduction
algorithm in k + 1 iterations. Let γ′ be the intermediate expression translated by the
reduction algorithm in the k-th iteration. γ differs from γ′ in having an additional
rightmost quantifier and one less distinct OE term. Applying the construction algorithm
to γ eliminates this rightmost quantifier and brings back the same OE term in
all its occurrences; thus the first iteration of the construction algorithm applied to γ gives us γ′. From
this intermediate expression γ′, we can construct α due to the inductive hypothesis.
This completes the inductive proof. 2
3.2 Completeness Theorem
In order to show the completeness of RCL 2000 relative to RFOPL, we introduce
the following lemma, analogous to lemma 2.
Lemma 3. If the intermediate expression γ is derived from RFOPL expression
β by the construction algorithm in k iterations, then the reduction algorithm applied to γ
will terminate in exactly k iterations.
Proof: It is obvious that γ has k distinct OE terms because the construction algorithm
generates exactly one distinct OE term for each iteration. Now the reduction
algorithm eliminates exactly one distinct OE term per iteration, and will therefore
terminate in k iterations. 2
Next we prove our earlier claim that even though the reduction algorithm is
non-deterministic, all RFOPL expressions translated from the same RCL 2000 expression
will be logically equivalent. More precisely we prove the following result.
Lemma 4. Let α be an intermediate expression. If R(α) gives us β_1 and β_2, then β_1 ≡ β_2.
Proof: The proof is by induction on the number n of OE terms in α.
Basis: If n is 0 the lemma follows trivially.
Inductive Hypothesis: We assume that if n=k, this lemma is true.
Inductive Step: Let n = k + 1. By definition R reduces a simple OE term. Clearly
the choice of variable symbol used for this term is not significant. The choice of
term does not matter so long as it is a simple term. Thus all choices for reducing
a simple OE term are equivalent. The lemma follows by the induction hypothesis. 2
The final step to our desired completeness result is obtained below.
Lemma 5. There exists an execution of R such that R(C(β)) = β.
Proof: We prove the stronger result that there is an execution of R such that
R_n(C_n(β)) = β, by induction on the number of iterations in construction C (or, R
under the result of lemma 3).
Basis: If the number of iterations n is 0, the theorem follows trivially.
Inductive Hypothesis: We assume that if n=k, this theorem is true.
Inductive Step: Consider the intermediate expression γ constructed by the construction
algorithm in k + 1 iterations. Let γ′ be the intermediate expression after the k-th
iteration. γ differs from γ′ in having one less quantifier and one more distinct
OE term. Applying one iteration of the reduction algorithm to γ, we can choose
to eliminate this particular OE term and introduce the same variable in the new
rightmost quantifier. This gives us γ′. By the inductive hypothesis, from γ′ there is an
execution of R_k that will give us β. 2
Putting these facts together, we obtain the theorem which shows the completeness
of RCL 2000, relative to RFOPL.
Theorem 2. Given RFOPL expression β, β can be translated into RCL 2000
expression α. Also any β′ retranslated from α is logically equivalent to β. That is, R(C(β)) ≡ β.
Proof: Lemma 1 states that C(β) gives us a unique result. Let us call it α. Lemma 5
states there is an execution of R that will go back exactly to β from α. Lemma 4
states that all executions of R for α will give an equivalent RFOPL expression.
The theorem follows. 2
In this section we have given a formal semantics for RCL 2000 and have demonstrated
its soundness and completeness. Any property written in RCL 2000 could
be translated to an expression which is written in a restricted form of first order
predicate logic, which we call RFOPL. During the analysis of this translation, we
proved two theorems which support the soundness and completeness of the specification
language RCL 2000 and RFOPL respectively.
4. EXPRESSIVE POWER OF RCL 2000
In this section we demonstrate the expressive power of RCL 2000 by showing how
it can be used to express a variety of separation of duty (SOD) properties. In [Ahn
2000] it is further shown how the constructions of [Sandhu 1996] and [Osborn et al.
2000], which respectively simulate mandatory and discretionary access controls in RBAC,
can be expressed in RCL 2000. As a security principle, SOD is a fundamental
technique for prevention of fraud and errors, known and practiced long before the
existence of computers. It is used to formulate multi-user control policies, requiring
that two or more different users be responsible for the completion of a transaction
or set of related transactions. The purpose of this principle is to minimize fraud by
spreading the responsibility and authority for an action or task over multiple users,
thereby raising the risk involved in committing a fraudulent act by requiring the
involvement of more than one individual. A frequently used example is the process
of preparing and approving purchase orders. If a single individual prepares and
approves purchase orders, it is easy and tempting to prepare and approve a false
order and pocket the money. If different users must prepare and approve orders,
then committing fraud requires a conspiracy of at least two, which significantly
raises the risk of disclosure and capture.
Although separation of duty is easy to motivate and understand intuitively, so far
there is no formal basis for expressing this principle in computer security systems.
Several definitions of SOD have been given in the literature. For the purpose of
this paper we use the following definition.
Separation of duty reduces the possibility for fraud or significant errors
(which can cause damage to an organization) by partitioning of tasks
and associated privileges so cooperation of multiple users is required to
complete sensitive tasks.
We have the following definition for interpreting SOD in role-based environments.
Role-Based separation of duty ensures SOD requirements in role-based
systems by controlling membership in, activation of, and use of
roles as well as permission assignment.
There are several papers in the literature over the past decade which deal with
separation of duty. During this period various forms of SOD have been identified.
Attempts have been made to systematically categorize these definitions. Notably,
Simon and Zurko [Simon and Zurko 1997] provide an informal characterization,
and Gligor et al. [Gligor et al. 1998] provide a formalism of this characterization.
However, this work has significant limitations. It omits important forms of SOD
Properties                 Expressions
1. SSOD-CR                 | roles(OE(U)) ∩ OE(CR) | ≤ 1
2. SSOD-CP                 | permissions(roles(OE(U))) ∩ OE(CP) | ≤ 1
3. Variation of 2          (2) ∧ | permissions*(OE(R)) ∩ OE(CP) | ≤ 1
4. Variation of 1          (1) ∧ | permissions …
5. SSOD-CU                 (1) ∧ …
6. Yet another variation   (4) ∧ (5)
Table 1. Static Separation of Duty
including session-based dynamic SOD needed for simulating lattice-based access
control and Chinese Walls in RBAC [Sandhu 1993; Sandhu 1996]. It also does not
deal with SOD in the presence of role hierarchies. Moreover, as will see, there are
additional SOD properties that have not been identified in the previous literature.
Here, we take a different approach to understand SOD. Rather than simply enumerating
different kinds of SOD we show how RCL 2000 can be used to specify the
various separation of duty properties.
4.1 Static SOD
Static SOD (SSOD) is the simplest variation of SOD. In table 1 we show our
expression of several forms of SSOD. These include new forms of SSOD which have
not previously been identified in the literature. This demonstrates how RCL 2000
helps us in understanding SOD and discovering new basic forms of it.
Property 1 is the most straightforward property. The SSOD requirement is that
no user should be assigned to two roles which conflict with each other. In other
words, it means that conflicting roles cannot have common users. RCL 2000 can
clearly express this property. This property is the classic formulation of SSOD
which is identified by several papers including [Gligor et al. 1998; Kuhn 1997;
Sandhu et al. 1996]. It is a role-centric property.
Property 2 follows the same intuition as property 1, but is permission-centric. It requires
that a user can have at most one conflicting permission acquired
through roles assigned to the user. Property 2 is a stronger formulation than property
1, which prevents mistakes in role-permission assignment. This kind of property
has not been previously mentioned in the literature. RCL 2000 helps us discover
such omissions in previous work. In retrospect property 2 is an "obvious property,"
but there is no mention of this property in over a decade of SOD literature. Even
though property 2 allows more flexibility in role-permission assignment, since the
conflicting roles are not predefined, it can also generate roles which cannot be used
at all. For example, two conflicting permissions can be assigned to a role. Property
2 simply requires that no user can be assigned to such a role or any role senior to
it, which makes that role quite useless. Thus property 2 prevents certain kinds of
mistakes in role-permission assignment but tolerates others.
Property 3 eliminates the possibility of useless roles with an extra condition,
| permissions*(OE(R)) ∩ OE(CP) | ≤ 1. This condition ensures that each role can have
at most one conflicting permission without consideration of user-role assignment.
With this new condition, we can extend property 1 in the presence of conflicting
permissions as property 4. In property 4 we have another additional condition that
conflicting permissions can only be assigned to conflicting roles. In other words,
non-conflicting roles cannot have conflicting permissions. The net effect is that a
user can have at most one conflicting permission via roles assigned to the user.
Property 4 can be viewed as a reformulation of property 3 in a role-centric man-
ner. Property 3 does not stipulate a concept of conflicting roles. However, we
can interpret conflicting roles to be those that happen to have conflicting permissions
assigned to them. Thus for every cp_i we can define cr_i = {r ∈ R |
permissions(r) ∩ cp_i ≠ ∅}. With this interpretation, properties 3 and 4 are essentially
identical. The viewpoint of property 3 is that conflicting permissions get
assigned to distinct roles which thereby become conflicting, and therefore cannot be
assigned to the same user. Which roles are deemed conflicting is not determined a
priori but is a side-effect of permission-role assignment. The viewpoint of property
4 is that conflicting roles are designated in advance and conflicting permissions must
be restricted to conflicting roles. These properties have different consequences on
how roles get designed and managed but essentially achieve the same objective with
respect to separation of conflicting permissions. Both properties achieve this goal
with much higher assurance than property 1. Property 2 achieves this goal with
similar high assurance but allows for the possibility of useless roles. Thus, even in
the simple situation of static SOD, we have a number of alternative formulations
offering different degrees of assurance and flexibility.
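The difference in assurance between the user-centric and role-centric views can be seen on a toy state. The sketch below checks property 2 and, separately, property 3's per-role condition; the UA, PA, and CP data are assumptions for illustration.

# Sketch: property 2 (user-centric) versus property 3's per-role condition.
# UA, PA and CP are assumed toy data.

UA = {"alice": {"buyer"}}
PA = {"buyer": {"issue_purchase_order"},
      "mixed": {"issue_purchase_order", "issue_payment"}}   # a potentially useless role
CP = [{"issue_purchase_order", "issue_payment"}]

def user_permissions(u):
    perms = set()
    for r in UA.get(u, set()):
        perms |= PA.get(r, set())
    return perms

# Property 2: every user reaches at most one permission from each conflict set.
property2 = all(len(user_permissions(u) & cp) <= 1 for u in UA for cp in CP)

# Property 3's extra clause: every role holds at most one permission from each set.
useless_roles = [r for r, perms in PA.items() for cp in CP if len(perms & cp) > 1]

print(property2)        # True: no user is assigned the "mixed" role
print(useless_roles)    # ['mixed']: tolerated by property 2, ruled out by property 3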
Property 5 is a very different property and is also new to the literature. With a
notion of conflicting users, we identify new forms of SSOD. Property 5 says that two
conflicting users cannot be assigned to roles in the same conflicting role set. This
property is useful because it is much easier to commit fraud if two conflicting users
can have different conflicting roles in the same conflicting role set. This property
prevents this kind of situation in role-based systems. A collection of conflicting
users is less trustworthy than a collection of non-conflicting users, and therefore
should not be mixed up in the same conflicting role set. This property has not
been previously identified in the literature.
We also identify a composite property which includes conflicting users, roles and
permissions. Property 6 combines properties 4 and 5, so that conflicting users cannot
have conflicting roles from the same conflict set, while assuring that conflicting roles
have at most one conflicting permission from each conflicting permission set. This
property supports SSOD in user-role and role-permission assignment with respect
to conflicting users, roles, and permissions.
4.2 Dynamic SOD
In RBAC systems, a dynamic SOD (DSOD) property with respect to the roles activated
by the users requires that no user can activate two conflicting roles. In other
words, conflicting roles may have common users, but users cannot simultaneously
activate roles which conflict with each other. From this requirement we can express
user-based Dynamic SOD as property 1. We can also identify a Session-based
Dynamic SOD property which can apply to the single session as property 2. We can
also consider these properties with conflicting users such as property 1-1 and 2-1.
Additional analysis of dynamic SOD properties based on conflicting permissions
can also be pursued as was done for static SOD.
Properties                         Expressions
1. User-based DSOD                 | roles …
1-1. User-based DSOD with CU       | roles …
2. Session-based DSOD              | roles …
2-1. Session-based DSOD with CU    | roles …
Table 2. Dynamic Separation of Duty
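A session-based dynamic check differs from the static ones only in the state it inspects: the roles activated in each session rather than the roles assigned to each user. The session and role names below are illustrative assumptions.

# Sketch: session-based DSOD -- no single session may have two active roles
# from the same conflicting role set. SESSION_ROLES and CR are toy data.

SESSION_ROLES = {                   # session id -> roles activated in that session
    "s1": {"purchasing_manager"},
    "s2": {"accounts_payable_manager", "clerk"},
}
CR = [{"purchasing_manager", "accounts_payable_manager"}]

def satisfies_session_dsod(session_roles, conflicting_sets):
    return all(len(active & cr) <= 1
               for active in session_roles.values() for cr in conflicting_sets)

print(satisfies_session_dsod(SESSION_ROLES, CR))   # True
SESSION_ROLES["s2"].add("purchasing_manager")
print(satisfies_session_dsod(SESSION_ROLES, CR))   # False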
5. CONCLUSION
In this paper we have described the specification language RCL 2000. This language
is built on RBAC96 components and has two non-deterministic functions
OE and AO. We have given a formal syntax and semantics for RCL 2000 and have
demonstrated its soundness and completeness. Any property written in RCL 2000
could be translated to an expression which is written in a restricted form of first
order predicate logic, which we call RFOPL. During the analysis of this transla-
tion, we proved two theorems which support the soundness and completeness of the
specification language RCL 2000 and RFOPL respectively.
RCL 2000 provides us a foundation for studying role-based authorization con-
straints. It is more natural and intuitive than RFOPL. The OE and AO operators
were intuitively motivated by Chen and Sandhu [Chen and Sandhu 1995] and formalized
in RCL 2000. They provide a viable alternative to reasoning in terms of
long strings of universal quantifiers. Also the same RCL 2000 expression has multiple
but equivalent RFOPL formulations indicating that there is a unifying concept
in RCL 2000.
There is room for much additional work with RCL 2000 and similar specification
languages. The language can be extended by introducing time and state. Analysis
of RCL 2000 specifications and their composition can be studied. The efficient
enforcement of these constraints can also be investigated. A user-friendly front-end
to the language can be developed so that it can be realistically used by security
policy designers.
Acknowledgement
This work is partially supported by the National Science Foundation and the National
Agency.
--R
The RCL
The RSL99 language for role-based separation of duty constraints
Constraints for role based access control.
A formal model for role-based access control with constraints
On the increasing importance of constraints.
Mutual exclusion of roles as a means of implementing separation of duty in role-based access control systems
Configuring role-based access control to enforce mandatory and discretionary access control policies
The nist model for role-based access control: Towards a unified standard
How to do discretionary access control using roles.
Role hierarchies and constraints for lattice-based access controls
Separation of duty in role-based environments
--TR
Role-Based Access Control Models
Mutual exclusion of roles as a means of implementing separation of duty in role-based access control systems
Constraints for role-based access control
How to do discretionary access control using roles
On the increasing importance of constraints
The <italic>RSL99</italic> language for role-based separation of duty constraints
The NIST model for role-based access control
Configuring role-based access control to enforce mandatory and discretionary access control policies
Lattice-Based Access Control Models
Role Hierarchies and Constraints for Lattice-Based Access Controls
A Formal Model for Role-Based Access Control with Constraints
Separation of Duty in Role-based Environments
The rcl 2000 language for specifying role-based authorization constraints
--CTR
David F. Ferraiolo , Ravi Sandhu , Serban Gavrila , D. Richard Kuhn , Ramaswamy Chandramouli, Proposed NIST standard for role-based access control, ACM Transactions on Information and System Security (TISSEC), v.4 n.3, p.224-274, August 2001
Trent Jeager, Managing access control complexity using metrices, Proceedings of the sixth ACM symposium on Access control models and technologies, p.131-139, May 2001, Chantilly, Virginia, United States
Ninghui Li , Mahesh V. Tripunitara, Security analysis in role-based access control, ACM Transactions on Information and System Security (TISSEC), v.9 n.4, p.391-420, November 2006
Ninghui Li , Mahesh V. Tripunitara , Qihua Wang, Resiliency policies in access control, Proceedings of the 13th ACM conference on Computer and communications security, October 30-November 03, 2006, Alexandria, Virginia, USA
Timothy Fraser , David Ferraiolo , Mikel L. Matthews , Casey Schaufler , Stephen Smalley , Robert Watson, Panel: which access control technique will provide the greatest overall benefit, Proceedings of the sixth ACM symposium on Access control models and technologies, p.141-149, May 2001, Chantilly, Virginia, United States
Jason Crampton , George Loizou, Authorisation and antichains, ACM SIGOPS Operating Systems Review, v.35 n.3, p.6-15, July 1 2001
Chiara Braghin , Daniele Gorla , Vladimiro Sassone, Role-based access control for a distributed calculus, Journal of Computer Security, v.14 n.2, p.113-155, January 2006
Jason Crampton , George Loizou, Administrative scope: A foundation for role-based administrative models, ACM Transactions on Information and System Security (TISSEC), v.6 n.2, p.201-231, May
James B. D. Joshi , Elisa Bertino , Arif Ghafoor, An Analysis of Expressiveness and Design Issues for the Generalized Temporal Role-Based Access Control Model, IEEE Transactions on Dependable and Secure Computing, v.2 n.2, p.157-175, April 2005
Ninghui Li , Ziad Bizri , Mahesh V. Tripunitara, On mutually-exclusive roles and separation of duty, Proceedings of the 11th ACM conference on Computer and communications security, October 25-29, 2004, Washington DC, USA
Shih-Chien Chou , Wei-Chuan Hsu , Wei-Kuang Lo, DPE/PAC: decentralized process engine with product access control, Journal of Systems and Software, v.76 n.3, p.207-219, June 2005
Dongwan Shin , Gail-Joon Ahn , Sangrae Cho , Seunghun Jin, On modeling system-centric information for role engineering, Proceedings of the eighth ACM symposium on Access control models and technologies, June 02-03, 2003, Como, Italy
Tolone , Gail-Joon Ahn , Tanusree Pai , Seng-Phil Hong, Access control in collaborative systems, ACM Computing Surveys (CSUR), v.37 n.1, p.29-41, March 2005
Jean Bacon , Ken Moody , Walt Yao, A model of OASIS role-based access control and its support for active security, ACM Transactions on Information and System Security (TISSEC), v.5 n.4, p.492-540, November 2002
James B. D. Joshi , Elisa Bertino , Usman Latif , Arif Ghafoor, A Generalized Temporal Role-Based Access Control Model, IEEE Transactions on Knowledge and Data Engineering, v.17 n.1, p.4-23, January 2005
Steve Barker , Peter J. Stuckey, Flexible access control policy specification with constraint logic programming, ACM Transactions on Information and System Security (TISSEC), v.6 n.4, p.501-546, November
Ninghui Li , Mahesh V. Tripunitara, Security analysis in role-based access control, Proceedings of the ninth ACM symposium on Access control models and technologies, June 02-04, 2004, Yorktown Heights, New York, USA
M. Koch , L. V. Mancini , F. Parisi-Presicce, Administrative scope in the graph-based framework, Proceedings of the ninth ACM symposium on Access control models and technologies, June 02-04, 2004, Yorktown Heights, New York, USA
Steve Neely , Helen Lowe , David Eyers , Jean Bacon , Julian Newman , Xiaofeng Gong, An architecture for supporting vicarious learning in a distributed environment, Proceedings of the 2004 ACM symposium on Applied computing, March 14-17, 2004, Nicosia, Cyprus
Trent Jaeger , Reiner Sailer , Xiaolan Zhang, Resolving constraint conflicts, Proceedings of the ninth ACM symposium on Access control models and technologies, June 02-04, 2004, Yorktown Heights, New York, USA
Dongwan Shin , Gail-Joon Ahn , Sangrae Cho , Seunghun Jin, A role administration system in role-based authorization infrastructures: design and implementation, Proceedings of the ACM symposium on Applied computing, March 09-12, 2003, Melbourne, Florida
Trent Jaeger , Xiaolan Zhang , Fidel Cacheda, Policy management using access control spaces, ACM Transactions on Information and System Security (TISSEC), v.6 n.3, p.327-364, August
Sushil Jajodia , Duminda Wijesekera, Recent advances in access control models, Proceedings of the fifteenth annual working conference on Database and application security, p.3-15, July 15-18, 2001, Niagara, Ontario, Canada
David F. Ferraiolo, An argument for the role-based access control model, Proceedings of the sixth ACM symposium on Access control models and technologies, p.142-143, May 2001, Chantilly, Virginia, United States
Duminda Wijesekera , Sushil Jajodia, Policy algebras for access control the predicate case, Proceedings of the 9th ACM conference on Computer and communications security, November 18-22, 2002, Washington, DC, USA
Joon S. Park , Ravi Sandhu , Gail-Joon Ahn, Role-based access control on the web, ACM Transactions on Information and System Security (TISSEC), v.4 n.1, p.37-71, Feb. 2001
Trent Jaeger , Antony Edwards , Xiaolan Zhang, Managing access control policies using access control spaces, Proceedings of the seventh ACM symposium on Access control models and technologies, June 03-04, 2002, Monterey, California, USA
James B. D. Joshi , Basit Shafiq , Arif Ghafoor , Elisa Bertino, Dependencies and separation of duty constraints in GTRBAC, Proceedings of the eighth ACM symposium on Access control models and technologies, June 02-03, 2003, Como, Italy
Ninghui Li , Mahesh V. Tripunitara , Ziad Bizri, On mutually exclusive roles and separation-of-duty, ACM Transactions on Information and System Security (TISSEC), v.10 n.2, p.5-es, May 2007
Mark Strembeck , Gustaf Neumann, An integrated approach to engineer and enforce context constraints in RBAC environments, ACM Transactions on Information and System Security (TISSEC), v.7 n.3, p.392-427, August 2004
Shariq Rizvi , Alberto Mendelzon , S. Sudarshan , Prasan Roy, Extending query rewriting techniques for fine-grained access control, Proceedings of the 2004 ACM SIGMOD international conference on Management of data, June 13-18, 2004, Paris, France
Ninghui Li , Qihua Wang, Beyond separation of duty: an algebra for specifying high-level security policies, Proceedings of the 13th ACM conference on Computer and communications security, October 30-November 03, 2006, Alexandria, Virginia, USA
Jason Crampton, Specifying and enforcing constraints in role-based access control, Proceedings of the eighth ACM symposium on Access control models and technologies, June 02-03, 2003, Como, Italy
Gail-Joon Ahn , Badrinath Mohan , Seng-Phil Hong, Towards secure information sharing using role-based delegation, Journal of Network and Computer Applications, v.30 n.1, p.42-59, January 2007
Hong Chen , Ninghui Li, Constraint generation for separation of duty, Proceedings of the eleventh ACM symposium on Access control models and technologies, June 07-09, 2006, Lake Tahoe, California, USA
Rakesh Bobba , Serban Gavrila , Virgil Gligor , Himanshu Khurana , Radostina Koleva, Administering access control in dynamic coalitions, Proceedings of the 19th conference on Large Installation System Administration Conference, p.23-23, December 04-09, 2005, San Diego, CA
David Basin , Jrgen Doser , Torsten Lodderstedt, Model driven security: From UML models to access control infrastructures, ACM Transactions on Software Engineering and Methodology (TOSEM), v.15 n.1, p.39-91, January 2006
Shih-Chien Chou, Embedding role-based access control model in object-oriented systems to protect privacy, Journal of Systems and Software, v.71 n.1-2, p.143-161, April 2004
Gustaf Neumann , Mark Strembeck, An approach to engineer and enforce context constraints in an RBAC environment, Proceedings of the eighth ACM symposium on Access control models and technologies, June 02-03, 2003, Como, Italy
Trent Jaeger , Jonathon E. Tidswell, Practical safety in flexible access control models, ACM Transactions on Information and System Security (TISSEC), v.4 n.2, p.158-190, May 2001
David F. Ferraiolo , R. Chandramouli , Gail-Joon Ahn , Serban I. Gavrila, The role control center: features and case studies, Proceedings of the eighth ACM symposium on Access control models and technologies, June 02-03, 2003, Como, Italy
Longhua Zhang , Gail-Joon Ahn , Bei-Tseng Chu, A rule-based framework for role-based delegation and revocation, ACM Transactions on Information and System Security (TISSEC), v.6 n.3, p.404-441, August
Ravi Sandhu , Kumar Ranganathan , Xinwen Zhang, Secure information sharing enabled by Trusted Computing and PEI models, Proceedings of the 2006 ACM Symposium on Information, computer and communications security, March 21-24, 2006, Taipei, Taiwan
Gustaf Neumann , Mark Strembeck, Design and implementation of a flexible RBAC-service in an object-oriented scripting language, Proceedings of the 8th ACM conference on Computer and Communications Security, November 05-08, 2001, Philadelphia, PA, USA
Tanvir Ahmed , Anand R. Tripathi, Specification and verification of security requirements in a programming model for decentralized CSCW systems, ACM Transactions on Information and System Security (TISSEC), v.10 n.2, p.7-es, May 2007
Merwyn Taylor, Hierarchical data security in a query-by-example interface for a shared database, Journal of Biomedical Informatics, v.35 n.3, p.171-177, June 2002
Gail-Joon Ahn , Hongxin Hu, Towards realizing a formal RBAC model in real systems, Proceedings of the 12th ACM symposium on Access control models and technologies, June 20-22, 2007, Sophia Antipolis, France
Elisa Bertino , Ravi Sandhu, Database Security-Concepts, Approaches, and Challenges, IEEE Transactions on Dependable and Secure Computing, v.2 n.1, p.2-19, January 2005
Andreas Schaad , Volkmar Lotz , Karsten Sohr, A model-checking approach to analysing organisational controls in a loan origination process, Proceedings of the eleventh ACM symposium on Access control models and technologies, June 07-09, 2006, Lake Tahoe, California, USA
JuHum Kwon , Chang-Joo Moon, Visual modeling and formal specification of constraints of RBAC using semantic web technology, Knowledge-Based Systems, v.20 n.4, p.350-356, May, 2007
Khaled Alghathbar, Validating the enforcement of access control policies and separation of duty principle in requirement engineering, Information and Software Technology, v.49 n.2, p.142-157, February, 2007 | authorization constraints;constraints specification;access control models;role-based access control |
383040 | Query-based sampling of text databases. | The proliferation of searchable text databases on corporate networks and the Internet causes a database selection problem for many people. Algorithms such as gGLOSS and CORI can automatically select which text databases to search for a given information need, but only if given a set of resource descriptions that accurately represent the contents of each database. The existing techniques for acquiring resource descriptions have significant limitations when used in wide-area networks controlled by many parties. This paper presents query-based sampling, a new technique for acquiring accurate resource descriptions. Query-based sampling does not require the cooperation of resource providers, nor does it require that resource providers use a particular search engine or representation technique. An extensive set of experimental results demonstrates that accurate resource descriptions are created, that computation and communication costs are reasonable, and that the resource descriptions do in fact enable accurate automatic database selection. | Table
1.
Size, Size,
Name in bytes in documents
CACM 2MB 3,204
Test corpora.
Size, Size,
in unique in total
terms terms Variety
6,468 117,473 homogeneous
122,807 9,723,528 heterogeneous
very heterogenenous
Discussion of these choices is deferred to later sections of the paper.
How best to represent a large document database is an open problem. However, much of the prior research is based on simple resource descriptions consisting of term lists, term frequency or term weight information, and information about the number of documents [15; 14; 36] or number of words [7; 39; 40] contained in the resource. Zipf's Law and Heap's Law suggest that relatively accurate estimates of the first two pieces of information, term lists and the relative frequency of each term, can be acquired by sampling [20; 43].
It is not clear whether the size of a resource can be estimated with query-based sampling, but it is also not clear that this information is actually required for accurate database selection. We return to this point later in the paper.
The hypothesis motivating our work is that sufficiently accurate resource descriptions can be learned by sampling a text database with simple `free-text' queries. This hypothesis can be tested in two ways:
(1) by comparing resource descriptions learned by sampling known databases (`learned' resource descriptions) with the actual resource descriptions for those databases, and
(2) by comparing resource selection accuracy using learned resource descriptions with resource selection using actual resource descriptions.
Both types of experiments were conducted and are discussed below.
4. EXPERIMENTAL RESULTS: DESCRIPTION ACCURACY
The first set of experiments investigated the accuracy of learned resource descriptions as a function of the number of documents examined. The experimental method was based on comparing learned resource descriptions for known databases with the actual resource descriptions for those databases.
The goals of the experiments were to determine whether query-based sampling learns accurate resource descriptions, and if so, what combination of parameters produces the fastest or most accurate learning. A secondary goal was to study the sensitivity of query-based sampling to parameter settings.
The following sections describe the data, the type of resource description used, the metrics, parameter settings, and finally, experimental results.
4.1 Data
Three full-text databases were used:
CACM: a small, homogeneous set of titles and abstracts of scientific articles from the Communications of the ACM;
WSJ88: the 1988 Wall Street Journal, a medium-sized corpus of American newspaper articles;1 and
TREC-123: a large, heterogeneous database consisting of TREC CDs 1, 2, and 3, which contains newspaper articles, magazine articles, scientific abstracts, and government documents.
These are standard test corpora used by many researchers. Their characteristics are summarized in Table 1.
4.2 Resource Descriptions
Experiments were conducted on resource descriptions consisting of index terms (usually words) and their document frequencies, df (the number of documents containing each term).
Stopwords were not discarded when learned resource descriptions were constructed. However, during testing, learned and actual resource descriptions were compared only on words that appeared in the actual resource descriptions, which effectively discarded from the learned resource description any word that was considered a stopword by the database. The databases each used the default stopword list of the INQUERY IR system [34; 33; 6], which contained 418 very frequent and/or closed-class words.
Suffixes were not removed from words (`stemming') when resource descriptions were constructed. However, during controlled testing, suffixes were removed prior to comparison to the actual resource description, because the actual resource descriptions (the database indexes) were stemmed.
4.3 Metrics
Resource descriptions consisted of two types of information: a vocabulary, and frequency information for each vocabulary term. The correspondence between the learned and actual vocabularies was measured with a metric called ctf ratio. The correspondence between the learned and actual frequency information was measured with the Spearman Rank Correlation Coefficient. Each metric is described below.
4.3.1 Measuring Vocabulary Correspondence: Ctf Ratio. The terms in a learned resource description are necessarily a subset of the terms in the actual description. One could measure how many of the database terms are found during learning, but such a metric is skewed by the many terms occurring just once or twice in a collection [43; 20]. We desired a metric that gave more emphasis to the frequent and moderately-frequent terms, which we believe convey the most information about the contents of a database.
Ctf ratio is the proportion of term occurrences in the database that are covered by terms in the learned resource description. For a learned vocabulary V' and an actual vocabulary V, ctf ratio is:

  \mathrm{ctf\ ratio} = \frac{\sum_{i \in V'} ctf_i}{\sum_{i \in V} ctf_i}
1 The 1988 Wall Street Journal data (WSJ88) is included on TREC CD 1. WSJ88 is about 10% of the text on TREC CD 1.
Table 2. ctf ratio example.

Actual Resource Description        Learned Resource Descriptions
Vocabulary   ctf                   Vocabulary     ctf ratio
apple        4                     apple          40%
bear         1                     apple, cat     70%
cat          3
dog          2
where ctf_i is the number of times term i occurs in the database (collection term frequency, or ctf). A ctf ratio of 80% means that the learned resource description contains the terms that account for 80% of the term occurrences in the database.
For example, suppose a database consists of 4 occurrences of "apple", 1 occurrence of "bear", 3 occurrences of "cat", and 2 occurrences of "dog" (Table 2). If the learned resource description contains only the word "apple" (25% of the actual vocabulary terms), the ctf ratio is 40%, because "apple" accounts for 40% of the word occurrences in the database. If the learned resource description contains both "apple" and "cat", the ctf ratio is 70%. Ctf ratio measures the degree to which the learned resource description contains the words that are frequent in the actual resource description.
Note that the ctf ratios reported in this paper are not artificially inflated by finding stopwords, because ctf ratio was always computed after stopwords were removed.
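For concreteness, a minimal sketch of the ctf ratio computation, assuming the learned vocabulary and the actual description are held in plain Python containers (the function and variable names are ours, not part of the paper's implementation):

```python
def ctf_ratio(learned_vocab, actual_ctf):
    """Proportion of term occurrences in the database covered by the learned vocabulary.

    learned_vocab: iterable of terms in the learned resource description.
    actual_ctf:    dict mapping each term in the actual description to its
                   collection term frequency (ctf) in the database.
    Stopwords are assumed to have been removed from both already.
    """
    covered = sum(actual_ctf.get(term, 0) for term in set(learned_vocab))
    total = sum(actual_ctf.values())
    return covered / total if total else 0.0

# Example from Table 2: the database contains apple(4), bear(1), cat(3), dog(2).
actual = {"apple": 4, "bear": 1, "cat": 3, "dog": 2}
print(ctf_ratio(["apple"], actual))          # 0.4
print(ctf_ratio(["apple", "cat"], actual))   # 0.7
```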
4.3.2 Spearman Rank Correlation Coefficient. The second component of a resource description is document frequency information (df), which indicates the relative importance of each term in describing the database. The accuracy of frequency information can be determined either by comparison of learned and actual df values after appropriate scaling, or by comparison of the frequency-based term rankings produced by learned and actual df values. The two measurement methods emphasize different characteristics of the frequency information.
Direct comparison of df values has the undesirable characteristic that the comparison is biased in favor of estimates based on larger amounts of information, because estimates based on 10^n documents enable only n digits of accuracy in scaled values. This characteristic was a concern because even relatively noisy df estimates based on small numbers of documents might be sufficient to enable accurate resource selection.
Term rankings produced by learned and actual df values can be compared by the Spearman Rank Correlation Coefficient, an accepted metric for comparing two orderings. The Spearman Rank Correlation Coefficient is defined as:

  \rho = \frac{1 - \frac{6}{n^3 - n}\left(\sum_i d_i^2 + \frac{1}{12}\sum_k (f_k^3 - f_k) + \frac{1}{12}\sum_m (g_m^3 - g_m)\right)}{\sqrt{1 - \frac{\sum_k (f_k^3 - f_k)}{n^3 - n}}\,\sqrt{1 - \frac{\sum_m (g_m^3 - g_m)}{n^3 - n}}}

where d_i is the rank difference of common term i, n is the number of terms, f_k is the number of ties in the kth group of ties in the learned resource description, and g_m is the number of ties in the mth group of ties in the actual resource description.
Two orderings are identical when the rank correlation coefficient is 1. They are uncorrelated when the coefficient is 0, and they are in reverse order when the coefficient is -1.
The complexity of this variant of the Spearman Rank Correlation Coefficient may surprise some readers. Simpler versions are more common (e.g., [28]). However, simpler versions assume a total ordering of ranked elements; two elements cannot share the same ranking. Term rankings have many terms with identical frequencies, and hence identical rankings. Variants of the Spearman Rank Correlation Coefficient that ignore the effects of tied rankings can give misleading results, as was the case in our initial research on query-based sampling [5].
The Spearman Rank Correlation Coefficient was computed using just the terms in the intersection of V and V'. Use of the intersection is appropriate because the Spearman Rank Correlation Coefficient is used to discover whether the terms in V' are ordered appropriately by the learned frequency information.
Database selection does not require a rank correlation coefficient of 1.0. It is sufficient for the learned resource description to represent the relative importance of index terms in each database to some degree of accuracy. For example, it might be sufficient to know the ranking of a term [5]. Although most database selection algorithms are likely to be insensitive to small ranking errors, it is an open question how much error a given algorithm can tolerate before selection accuracy deteriorates.
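A minimal sketch of the tie-corrected coefficient as defined above, computed over the df values of the terms common to both descriptions (the function and helper names are ours):

```python
from collections import Counter

def _average_ranks(values):
    """Rank values in descending order; tied values share the average of their ranks."""
    order = sorted(range(len(values)), key=lambda i: values[i], reverse=True)
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        average = (i + j) / 2.0 + 1.0        # ranks are 1-based
        for k in range(i, j + 1):
            ranks[order[k]] = average
        i = j + 1
    return ranks

def _tie_term(values):
    """Sum of (f^3 - f) over groups of tied values."""
    return sum(f ** 3 - f for f in Counter(values).values())

def spearman_with_ties(learned_df, actual_df):
    """Tie-corrected Spearman rank correlation over the common vocabulary."""
    common = sorted(set(learned_df) & set(actual_df))
    n = len(common)
    if n < 2:
        return 0.0                            # too few common terms to correlate
    x = [learned_df[t] for t in common]
    y = [actual_df[t] for t in common]
    rx, ry = _average_ranks(x), _average_ranks(y)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    sf, sg = _tie_term(x), _tie_term(y)
    en3n = float(n ** 3 - n)
    numerator = 1.0 - (6.0 / en3n) * (d2 + sf / 12.0 + sg / 12.0)
    denominator = ((1.0 - sf / en3n) ** 0.5) * ((1.0 - sg / en3n) ** 0.5)
    return numerator / denominator if denominator else 0.0
```

With no tied values the correction terms vanish and the function reduces to the familiar 1 - 6*sum(d^2)/(n^3 - n).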
4.4 Parameters
Experiments with query-based sampling require making choices about how query terms are selected and how many documents are examined per query.
In our experiments, the first query run on a database was determined by selecting a term randomly from the TREC-123 vocabulary. The initial query could be selected using other criteria, for example selecting a very frequent term, or it could be selected from another resource. Several informal experiments found that the choice of the initial query term had minimal effect on the quality of the resource description learned and the speed of learning, as long as it retrieved at least one document.
Subsequent query terms were chosen by a variety of methods, as described in the following sections. However, in all cases the terms chosen were subject to requirements similar to those placed on index terms in many text retrieval systems: A term selected as a query term could not be a number, and was required to be 3 or more characters long.
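That requirement amounts to a one-line predicate; a minimal sketch (the function name is ours), applied wherever candidate query terms are drawn in the experiments below:

```python
def eligible_query_term(term):
    """A candidate query term may not be a number and must be 3 or more characters long."""
    return len(term) >= 3 and not term.isdigit()
```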
We had no hypotheses to guide the decision about how many documents to sample per database query. Instead, a series of experiments was conducted to determine the effect of varying this parameter.
The experiments presented below were ended after examining 500 documents. This stopping criterion was chosen empirically after running several initial experiments, and was biased by our interest in learning resource descriptions from small (ideally, constant-sized) samples. Several experiments with each database were continued until several thousand documents were sampled, to ensure that nothing unusual happened.
Fig. 1. Measures of how well a learned resource description matches the actual resource description of a full-text database. (a) Percentage of database word occurrences covered by terms in the learned resource description. (b) Spearman rank correlation coefficient between the term rankings in the learned resource description and the database. Four documents examined per query. Each point is the average of 10 trials.
4.5 Results
Four sets of experiments were conducted to study the accuracy of resource descriptions learned under a variety of conditions. The first set of experiments was an initial investigation of query-based sampling with the parameter settings discussed above. We call these the baseline experiments. A second set of experiments studied the effect of varying the number of documents examined per query. A third set of experiments studied the effect of varying the way query terms were selected. A fourth set of experiments studied the effect of varying the choice of the collection from which documents were picked. Each set of experiments is discussed separately below.
4.5.1 Results of Baseline Experiments. The baseline experiments were an initial investigation of query-based sampling. The goal of the baseline experiments was to determine whether query-based sampling produced accurate resource descriptions, and if so, how accuracy varied as a function of the total number of documents examined.
The initial query term was selected randomly from the TREC-123 resource description, as described above. Subsequent query terms were selected randomly from the resource description being learned.
The top four documents retrieved by each query were examined to update the resource description. Duplicate documents, that is, documents that had been retrieved previously by another query, were discarded, hence some queries produced fewer than four documents.
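Putting the baseline procedure together, a minimal sketch of the sampling loop follows. The search(term, k) and tokenize(doc) callables stand in for whatever query and parsing interfaces a particular database exposes; they, the other names, and the fallback behaviour are assumptions of this sketch, not details of the system used in the paper.

```python
import random

def query_based_sample(search, tokenize, seed_vocabulary,
                       docs_per_query=4, max_documents=500, max_queries=2000):
    """Learn a resource description by sampling a database with single-term queries.

    search(term, k):  returns up to k ranked documents for a one-term query.
    tokenize(doc):    splits a document into index terms.
    seed_vocabulary:  collection from which the initial query term is drawn
                      (e.g., a reference vocabulary such as TREC-123).
    Returns a dict mapping each observed term to its document frequency (df).
    """
    learned_df = {}
    seen_docs = set()
    query_term = random.choice(list(seed_vocabulary))

    for _ in range(max_queries):
        if len(seen_docs) >= max_documents:
            break
        for doc in search(query_term, docs_per_query):
            if doc in seen_docs:                      # previously retrieved documents are discarded
                continue
            seen_docs.add(doc)
            for term in set(tokenize(doc)):           # df counts each document once per term
                if len(term) >= 3 and not term.isdigit():
                    learned_df[term] = learned_df.get(term, 0) + 1
            if len(seen_docs) >= max_documents:
                break
        # Subsequent query terms are chosen randomly from the learned description;
        # fall back to the seed vocabulary if nothing has been learned yet.
        pool = learned_df if learned_df else seed_vocabulary
        query_term = random.choice(list(pool))
    return learned_df
```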
Ten trials were conducted, each starting from a different randomly selected query term, to compensate for the effects of random query term selection. The experimental results reported here are averages of results returned by the ten trials.
Table 3. Effect of varying the number of documents examined per query on how long it takes a sampling method to reach a ctf ratio of 80%.

Documents      CACM                     WSJ88                    TREC-123
Per Query      Total Docs   Spearman    Total Docs   Spearman    Total Docs   Spearman
The variation in the measurements obtained from each trial on a particular database was large (10%-15%) at 50 documents, but decreased rapidly. At 150 documents it was 4%-5%, and at 250 documents it was 2%-4%. The consistency among the trials suggests that the choice of the initial query term is not particularly important, as long as it returns at least one document. The effects of different strategies for selecting subsequent query terms are addressed in Section 4.5.3.
Figure 1a shows that query-based sampling quickly finds the terms that account for 80% of the non-stopword term occurrences in each collection.2 After about 250 documents, the new vocabulary being discovered consists of terms that are relatively rare in the corpus, which is consistent with Zipf's law [43].
Figure 1b shows the degree of agreement between the term orderings in the learned and actual resource descriptions, as measured by the Spearman Rank Correlation Coefficient. A high degree of correlation between learned and actual orderings is observed for all collections after seeing about 250 documents. The correlation observed for the largest collection (TREC-123) is less than the correlations observed for the smaller collections (CACM and WSJ88). Extending the number of documents sampled beyond 500 does not substantially improve the correlation measure on this large collection.
Results from both metrics support the hypothesis that accurate resource descriptions can be learned by examining only a small fraction of the collection. This result is encouraging, because it suggests that query-based sampling is a viable method of learning accurate resource descriptions.
4.5.2 Results of Varying Sample Size. The baseline experiments sampled the four most highly ranked documents retrieved for each query. However, the sampling process could have retrieved more documents, or fewer documents, per query. Doing so could change the number of queries and/or documents required to achieve a given level of accuracy, which in turn could affect the costs of running the algorithm.
A series of experiments was conducted to investigate the effects of varying the number of documents examined per query. Values of 1, 2, 4, 6, 8, and 10 documents per query were tested. As in the prior experiment, ten trials were conducted for each value, each trial starting from a different randomly selected query term,
2 Recall that stopwords were excluded from the comparison. If stopwords were included in the comparison, the rate of convergence would be considerably faster.
Fig. 2. Measures of how well a learned resource description matches the actual resource description of a full-text database. Each point is the average of 10 trials. (a), (c), and (e): Percentage of database word occurrences covered by terms in the learned resource description. (b), (d), and (f): Spearman rank correlation coefficient between the term rankings in the learned resource description and the database.
with subsequent query terms chosen randomly from the resource description being learned. Each experimental result reported below is an average of the experimental results from ten trials.
Varying the number of documents per query had little effect on the speed of learning, as measured by the average number of documents required to reach a given level of accuracy. Indeed, the effect was so small that it is difficult to display the results of different values on a single graph. Figure 2 shows results for values of 1, 4, and 8 documents per query on each database. Results for values of 2, 6, and 10 were very similar.
Table 3 provides another perspective on the experimental results. It shows the number of documents required to reach a ctf ratio of 80%. Varying the number of documents examined per query from 1 to 10 caused only minor variations in performance for 2 of the 3 databases.
Careful study reveals that examining more documents per query results in slightly faster learning (fewer queries required) on the small, homogeneous CACM database; examining fewer documents per query results in somewhat faster learning on the larger, heterogeneous TREC123 database. However, the effects of varying the number of documents per query are, on average, small. The most noticeable effect is that examining fewer documents per query results in a more consistent learning speed on all databases. There was greater variation among the ten trials when 10 documents were examined per query (3%-5%) than when 1 document was examined per query (1%-3%).
In this experiment, larger samples worked well with the small homogeneous collection, and smaller samples worked well with the large heterogeneous collection. We do not find this result surprising. Samples are biased by the queries that draw them; the documents within a sample are necessarily similar to some extent. We would expect that many small samples would better approximate a random sample than fewer large samples in collections where there is significant heterogeneity. The results support this intuition.
4.5.3 Results of Varying Query Selection Strategies. The baseline experiments select query terms randomly from the resource description being learned. Other selection criteria could be used, or terms could be selected from other sources.
One hypothesis was that it would be best to select terms that appear to occur frequently in the collection, i.e., words that are nearly frequent enough to be stopwords, because they would return the most random sample of documents. We tested this hypothesis by selecting frequent query terms, as measured by document frequency (df), collection term frequency (ctf), and average term frequency (avg_tf = ctf/df).
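These criteria amount to different scoring functions over the description from which query terms are drawn; a minimal sketch, assuming df and ctf statistics are available as dictionaries (names are ours):

```python
import random

def select_query_term(df, ctf, method="random"):
    """Pick the next single-term query from a resource description.

    df, ctf: dicts mapping terms to document frequency and collection term frequency.
    method:  'random', 'df', 'ctf', or 'avg_tf' (average term frequency, ctf / df).
    """
    candidates = [t for t in df if len(t) >= 3 and not t.isdigit()]
    if method == "random":
        return random.choice(candidates)
    score = {
        "df":     lambda t: df[t],
        "ctf":    lambda t: ctf[t],
        "avg_tf": lambda t: ctf[t] / df[t],
    }[method]
    return max(candidates, key=score)   # frequency-based criteria pick the highest-scoring term
```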
One early concern was that learned resource descriptions would be strongly biased by the set of documents that just happened to be examined first, and that this bias would be reinforced by selecting additional query terms from the learned resource description. A solution would be to select terms from a different, more complete resource description. This hypothesis was named the `other resource description', or ord, hypothesis, and was compared to the default `learned resource description', or lrd, approach used in the other experiments. The complete TREC-123 resource description served as the `other' resource description.
Fig. 3. Measures of how different query selection strategies affect the accuracy of a learned resource description. (a) and (c): Percentage of database word occurrences covered by terms in the learned resource description. (b) and (d): Spearman rank correlation coefficient between the term rankings in the learned resource description and the database. 1988 Wall Street Journal database. Four documents examined per query. Each point for the random and lrd curves is the average of 10 trials.
The choice of TREC-123 as the `other' resource description might be challenged, because WSJ88 is a subset of TREC-123. It is possible that TREC-123 might be a biased, or an unrealistically good, `other' resource description from which to select terms for sampling WSJ88. We were aware of this possible bias, and were prepared to conduct more thorough experiments if the initial results appeared to confirm the `other' resource description hypothesis.
A series of experiments was conducted, following the same experimental methodology used in previous experiments, except in how query terms were selected. Query terms were selected either randomly or based on one of the frequency criteria, from either the learned resource description (lrd) or the `other' resource description (ord). Four documents were examined per query. Ten trials were conducted for
Table 4. The differences between selecting query terms from an other resource description (ord) or learned resource description (lrd). `Significant At & Above' is the point on the curves in Figure 3 at which the difference between selecting from ord and lrd resources becomes statistically significant (t-test, p < 0.01). Values for learned resource descriptions and the random selection method are averages of 10 trials.

                            ctf ratio
Selection   Significant   100 Documents       200 Documents       300 Documents
Method      At & Above    ord       lrd       ord       lrd       ord       lrd
avg_tf      20 docs       0.8651    0.8026    0.8989    0.8552    0.9130    0.8779
random      20 docs       0.8452    0.7787    0.8859    0.8401    0.9067    0.8678
ctf         190 docs      0.7920    0.7774    0.8412    0.8310    0.8625    0.8558
df
each method that selected query terms randomly or from the learned resource description (lrd), to compensate for random variation and order effects. Experiments were conducted on all three collections, but results were sufficiently similar that only results for the WSJ88 collection are presented here.
In all of the experiments, selecting terms from the `other' resource description produced faster learning, as measured by the number of documents required to reach a given level of accuracy (Figure 3). The differences were statistically significant for all four term selection methods (t-test, p < 0.01). However, the differences were relatively large for the avg_tf and random selection methods, and were statistically significant after only 20 documents were observed; the differences were small for the ctf and df selection methods, and required 130 and 190 documents respectively to achieve statistical significance (Table 4). There might be some value to using an other resource description for avg_tf and random term selection methods, but there appears to be little value for the ctf and df selection methods.
One weakness of selecting query terms from an other resource description is that it can provide terms that do not appear in the target resource (`out of vocabulary' query terms). This characteristic is particularly noticeable with avg_tf and random term selection. Avg_tf and random selection from an other resource description produced the most accurate results (Table 4), but required many more queries to retrieve a given number of unique documents due to `out of vocabulary' queries (Table 5). Recall also that the `other' resource description (TREC-123) was a superset of the target database (WSJ88). The number of failed queries might have been higher if the `other' resource description had been a less similar database.
The experiments demonstrate that selecting query terms from the learned resource description, as opposed to a more complete `other' resource description, does not produce a strongly skewed sample of documents. Indeed, random and avg_tf selection of query terms from the learned resource description provided the best balance of accuracy and efficiency in these experiments. The worst-case behavior, obtained with an other resource description that is a poor match for the target resource, would also favor selecting terms from the learned resource description.
The experiments also demonstrate that selecting query terms randomly from the learned resource description is more effective than selecting them based on high frequency. This result was a surprise, because our hypothesis was that high-frequency terms would either occur in many contexts, or would have relatively weak contexts, producing a more random sample. That hypothesis was not supported by the experiments.

Table 5. The number of queries required to retrieve 300 documents using different query selection criteria.

Selection strategy    Random, ord   Random, lrd   avg_tf, ord   avg_tf, lrd   df, ord   df, lrd   ctf, ord   ctf, lrd
Number of queries     378           84            6,673         112           78        154       77         154
4.5.4 Results of Varying the Databases Sampled. The results of the experiments described in the preceding sections support the hypothesis that database contents can be determined by query-based sampling. However, they do not rule out a competing hypothesis: that a relatively random sample of documents from nearly any American English database would produce an equally accurate description of the three test databases. Perhaps these experiments merely reveal properties of American discourse, for example, that certain words are used commonly.
If the competing hypothesis is true, then query-based sampling is not necessary; a partial description from any relatively similar resource would produce similar results at lower computational cost. More importantly, it would cast doubt on whether partial resource descriptions distinguish databases sufficiently to enable accurate database selection. If the partial resource descriptions for most American English databases are very similar, a database selection algorithm would presumably have great difficulty identifying the databases that best match a specific information need.
A series of experiments was conducted to test the hypothesis that relatively random samples of documents from different American English databases would produce equally accurate descriptions of the three test databases.
The experimental method consisted of comparing the resource descriptions created by query-based sampling of various databases to the actual, complete resource description for the test databases. For example, resource descriptions created by query-based sampling of CACM, WSJ88, and TREC-123 databases were compared to the actual description for the CACM database (Figures 4a and 4b). The hypothesis would be supported if each of the learned resource descriptions were roughly comparable in how well they matched the actual, complete resource description of a particular database.
Experiments were conducted with the CACM, WSJ88, and TREC-123 databases. Comparisons were performed over 300-500 examined documents. The experimental results are summarized in Figure 4.
The experimental results indicate that a description learned for one resource, particularly a large resource, can contain the vocabulary that occurs frequently in other resources. For example, the resource descriptions learned for the TREC-123 database contained the vocabulary that is frequent, and presumably important, in the WSJ88 and CACM databases (Figures 4a and 4c). The results also suggest that prior knowledge of database characteristics might be required to decide which descriptions to use for each database. The CACM resource description, for example, lacked much of the vocabulary that is important to both the WSJ88 and TREC-123
Fig. 4. Measures of how well learned resource descriptions for three different databases match the actual resource description of a given database. (a), (c) and (e): Percentage of actual database term occurrences that are covered by terms in different learned resource descriptions. (b), (d) and (f): Spearman rank correlation coefficient between the actual term rankings and term rankings in different learned resource descriptions. Four documents examined per query.
resources (Figures 4c and 4e).
The problem with using the description learned for one resource to describe another, different resource is more apparent when relative term frequency is considered. Relative term frequency is important because it indicates which terms are common in a database, and most database selection algorithms prefer databases in which query terms are common. In these experiments, the relative frequency of vocabulary items in the three test databases was rarely correlated (Figures 4b, 4d, and 4f). For example, neither the WSJ88 nor the TREC-123 databases gave an accurate indication of relative term frequency in the CACM database (Figure 4b). Likewise, neither the CACM nor the TREC-123 database gave an accurate indication of term frequency for the WSJ88 database (Figure 4d). The one exception to this trend was that the WSJ88 database did appear to give a relatively accurate indication of relative term frequency in the TREC-123 database (Figure 4f).3
These experiments refute the hypothesis that the experimental results of the earlier sections are based upon language patterns that are common across different collections of American English text. There may be considerable overlap of vocabulary among the different databases, but there are also considerable differences in the relative frequencies of terms in each database. For example, the term "computer" occurs in all three databases, but its relative frequency is much higher in the CACM database than in the WSJ88 and TREC-123 databases.
Post-experiment analysis indicates that an improved experimental methodology would provide even stronger evidence refuting the alternate hypothesis. The ctf ratio does not measure the fact that the description learned for TREC-123 contains many terms not in the CACM database (Figure 4a). Hence, the ctf ratio results in Figures 4a, 4c, and 4e can overstate the degree to which the learned vocabulary from one database reflects the actual vocabulary of a different database. A large dictionary of American English would yield a ctf ratio close to 1.0 for all three of our databases, but few people would argue that it accurately described any of them.
5. EXPERIMENTAL RESULTS: SELECTION ACCURACY
The experiments described in the previous section investigate how quickly and reliably the learned resource description for a database converges upon the actual resource description. However, we do not know how accurate a resource description needs to be for accurate resource selection. Indeed, we do not even know that description accuracy is correlated with selection accuracy, although we presume that it is.
The second group of experiments investigated the accuracy of resource selection as a function of the number of documents examined. The experimental method was based on comparing the effectiveness of the database ranking algorithm when using complete and learned resource descriptions. Databases were ranked with the INQUERY IR system's default database ranking algorithm [7].
The following sections describe the data, the type of resource description used, the metrics, parameter settings, and finally, experimental results.
3 This exception may be caused by the fact that about 10% of the TREC-123 database consists of Wall Street Journal data.
Table 6. Summary statistics for the 100 databases in the testbed.

Resource       Documents Per Database            Bytes Per Database
Description    Minimum   Average   Maximum       Minimum      Average      Maximum
Actual         752       10,782    39,723        28,070,646   33,365,514   41,796,822
Learned        300       300       300           229,915      2,701,449    15,917,750
5.1 Data
The TREC-123 database described above (Section 4.1) was divided into 100 smaller databases of roughly equal size (about 33 megabytes each), but varying in the number of documents they contained (Table 6). Each database contained documents from a single source, ordered as they were found on the TREC CDs; hence documents in a database were also usually from similar timeframes. CD 1 contributed 37 databases, CD 2 contributed 27 databases, and CD 3 contributed 36 databases.
Queries were based on TREC topics 51-150 [17]. We used query sets INQ001 and INQ026, both created by the UMass CIIR as part of its participation in TREC-2 and Tipster 24-month evaluations [6]. Queries in these query sets are long, complex, and have undergone automatic query expansion.
The relevance assessments were the standard TREC relevance assessments supplied by the U.S. National Institute for Standards and Technology [17].
5.2 Resource Descriptions
Each experiment used 100 resource descriptions (one per database). Each resource description consisted of a list of terms and their document frequencies (df), as in previous experiments. Terms on a stopword list of 418 common or closed-class words were discarded. The remaining terms were stemmed with KStem [21].
5.3 Metrics
Several methods have been proposed for evaluating resource selection algorithms. The most appropriate for our needs is a recall-oriented metric called R that measures the percentage of relevant documents contained in the n top-ranked databases.4 R is defined as:

  R_n = \frac{\sum_{i=1}^{n} R_i}{\sum_{i=1}^{N} R_i}

where n is the number of databases searched, N is the total number of databases, and R_i is the number of relevant documents contained by the i'th database.
R is a cumulative metric; searching the top 3 databases always returns at least as many relevant documents as searching just the top 2 databases.
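A minimal sketch of the metric, given the per-database counts of relevant documents listed in the order produced by the database ranking algorithm (names are ours):

```python
def recall_at_n(relevant_counts_in_ranked_order, n):
    """R_n: fraction of all relevant documents held by the n top-ranked databases.

    relevant_counts_in_ranked_order: list where element i is the number of relevant
    documents in the database ranked at position i+1 by the selection algorithm.
    """
    total = sum(relevant_counts_in_ranked_order)
    if total == 0:
        return 0.0
    return sum(relevant_counts_in_ranked_order[:n]) / total

# Example: 3 databases holding 5, 0, and 5 relevant documents, in ranked order.
print(recall_at_n([5, 0, 5], 1))   # 0.5
print(recall_at_n([5, 0, 5], 2))   # 0.5  (cumulative, never decreases)
print(recall_at_n([5, 0, 5], 3))   # 1.0
```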
R is a desirable metric when the accuracy of the database ranking algorithm is to be measured independently of other system components, and when the goal is to rank databases containing many relevant documents ahead of databases containing few relevant documents.
4 The metric called R was called R in [23]. We use the more recent and more widely known name, R, in this paper.
Fig. 5. Measures of collection ranking accuracy using resource descriptions of varying accuracy. (a) Topics 51-100 (TREC query set INQ026). (b) Topics 101-150 (TREC query set INQ001). 4 documents examined per query. TREC volumes 1, 2, and 3.
5.4 Parameter Settings
The experiments in Section 4 suggested that any relatively small sample size is effective, and that different choices produce only small variations in results. We chose a sample size of four (4 documents per query), to be consistent with the baseline results in previous experiments. Query terms were chosen randomly from the learned resource description, as in the baseline experiments.
It was unclear from the experiments in Section 4 when enough samples had been taken. We chose to build resource descriptions from samples of 100 documents (about 25 queries), 300 documents (about 75 queries), and 700 documents (about 175 queries) from each database, in order to cover the space of "reasonable" numbers of samples. If results varied dramatically, we were prepared to conduct additional experiments.
The collection ranking algorithm itself forces us to set one additional parameter. The collection ranking algorithm normalizes term frequency statistics (df_{i,j}) using the length, in words, of the collection (cw_j) [7]. However, we do not know how to estimate collection size with query-based sampling. In our experiments, term frequency information (df) was normalized using the length, in words, of the set of sampled documents used to construct the resource description.
5.5 Experimental Results
The experimental results are summarized in the two graphs in Figure 5 (one per query set). The baseline in each graph is the curve showing results with the actual resource descriptions ("complete resource descriptions"). This is the best result that the collection ranking algorithm can produce when given a complete description for each collection.
Our interest is in the difference between what is achieved with complete information and what is achieved with incomplete information. Both graphs show only a small loss of effectiveness when resource descriptions are based on 700 documents.
Losses grow as less information is used, but the loss is small compared to the information reduction. Accuracy at "low recall", i.e., when only 10-20% of the databases are searched, is quite good, even when resource descriptions are based on only 100 documents.
These results are consistent with the results presented in Section 4. The earlier experiments showed that term rankings in the learned and actual resource descriptions were highly correlated after examining 100-300 documents.
These experimental results also demonstrate that it is possible to rank databases without knowing their sizes. The size of the pool of documents sampled from a database was an effective surrogate for actual database size in these tests. Our testing did not reveal whether this result is general, a characteristic of the CORI database selection algorithm, or a quirk due to the 100 database testbed. The distribution of database sizes in the testbed ranged from 752 documents to 39,723 documents, and from 28 megabytes to 42 megabytes (Table 6). A more thorough study of this characteristic would require testbeds with a wider variety of size distributions.
6. EXPERIMENTAL RESULTS: RETRIEVAL ACCURACY
The experiments described in the previous section demonstrate that resource descriptions learned with query-based sampling enable accurate resource ranking. Accurate resource ranking is generally viewed as a prerequisite to accurate document retrieval, but it is not a guarantee. The final document ranking depends upon how results from different databases are merged, which can be influenced by the quality of the resource descriptions for each database.
A third group of experiments investigated the accuracy of document retrieval in the presence of learned resource descriptions. The experimental method was based on comparing the accuracy of the final document rankings produced by a distributed system when it uses complete and learned resource descriptions to make decisions about where to search. Databases were ranked, selected, and searched, and results were merged into a final document ranking by the INQUERY IR system's default database ranking and result merging algorithms [7].
6.1 Data
The data consisted of the same 100 databases that were used to test database selection accuracy. Section 5.1 provides details.
6.2 Resource Descriptions
Each database was described by a learned resource description created from a sample of 300 documents, as done in other experiments (4 documents per query, query terms chosen randomly from the learned resource description). A sample size of 300 documents was chosen because in previous experiments it provided reasonably accurate resource descriptions at a relatively low cost (about 75 queries per database).
Each of the 100 resource descriptions (one per database) consisted of a list of terms and their document frequencies (df), as in previous experiments. Terms on a stopword list of 418 common or closed-class words were discarded. The remaining terms were stemmed with KStem [21].
6.3 Metrics
The effectiveness of archival search systems is often measured either by Precision at specified document ranks, or by Precision at specified Recall points. Precision at specified Recall points (e.g., "11-point Recall") was the standard for many years, because it normalizes results based on the number of relevant documents; results for "easy" queries (many relevant documents) and "hard" queries (few relevant documents) are more comparable. However, when there are many relevant documents, as can be the case with large databases, Precision at specified Recall points focuses attention on results that are irrelevant to many search patrons (e.g., at rank 50 and 100).
Precision at specified document ranks is often used when the emphasis is on the results a person would see in the first few screens of an interactive system. Precision at rank n is defined as:

  \mathrm{Precision}_n = \frac{R_r}{n}

where R_r is the number of retrieved relevant documents in ranks 1 through n.
Precision in our experiments was measured at ranks 5, 10, 15, 20, and 30 documents, as is common in experiments with TREC data [17]. These values indicate the accuracy that would be observed at various points on the first two or three screens of an interactive system.
6.4 Parameter Settings
All INQUERY system parameters were set to their default values for this experiment. The only choices made for these experiments were decisions about how many databases to search, and how many documents to return from each database.
INQUERY searched the 10 databases ranked most highly for the query by its database selection algorithm. The number 10 was chosen because it has been used in other recent research on distributed search with the INQUERY system [39]. The database selection algorithm ranked databases using either the learned resource descriptions or the complete resource descriptions, as determined by the experimenter.
Each searched database returned its 30 most highly ranked documents. The number 30 was chosen because Precision was measured up to, but not beyond, rank 30.
The returned documents (30 from each of the 10 databases searched) were merged, using INQUERY's default algorithm for merging "multi-database" search results. The algorithm for merging results from multiple searches is based on estimating an idf-normalized score D' for a document with a score of D in a collection with a score of C as:

  D_s = \frac{D - D_{min}}{D_{max} - D_{min}}, \qquad C_s = \frac{C - C_{min}}{C_{max} - C_{min}}    (5)

  D' = \frac{D_s + 0.4 \cdot C_s \cdot D_s}{1.4}    (6)

where D_max and D_min are the maximum and minimum possible scores any document in that database could obtain for the particular query, and C_max and C_min are
Table 7. Precision of a search system using complete and learned resource descriptions for database selection and result merging. TREC volumes 1, 2, and 3, divided into 100 databases. 10 databases were searched for each query.

              Topics 51-100 (query set INQ026)          Topics 101-150 (query set INQ001)
Document      Complete Resource   Learned Resource      Complete Resource   Learned Resource
Rank          Descriptions        Descriptions          Descriptions        Descriptions
the maximum and minimum scores any collection could obtain for the particular query. This scaling compensates for the fact that while a system like INQUERY can in theory produce document scores in the range [0, 1], in practice the tf.idf algorithm makes it mathematically impossible for a document to have a score outside a relatively narrow range. D_min and C_min are usually 0.4, and D_max and C_max are usually about 0.6. Their exact values are query-dependent, and are calculated by setting the tf component of the tf.idf formula to 0.0 and 1.0 for every query term [4].
Although the theoretical justification for this heuristic normalization is weak, it has been effective in practice [1; 2; 4; 22] and has been used in INQUERY since 1995.
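A minimal sketch of this normalization and of merging ranked lists by normalized score. The combination step follows Equations 5 and 6 above; the 0.4 and 1.4 constants are the CORI-style merging heuristic assumed in that reconstruction, and all function and parameter names are ours, not INQUERY's API.

```python
def normalized_score(d, d_min, d_max, c, c_min, c_max):
    """Normalize a document score by its database's score before merging (Equations 5-6)."""
    d_s = (d - d_min) / (d_max - d_min)    # Equation 5: scale the document score
    c_s = (c - c_min) / (c_max - c_min)    #             and the database score
    return (d_s + 0.4 * c_s * d_s) / 1.4   # Equation 6: assumed CORI-style combination

def merge_results(per_db_results, db_scores, doc_ranges, collection_range):
    """Merge per-database result lists into one ranking by normalized score.

    per_db_results:   {db: [(doc_id, raw_score), ...]} results from each searched database
    db_scores:        {db: score assigned to the database by the selection algorithm}
    doc_ranges:       {db: (d_min, d_max)} per-database score bounds for this query
    collection_range: (c_min, c_max) bounds on the score any database could obtain
    """
    c_min, c_max = collection_range
    merged = []
    for db, results in per_db_results.items():
        d_min, d_max = doc_ranges[db]
        for doc_id, raw in results:
            score = normalized_score(raw, d_min, d_max, db_scores[db], c_min, c_max)
            merged.append((score, doc_id))
    merged.sort(reverse=True)
    return [doc_id for _, doc_id in merged]
```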
6.5 Experimental Results
Databases were ranked with either an index of complete resource descriptions (baseline condition) or an index of learned resource descriptions (test condition). The 10 top-ranked databases were searched; each returned 30 documents. The result lists returned by each database were merged to produce a final result list. The scores used to rank the databases determined the value of C in Equation 6. Precision was measured at ranks 5, 10, 15, 20, and 30 documents. The experimental results are summarized in Table 7.
The experimental results indicate that distributed, or "multi-database", retrieval is as effective with learned resource descriptions as it is with complete resource descriptions. Precision with one query set (INQ026, topics 51-100) averaged 4.8% higher using learned descriptions, with a range of 2.0% to 7.2%. Precision with the other query set (INQ001, topics 101-150) averaged 3.2% lower using learned descriptions, with a range of -1.1% to -6.1%. Both the improvement and the loss were too small for a person to notice.
These experimental results extend the results of Section 5, which indicated that using learned resource descriptions to rank collections introduced only a small amount of error into the ranking process. One might argue that the amount of error was too small to cause a noticeable change in search results, but there was no evidence to support that argument. These results demonstrate that the small errors introduced by learned resource descriptions do not noticeably reduce the accuracy of the final search results.
The accuracy of the document ranking depends also on merging results from
Table 8. A comparison of the 50 most frequent terms, as measured by document frequency, in a text database and in a learned resource description constructed for that database. 1988 Wall Street Journal database. 300 documents examined (4 documents per query).
Text Learned Text Learned
Rank Database Vocabulary Rank Database Vocabulary
million company 26 group york
new million 27 concern operate
3 company new 28 exchange stock
4 make make 29 high hold
executive
6 base base 31 operate close
7 business business price group
8 two market 33 unit international
9 trade co 34 increase increase
hold general
president 36 billion time
close two 37 end exchange
president billion 38 yesterday sale
14 stock say 39 product change
interest result
er service
month share 42 recent manage
u.s. unit 43 america made
19 sta plan 44 manage work
report expect 45 current america
plan three 46 part buy
22 say trade national
interest 48 bank
expect product 49 executive end
different collections accurately. The experimental results indicate that learned resource descriptions support this activity as well. This result is important because INQUERY's result merging algorithm estimates a normalized document score as a function of the collection's score and the document's score with respect to its collection. The results indicate that not only are collections ranked appropriately using learned descriptions, but that the scores used to rank them are highly correlated with the scores produced with complete resource descriptions. This is further evidence that query-based sampling produces very accurate resource descriptions.
7. A PEEK INSIDE: SUMMARIZING DATABASE CONTENTS
Our interest is primarily in an automatic method of learning resource descriptions that are sufficiently accurate and detailed for use by automatic database selection algorithms. However, a resource description can also be used to indicate to a person the general nature of a given text database.
The simplest method is to display the terms that occur frequently and are not
Table 9. The topics covered by the Combined Health Information database.

AIDS education                                    Disease Prevention Health Promotion
Alzheimer's Disease                               Epilepsy Education and Prevention
Arthritis; Musculoskeletal and Skin Diseases      Health Promotion and Education
Cancer Patient Education                          Kidney and Urologic Diseases
Cancer Prevention and Control                     Maternal and Child Health
Complementary and Alternative Medicine            Medical Genetics and Rare Disorders
Deafness and Communication Disorders              Oral Health
Diabetes                                          Prenatal Smoking Cessation
Digestive Diseases                                Weight Control
stopwords. This method can be effective just because the database is, in some sense, guaranteed to be about the words that occur most often. For example, the list of the top 50 words found by sampling the 1988 Wall Street Journal (Table 8) contains words such as "market", "interest", "trade", "million", "stock", and "exchange", which are indeed suggestive of the overall subject of the database.
Table 8 also compares the top 50 words in the learned resource description with the top 50 words in the database. It demonstrates that after 300 documents the learned resource description is reasonably representative of the vocabulary in the target text database and it is representative of the relative importance ranks of the terms; in this example, there is 76% agreement on the top 50 terms after seeing just 300 documents.
Controlled experiments are essential to understanding the characteristics of a new technique, but less controlled, `real world' experiments can also be revealing. A simple database sampling system was built to test the algorithm on databases found on the Web. The program was tested initially on the Microsoft Customer Support Database at a time when we understood less about the most effective parameter settings. Accurate resource descriptions were learned, but at the cost of examining many documents [5].
We chose for this paper to reproduce the earlier experiment on a more easily accessible Web database, using sampling parameters that were consistent with parameter settings described elsewhere in this paper. The Combined Health Information Database [29], which is published by several health-related agencies of the U.S. government (National Institutes of Health, Centers for Disease Control and Prevention, and Health Resources and Services Administration), was selected. The database contains health-related information on the topics summarized in Table 9.
The initial query term was chosen randomly from the TREC-123 database. Subsequent query terms were chosen randomly from the resource description that was being learned. Four documents were examined per query. The experiment was ended after 300 documents were examined. Terms in the resource description were sorted by collection term frequency (ctf), and the top 100 terms were displayed. The results are shown in Table 10.
One can see easily that the database contains documents about health-related topics. Terms such as "hiv", "aids", "health", "prevention", "risk", "cdc", "transmission", "medical", "disease", "virus", "drug" and "immunodeficiency" show up
Table 10. The top 100 words found by sampling the U.S. National Institutes of Health (NIH) Combined Health Information database. Terms are ranked by collection term frequency (ctf) in the sampled documents. 300 documents were examined (4 documents per query).
hiv 1931 254 lg 296 296 control 168 86
aids 1561 291 mj 296 296 department 166 90
health 1161 237 ve 296 296 notes 163 163
prevention 666 195 veri cation 296 296 nt 163 163
education 534 293 yr 296 296 state 160 64
information 439 184 code 295 292 program 158 80
persons 393 174 english 294 280 video 148
number 384 296 ac 292 292 acquired 144 140
author 370 294 physical 282 267 de ciency 139 137
material 361 293 print 281 257 research 138 74
document 356 296 treatment 280 127 syndrome 138 138
human 355 212 cn 279 279 factors 137 95
source 346 296 corporate 279 279 drugs 132 68
report 328 89 description 278 266 united 132 80
accession 323 296 pd 266 266 centers 131 67
public 323 156 programs 264 112 world 131 55
update 317 296 organizations 261 126 box 130 121
community 313 107 positive 254 150 cdc 128 75
language 310 296 care 248 83 children 122 45
services 310 129 virus 246 192 patient 119 42
descriptors 308 296 disease 241 120 center 118 67
format 308 296 service 241 133 people 117 68
major 305 296 discusses 226 152 agencies 112
national 304 132 provides 226 154 government 112 63
transmission 304 114 professionals 217 167 nations 112 41
published 303 296 medical 212 117 describes 110 87
audience 302 293 immunode ciency 193 180 organization 109 51
availability 302 293 drug 190 74 sex 108
abstract 299 296 risk 185 99 std 107 50
date 299 296 issues 182 96 counseling 106 50
chid 297 296 brochure 180 54 refs 103 103
sub le 297 296 immune 179 144 surveillance 103 35
ab 296 296 examines 173 132
fm 296 296 women 171 61
high in the list.
Several of the most frequent words appear to indicate little about the database contents, such as "update", "published", "format", and "abstract". These terms could have been removed by using a larger stopword list. However, in general it is unclear which words in a multi-database environment should be considered stopwords, since words that are unimportant in one database may be content words for others.
Table 11. The top 50 words found by sampling TREC-123. Terms are ranked by document frequency (df) in the sampled documents. 500 documents were examined (4 documents per query).
two 460 159 say 228 94 plan 163 79
new 553 158 made 246 94 million 199 79
time 437 135 result 249 93 end 556 78
three 269 128 information 706 93 allow 190 78
system 1609 122 develop 525 91 month 222 78
base 421 115 accord 322 91 set 278 77
high 585 115 service 468 90 manage 302 77
make 254 115 general 479 87 national 209 77
state 446 114 call 432 86 change 311 76
report 336 104 number 292 86 long 153 76
product 549 103 company 304 85 problem 170 75
part 371 101 show 223 83 line 271 75
group 513 101 president 339 82 close 207 75
work 256 98 require 432 80 increase 173 75
relate 269 96 people 181 79 second 882 75
operate 396 95 support 283 79 order 236 74
follow 262 94 data 608 79
This particular resource description was based on a very simple approach to tokenizing, case conversion, and stopword removal. For example, all terms were converted to lower case, hence it does not distinguish among terms that differ only in case, such as "aids" and "AIDS". This distinction is important in this particular database, and illustrates some of the issues that a `real world' system must address. Appropriate lexical processing is not necessarily a major barrier, but accuracy in `real world' settings probably requires that it be addressed.
The Wall Street Journal and Combined Health Information databases are homogeneous to varying degrees, which may make it easier to summarize their contents with brief lists of frequent terms. This summarization technique may be less effective with larger, heterogeneous databases such as TREC-123. The top 50 words in the TREC-123 database (Table 11) provide some evidence that the database contains documents about U.S. national and business news, but it would be difficult to draw firm conclusions about the database contents from this list of words alone.
Although simple word lists are effective for summarizing database contents in some situations, they are not necessarily the most effective techniques. Frequent phrases and common relationships can be better.
Indeed, one consequence of the sampling approach to creating learned resource descriptions is that it makes more powerful summarizations possible. The sampling process is not restricted just to word lists and frequency tables, nor is it restricted to just the information the database chooses to provide. Instead, it has a set of several hundred documents from which to mine frequent phrases, names, dates, relationships, and other interesting information. This information is likely to enable construction of more powerful and more informative summaries than is possible with the simple resource descriptions used by cooperative methods.
8. OTHER USES
The set of documents sampled from a single database re ects the contents of that
database. One use of these documents is to build a resource description for a single
database, as described above. However, other uses are possible.
One potential use is in a query expansion database. Recent research showed that
query expansion significantly improves the accuracy of database selection [39]. The
state-of-the-art in query expansion is based upon analyzing the searched corpus
for co-occurrence patterns, but what database(s) should be used when the task is
database selection? This question has been unanswered.
If the documents sampled from each database were combined into a query expansion
corpus, the result would be a set of documents that reflects the contents and
word co-occurrence patterns across all of the available databases. It would require
little additional effort for a database selection service to create a query expansion
database in this manner.
Co-occurrence-based query expansion can be viewed as a form of data mining.
Other forms of data mining could also be applied to the set of documents sampled
from all databases. For example, frequent concepts, names, or relationships might
be extracted and used in a visualization interface.
The ability to construct a single database that acts as a surrogate for a set of
databases is significant, because it could be a way of rapidly porting many familiar
Information Retrieval tools to environments containing many databases. Although
there are many unanswered questions, this appears to be a promising direction for
future research.
9. CONCLUSIONS
Our hypothesis was that an accurate description of a text database can be constructed
from documents obtained by running queries on the database. Preliminary
experiments [5] supported the hypothesis, but were not conclusive. The
experiments presented in this paper test the hypothesis extensively, from multiple
perspectives, and confirm the hypothesis. The resource descriptions created by
query-based sampling are sufficiently similar to resource descriptions created from
complete information that it makes little difference which is used for database selection.
Query-based sampling avoids many of the limitations of cooperative protocols
such as STARTS. Query-based sampling can be applied to older 'legacy' databases
and to databases that have no incentive to cooperate. It is not as easily defeated
by intentional misrepresentation. It also avoids the problem of needing to reconcile
the differing tokenizing, stopword lists, word stemming, case conversion, name
recognition, and other representational choices made in each database. These representation
problems are perhaps the most serious weakness of cooperative protocols,
because they exist even when all parties intend to cooperate.
The experimental results also demonstrate that the cost of query-based sampling,
as measured by the number of queries and documents required, is reasonably low,
and that query-based sampling is robust with respect to variations in parameter
settings.
Finally, and perhaps most importantly, the experiments described in this paper
demonstrate that a fairly small partial description of a resource can be as effective
for distributed search as a complete description of that resource. This result
suggests that much of the information exchanged by cooperative protocols is unnecessary,
and that communications costs could be reduced significantly without
affecting results.
The demonstrated effectiveness of partial resource descriptions also raises questions
about which terms are necessary for describing text collections. Query-based
sampling identifies terms across a wide frequency range, but it necessarily favors
the frequent, non-stopword terms in a database. Luhn suggested that terms in the
middle of the frequency range would be best for describing documents [24]. It is an
open question whether terms in the middle of the frequency range would be best
for describing collections, too.
Several other open questions remain, among them whether the number of documents
in a database can be estimated with query-based sampling. We have shown
that this information may not be required for database selection, but it is nonetheless
desirable information. It is also an open question how many documents must
be sampled from a resource to obtain a description of a desired accuracy, although
300-500 documents appears to be very effective across a range of database sizes.
The work reported here can be extended in several directions, to provide a more
complete environment for searching and browsing among many databases. For ex-
ample, the documents obtained by query-based sampling could be used to provide
query expansion for database selection, or to drive a summarization or visualization
interface showing the range of information available in a multi-database environ-
ment. More generally, the ability to construct a single database that acts as a
surrogate for a large set of databases offers many possibilities for interesting research.
ACKNOWLEDGMENTS
We thank Aiqun Du for her work in the early stages of the research reported here.
We also thank the reviewers for their many helpful suggestions, and a reviewer for
the SIGIR conference for suggesting the experiments in Section 4.5.4.
This material is based on work supported in part by the Library of Congress
and Department of Commerce under cooperative agreement number EEC-9209623,
and in part by NSF grants IIS-9873009, EIA-9983253, and EIA-9983215. Any
opinions, findings, conclusions or recommendations expressed in this material are
the authors', and do not necessarily reflect those of the sponsors.
--R
Comparing the performance of database selection algorithms.
Evaluating database selection techniques: A testbed and experiment.
A decision-theoretic approach to database selection in networked IR
STARTS Stanford proposal for Internet meta-searching
Generalizing GLOSS to vector-space databases and broker hierarchies
The effectiveness of GLOSS for the text database discovery problem.
Precision and recall of GLOSS estimators for database discovery.
The Second Text REtrieval Conference TREC2
Methods for information server selection.
Information Retrieval: Computational and Theoretical Aspects.
Word Sense Disambiguation for Large Text Databases.
Collection selection and results merging with topically organized U.S. patents and TREC data.
Measures in collection ranking evaluation.
The automatic creation of literature abstracts.
An experimental comparison of the effectiveness of computers and humans as search intermediaries.
Determining text databases to search in the Internet.
Estimating the usefulness of search engines.
Facts from figures.
National Institutes of Health
National Information Standards Organization.
The impact of database selection on distributed searching.
Numerical recipes in C: The art of scientific computing.
Inference Networks for Document Retrieval.
Evaluation of an inference network-based retrieval model
Dissemination of collection wide information in a distributed Information Retrieval system.
Learning collection fusion strategies.
Multiple search engines in database merging.
Effective retrieval of distributed collections.
Search and ranking algorithms for locating resources on the World Wide Web.
Server ranking for distributedtext retrieval systems on the Internet.
Human Behavior and the Principle of Least Effort: An Introduction to Human Ecology.
--TR
Evaluation of an inference network-based retrieval model
Inference networks for document retrieval
Numerical recipes in C (2nd ed.)
The effectiveness of GIOSS for the text database discovery problem
TREC and TIPSTER experiments with INQUERY
Dissemination of collection wide information in a distributed information retrieval system
Searching distributed collections with inference networks
Learning collection fusion strategies
HyPursuit
Word sense disambiguation for large text databases
A probabilistic model for distributed information retrieval
Multiple search engines in database merging
Effective retrieval with distributed collections
Evaluating database selection techniques
Methods for information server selection
Automatic discovery of language models for text databases
Comparing the performance of database selection algorithms
Cluster-based language models for distributed retrieval
A decision-theoretic approach to database selection in networked IR
Server selection on the World Wide Web
The impact of database selection on distributed searching
Collection selection and results merging with topically organized U.S. patents and TREC data
Precision and recall of GlOSS estimators for database discovery
Information Retrieval
Search and Ranking Algorithms for Locating Resources on the World Wide Web
Determining Text Databases to Search in the Internet
Generalizing GlOSS to Vector-Space Databases and Broker Hierarchies
Server Ranking for Distributed Text Retrieval Systems on the Internet
Estimating the Usefulness of Search Engines
--CTR
Leif Azzopardi , Mark Baillie , Fabio Crestani, Adaptive query-based sampling for distributed IR, Proceedings of the 29th annual international ACM SIGIR conference on Research and development in information retrieval, August 06-11, 2006, Seattle, Washington, USA
Henrik Nottelmann , Norbert Fuhr, Evaluating different methods of estimating retrieval quality for resource selection, Proceedings of the 26th annual international ACM SIGIR conference on Research and development in informaion retrieval, July 28-August 01, 2003, Toronto, Canada
Panagiotis G. Ipeirotis , Luis Gravano, When one sample is not improving text database selection using shrinkage, Proceedings of the 2004 ACM SIGMOD international conference on Management of data, June 13-18, 2004, Paris, France
W. Bruce Croft , Jamie Callan, Collaborative research - digital government: a language modeling approach to metadata for cross-database linkage and search, Proceedings of the 2004 annual national conference on Digital government research, p.1-2, May 24-26, 2004, Seattle, WA
Y. L. Hedley , M. Younas , A. James , M. Sanderson, Query-related data extraction of hidden web documents, Proceedings of the 27th annual international ACM SIGIR conference on Research and development in information retrieval, July 25-29, 2004, Sheffield, United Kingdom
Jared Cope , Nick Craswell , David Hawking, Automated discovery of search interfaces on the web, Proceedings of the fourteenth Australasian database conference, p.181-189, February 01, 2003, Adelaide, Australia
Ronak Desai , Qi Yang , Zonghuan Wu , Weiyi Meng , Clement Yu, Identifying redundant search engines in a very large scale metasearch engine context, Proceedings of the eighth ACM international workshop on Web information and data management, November 10-10, 2006, Arlington, Virginia, USA
Mark Baillie , Leif Azzopardi , Fabio Crestani, An evaluation of resource description quality measures, Proceedings of the 2006 ACM symposium on Applied computing, April 23-27, 2006, Dijon, France
Henrik Nottelmann , Norbert Fuhr, From Retrieval Status Values to Probabilities of Relevance for Advanced IR Applications, Information Retrieval, v.6 n.3-4, p.363-388, September-December
Y. L. Hedley , M. Younas , A. James , M. Sanderson, A two-phase sampling technique for information extraction from hidden web databases, Proceedings of the 6th annual ACM international workshop on Web information and data management, November 12-13, 2004, Washington DC, USA
Panagiotis G. Ipeirotis , Luis Gravano, Distributed search over the hidden web: hierarchical database sampling and selection, Proceedings of the 28th international conference on Very Large Data Bases, p.394-405, August 20-23, 2002, Hong Kong, China
Mark Baillie , Leif Azzopardi , Fabio Crestani, Towards better measures: evaluation of estimated resource description quality for distributed IR, Proceedings of the 1st international conference on Scalable information systems, p.41-es, May 30-June 01, 2006, Hong Kong
James Caverlee , Ling Liu , Joonsoo Bae, Distributed query sampling: a quality-conscious approach, Proceedings of the 29th annual international ACM SIGIR conference on Research and development in information retrieval, August 06-11, 2006, Seattle, Washington, USA
Jamie Callan , Fabio Crestani , Henrik Nottelmann , Pietro Pala , Xiao Mang Shou, Resource selection and data fusion in multimedia distributed digital libraries, Proceedings of the 26th annual international ACM SIGIR conference on Research and development in informaion retrieval, July 28-August 01, 2003, Toronto, Canada
Demet Aksoy, Information source selection for resource constrained environments, ACM SIGMOD Record, v.34 n.4, p.15-20, December 2005
Leif Azzopardi , Maarten de Rijke, Automatic construction of known-item finding test beds, Proceedings of the 29th annual international ACM SIGIR conference on Research and development in information retrieval, August 06-11, 2006, Seattle, Washington, USA
Stefano Berretti , Alberto Del Bimbo , Pietro Pala, Merging Results for Distributed Content Based Image Retrieval, Multimedia Tools and Applications, v.24 n.3, p.215-232, December 2004
Jie Lu , Jamie Callan, Pruning long documents for distributed information retrieval, Proceedings of the eleventh international conference on Information and knowledge management, November 04-09, 2002, McLean, Virginia, USA
James Caverlee , Ling Liu , Daniel Rocco, Discovering and ranking web services with BASIL: a personalized approach with biased focus, Proceedings of the 2nd international conference on Service oriented computing, November 15-19, 2004, New York, NY, USA
semisupervised learning method to merge search engine results, ACM Transactions on Information Systems (TOIS), v.21 n.4, p.457-491, October
Jack G. Conrad , Xi S. Guo , Cindy P. Schriber, Online duplicate document detection: signature reliability in a dynamic retrieval environment, Proceedings of the twelfth international conference on Information and knowledge management, November 03-08, 2003, New Orleans, LA, USA
Luo Si , Jamie Callan, Using sampled data and regression to merge search engine results, Proceedings of the 25th annual international ACM SIGIR conference on Research and development in information retrieval, August 11-15, 2002, Tampere, Finland
Panagiotis G. Ipeirotis , Tom Barry , Luis Gravano, Extending SDARTS: extracting metadata from web databases and interfacing with the open archives initiative, Proceedings of the 2nd ACM/IEEE-CS joint conference on Digital libraries, July 14-18, 2002, Portland, Oregon, USA
Henrik Nottelmann , Norbert Fuhr, Evaluating different methods of estimating retrieval quality for resource selection, Proceedings of the 26th annual international ACM SIGIR conference on Research and development in informaion retrieval, July 28-August 01, 2003, Toronto, Canada
Jack G. Conrad , Xi S. Guo , Peter Jackson , Monem Meziou, Database selection using actual physical and acquired logical collection resources in a massive domain-specific operational environment, Proceedings of the 28th international conference on Very Large Data Bases, p.71-82, August 20-23, 2002, Hong Kong, China
Milad Shokouhi , Justin Zobel , Saied Tahaghoghi , Falk Scholer, Using query logs to establish vocabularies in distributed information retrieval, Information Processing and Management: an International Journal, v.43 n.1, p.169-180, January 2007
Henrik Nottelmann , Gudrun Fischer, Search and browse services for heterogeneous collections with the peer-to-peer network Pepper, Information Processing and Management: an International Journal, v.43 n.3, p.624-642, May, 2007
Bei Yu , Guoliang Li , Karen Sollins , Anthony K. H. Tung, Effective keyword-based selection of relational databases, Proceedings of the 2007 ACM SIGMOD international conference on Management of data, June 11-14, 2007, Beijing, China
Luo Si , Rong Jin , Jamie Callan , Paul Ogilvie, A language modeling framework for resource selection and results merging, Proceedings of the eleventh international conference on Information and knowledge management, November 04-09, 2002, McLean, Virginia, USA
M. Elena Renda , Umberto Straccia, Automatic structured query transformation over distributed digital libraries, Proceedings of the 2006 ACM symposium on Applied computing, April 23-27, 2006, Dijon, France
Yih-Ling Hedley , Muhammad Younas , Anne James , Mark Sanderson, Sampling, information extraction and summarisation of hidden web databases, Data & Knowledge Engineering, v.59 n.2, p.213-230, November 2006
Paul Ogilvie , Jamie Callan, The effectiveness of query expansion for distributed information retrieval, Proceedings of the tenth international conference on Information and knowledge management, October 05-10, 2001, Atlanta, Georgia, USA
Luis Gravano , Panagiotis G. Ipeirotis , Mehran Sahami, QProber: A system for automatic classification of hidden-Web databases, ACM Transactions on Information Systems (TOIS), v.21 n.1, p.1-41, January
Milad Shokouhi , Justin Zobel , Falk Scholer , S. M. M. Tahaghoghi, Capturing collection size for distributed non-cooperative retrieval, Proceedings of the 29th annual international ACM SIGIR conference on Research and development in information retrieval, August 06-11, 2006, Seattle, Washington, USA
Yang , Minjie Zhang, Two-stage statistical language models for text database selection, Information Retrieval, v.9 n.1, p.5-31, January 2006
Fabio Simeoni , Murat Yakici , Steve Neely , Fabio Crestani, Metadata harvesting for content-based distributed information retrieval, Journal of the American Society for Information Science and Technology, v.59 n.1, p.12-24, January 2008
Jack G. Conrad , Joanne R. S. Claussen, Early user---system interaction for database selection in massive domain-specific online environments, ACM Transactions on Information Systems (TOIS), v.21 n.1, p.94-131, January
Jack G. Conrad , Joanne R. S. Claussen, Client-system collaboration for legal corpus selection in an online production environment, Proceedings of the 9th international conference on Artificial intelligence and law, June 24-28, 2003, Scotland, United Kingdom
Henri Avancini , Leonardo Candela , Umberto Straccia, Recommenders in a personalized, collaborative digital library environment, Journal of Intelligent Information Systems, v.28 n.3, p.253-283, June 2007
Panagiotis G. Ipeirotis , Eugene Agichtein , Pranay Jain , Luis Gravano, To search or to crawl?: towards a query optimizer for text-centric tasks, Proceedings of the 2006 ACM SIGMOD international conference on Management of data, June 27-29, 2006, Chicago, IL, USA
Milad Shokouhi , Justin Zobel , Yaniv Bernstein, Distributed text retrieval from overlapping collections, Proceedings of the eighteenth conference on Australasian database, p.141-150, January 30-February 02, 2007, Ballarat, Victoria, Australia
John Gerdes, Jr., EDGAR-analyzer: automating the analysis of corporate data contained in the SEC's EDGAR database, Decision Support Systems, v.35 n.1, p.7-29, 01 April
Brian F. Cooper, Guiding queries to information sources with InfoBeacons, Proceedings of the 5th ACM/IFIP/USENIX international conference on Middleware, October 18-22, 2004, Toronto, Canada
Andrei Broder , Marcus Fontura , Vanja Josifovski , Ravi Kumar , Rajeev Motwani , Shubha Nabar , Rina Panigrahy , Andrew Tomkins , Ying Xu, Estimating corpus size via queries, Proceedings of the 15th ACM international conference on Information and knowledge management, November 06-11, 2006, Arlington, Virginia, USA
Alexandros Ntoulas , Petros Zerfos , Junghoo Cho, Downloading textual hidden web content through keyword queries, Proceedings of the 5th ACM/IEEE-CS joint conference on Digital libraries, June 07-11, 2005, Denver, CO, USA
Massimo Melucci, On rank correlation in information retrieval evaluation, ACM SIGIR Forum, v.41 n.1, p.18-33, June 2007 | resource selection;resource ranking;server selection;distributed information retrieval;query-based sampling |
383081 | Extending equation-based congestion control to multicast applications. | In this paper we introduce TFMCC, an equation-based multicast congestion control mechanism that extends the TCP-friendly TFRC protocol from the unicast to the multicast domain. The key challenges in the design of TFMCC lie in scalable round-trip time measurements, appropriate feedback suppression, and in ensuring that feedback delays in the control loop do not adversely affect fairness towards competing flows. A major contribution is the feedback mechanism, the key component of end-to-end multicast congestion control schemes. We improve upon the well-known approach of using exponentially weighted random timers by biasing feedback in favor of low-rate receivers while still preventing a response implosion. We evaluate the design using simulation, and demonstrate that TFMCC is both TCP-friendly and scales well to multicast groups with thousands of receivers. We also investigate TFMCC's weaknesses and scaling limits to provide guidance as to application domains for which it is well suited. | Introduction
It is widely accepted that one of several factors inhibiting the
usage of IP multicast is the lack of good, deployable, well-tested
multicast congestion control mechanisms. To quote
[10]:
The success of the Internet relies on the fact that best-effort
traffic responds to congestion on a link by reducing
the load presented to the network. Congestion
collapse in today's Internet is prevented only by the
congestion control mechanisms in TCP.
We believe that for multicast to be successful, it is crucial
that multicast congestion control mechanisms be deployed
that can co-exist with TCP in the FIFO queues of the current
Internet.
The precise requirements for multicast congestion control are
perhaps open to discussion given the efficiency savings of
multicast, but we take the conservative position that a multi-cast
flow is acceptable if it achieves no greater medium-term
throughput to any receiver in the multicast group than would
be achieved by a TCP flow between the multicast sender and
that receiver.
Such a requirement can be satisfied either by a single multicast
group if the sender transmits at a rate dictated by the
slowest receiver in the group, or by a layered multicast scheme
that allows different receivers to receive different numbers of
layers at different rates. Much work has been done on the latter
class [12, 18, 4], but the jury is still out on whether any of
these mechanisms can be made safe to deploy.
This paper describes TCP-Friendly Multicast Congestion Control
(TFMCC), which belongs to the class of single rate congestion
control schemes. Such schemes inevitably do not
scale as well as layered schemes. However, they are much
simpler, match the requirements of some applications well,
and we will demonstrate that they can scale to applications
with many thousands of receivers. These schemes also suffer
from degradation in the face of badly broken links to a few
receivers - how to deal with such situations is a policy deci-
sion, but we expect that most applications using a single-rate
scheme will have application-specific thresholds below which
a receiver is compelled to leave the multicast group.
TFMCC is not the only single-rate multicast congestion control
scheme available. In particular, Pragmatic General Multi-cast
Congestion Control (PGMCC) [17] is also a viable solution
with some nice properties and a certain elegant simplic-
ity. However, TFMCC and PGMCC differ considerably in
the smoothness and predictability of their transmission. We
will argue that both are appropriate solutions, and that some
applications are better suited to one than the other.
1.1 TFMCC and TFRC
The TCP-friendly Rate Control protocol (TFRC) [5] is a unicast
congestion control mechanism intended for applications
that require a smoother, more predictable transmission rate
than TCP can achieve. TFMCC extends the basic mechanisms
of TFRC into the multicast domain.
TFRC is an equation-based congestion control scheme. It
uses a control equation derived from a model of TCP's long-term
throughput to directly control the sender's transmission
rate. Basically TFRC functions as follows:
1. The receiver measures the packet loss rate and feeds
this information back to the sender.
2. The sender uses the feedback messages to measure the
round-trip time to the receiver.
3. The sender uses the control equation to derive an acceptable
transmission rate from the measured loss rate
and round-trip time (RTT).
4. The sender's transmission rate is then adjusted directly
to match the calculated transmission rate.
For full details of TFRC, we refer the reader to [5].
TFMCC follows a very similar design for multicast congestion
control. The primary differences are that it is the receivers
that measure their RTT to the sender and perform the
calculation of the acceptable rate. This rate is then fed back to
the sender, the challenge being to do this in a manner which
ensures that feedback from the receiver with the lowest calculated
rate reaches the sender whilst avoiding feedback im-
plosions. Moreover, we need to make sure that any additional
delay imposed to avoid feedback implosion does not
adversely affect the fairness towards competing protocols.
2 The TFMCC Protocol
Building an equation-based multicast congestion control mechanism
requires that the following problems be solved:
A control equation must be chosen that defines the target
throughput in terms of measurable parameters, in
this case loss event rate and RTT.
Each receiver must measure the loss event rate. Thus a
filter for the packet loss history needs to be chosen that
is a good stable measure of the current network condi-
tions, but is sufficiently responsive when those conditions
change.
Each receiver must measure or estimate the RTT to the
sender. Devising a way to do this without causing excessive
network traffic is a key challenge.
Each receiver uses the control equation to calculate an
acceptable sending rate from the sender to itself.
A feedback scheme must be so devised that feedback
from the receiver calculating the slowest transmission
rate always reaches the sender, but feedback implosions
do not occur when network conditions change.
A filtering algorithm needs to be devised for the sender
to determine which feedback it should take into account
as it adjusts the transmission rate.
Clearly, all these parts are closely coupled. For example,
altering the feedback suppression mechanisms will impact
how the sender deals with this feedback. Many of our design
choices are heavily influenced by TFRC, as these mechanisms
are fairly well understood and tested. In this paper
we will expend most of our efforts focusing on those parts of
TFMCC that differ from TFRC.
2.1 Determining an Acceptable Sending Rate
The control equation used by TFRC and TFMCC is derived
from a model for long-term TCP throughput in bytes/sec [15]:
T_TCP = \frac{s}{t_RTT \left( \sqrt{2p/3} + 12 \sqrt{3p/8} \; p \, (1 + 32 p^2) \right)}    (1)
The expected throughput T_TCP of a TCP flow is calculated
as a function of the steady-state loss event rate p, the round-trip
time t_RTT, and the packet size s. Each TFMCC receiver
measures its own loss event rate and estimates its RTT to the
sender. It then uses Equation (1) to calculate T_TCP, which
is an estimate of the throughput a TCP flow would achieve
on the network path to that receiver under the same network
conditions. If the sender does not exceed this rate for any
receiver then it should be TCP-friendly, in that it does not
affect a TCP flow through the same bottlenecks more than
another TCP flow would do.
In the following section we will elaborate on how the necessary
parameters for the model are computed and how to deal
with potentially large receiver sets.
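As an illustration of how a receiver applies the control equation, the following Python sketch computes the TCP-friendly target rate from a measured loss event rate, RTT, and packet size. It is our own illustrative code, not part of the TFMCC implementation; the function and variable names are ours.

from math import sqrt

def tcp_friendly_rate(s, p, t_rtt):
    """Equation (1): expected long-term TCP throughput in bytes/sec.
    s: packet size in bytes, p: loss event rate, t_rtt: round-trip time in seconds."""
    denom = t_rtt * (sqrt(2.0 * p / 3.0) +
                     12.0 * sqrt(3.0 * p / 8.0) * p * (1.0 + 32.0 * p * p))
    return s / denom

# Example: 1000-byte packets, 1% loss event rate, 100 ms RTT
print(tcp_friendly_rate(1000, 0.01, 0.1))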
2.2 Adjusting the Sending Rate
The sender will continuously receive feedback from the re-
ceivers. If a receiver sends feedback that indicates a rate that
is lower than the sender's current rate, the sender will immediately
reduce its rate to that in the feedback message.
In order to eliminate a large number of unnecessary messages,
receivers will not send feedback unless their calculated rate is
less than the current sending rate. However, this leaves us
with a problem - how do we increase the transmission rate?
We cannot afford to increase the transmission rate in the absence
of feedback, as the feedback path from the slowest receiver
may be congested or lossy. As a solution we introduce
the concept of the current limiting receiver (CLR). The CLR
is the receiver that the sender believes currently has the lowest
expected throughput of the group. 1 The CLR is permitted to
1 In this respect, the CLR is comparable to the representative used in congestion
control schemes such as PGMCC.
send immediate feedback without any form of suppression, so
the sender can use the CLR's feedback to increase the transmission
rate.
The CLR will change if another receiver sends feedback indicating
that a lower transmission rate is required. It will also
change if the CLR leaves the multicast group - this is normally
signaled by the CLR, but an additional timeout mechanism
serves as a backup in case the CLR crashes or becomes
unreachable.
Normally the way loss measurement is performed limits the
possible rate increase to roughly 0.3 packets per RTT , as
shown in [5]. However, if the CLR leaves the group, the new
CLR may have a significantly higher calculated rate. We cannot
afford to increase directly to this rate, as the loss rate currently
measured may not be a predictor of the loss rate at the
new transmission rate. Instead we then impose a rate increase
limit of one packet per RTT , which is the same as TCP's additive
increase constant, so that the rate gradually increases to
the new CLR's rate.
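The sender-side rate adjustment described in this section can be summarized in the following sketch. It is simplified illustrative code with hypothetical names; the immediate reduction on lower feedback, the role of the CLR, and the one-packet-per-RTT cap after a CLR change follow the text, while everything else is elided.

class TfmccSenderSketch:
    def __init__(self, initial_rate, packet_size):
        self.rate = initial_rate      # current sending rate in bytes/sec
        self.packet_size = packet_size
        self.clr_id = None            # current limiting receiver

    def on_feedback(self, receiver_id, reported_rate):
        if receiver_id == self.clr_id:
            # The CLR may report a higher rate; the loss measurement itself
            # limits this increase to roughly 0.3 packets per RTT.
            self.rate = reported_rate
        elif reported_rate < self.rate:
            # A lower-rate receiver becomes the new CLR; reduce immediately.
            self.clr_id = receiver_id
            self.rate = reported_rate

    def on_clr_change(self, new_clr_id, new_clr_rate, rtt):
        # Called once per RTT after the old CLR left the group: approach the
        # new CLR's rate by at most one packet per RTT (TCP's additive increase).
        self.clr_id = new_clr_id
        self.rate = min(new_clr_rate, self.rate + self.packet_size / rtt)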
2.3 Measuring the Loss Event Rate
The loss event rate can only be scalably measured at the re-
ceivers. The measurement mechanism closely matches that
used for TFRC. A receiver aggregates the packet losses into
loss events, defined as one or more packets lost during a round-trip
time. The number of packets between consecutive loss
events is called a loss interval. The average loss interval size
can be computed as the weighted average of the m most recent
loss intervals l_i, i = 1, ..., m:
l_avg = \frac{\sum_{i=1}^{m} w_i l_i}{\sum_{i=1}^{m} w_i}
The weights w_i are chosen so that very recent loss intervals
receive the same high weights, while the weights gradually
decrease to 0 for older loss intervals. For example, with eight
weights we might use {5, 5, 5, 5, 4, 3, 2, 1}. This allows
for smooth changes in l_avg as loss events age. While large
values for m improve the smoothness of the estimate, a very
long loss history also reduces the responsiveness and thus the
fairness of the protocol. Values around 8 to 32 appear to be a
good compromise.
The loss event rate p used as an input for the TCP model is
defined as the inverse of l_avg. The interval l_0 since the most
recent loss event does not end with a loss event and thus may
not reflect the loss event rate. This interval is included in the
calculation of the loss event rate if doing so reduces p:
p = \frac{1}{\max\left( l_avg(l_1, \ldots, l_m), \; l_avg(l_0, \ldots, l_{m-1}) \right)}
For a more thorough discussion of this loss measurement mechanism
see [5].
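A sketch of this loss measurement, using the example weights given above, might look as follows (illustrative Python, not the TFRC/TFMCC reference code).

def average_loss_interval(intervals, weights=(5, 5, 5, 5, 4, 3, 2, 1)):
    """Weighted average of the most recent loss intervals.
    intervals[0] is the most recent interval in the history."""
    m = min(len(intervals), len(weights))
    return sum(w * l for w, l in zip(weights[:m], intervals[:m])) / sum(weights[:m])

def loss_event_rate(closed_intervals, open_interval):
    """p = 1 / l_avg; the still-open interval since the last loss event
    is only taken into account if doing so reduces p."""
    l_avg = average_loss_interval(closed_intervals)
    l_avg_shifted = average_loss_interval([open_interval] + closed_intervals)
    return 1.0 / max(l_avg, l_avg_shifted)

# Example: eight closed loss intervals plus a long still-open interval
print(loss_event_rate([100, 120, 80, 150, 90, 110, 95, 130], 200))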
2.4 Round-trip Time Measurements
A key challenge of TFMCC is for each receiver to be able to
measure its RTT to the sender without causing excessive traffic
at the sender. In practice the problem is primarily one of
getting an initial RTT measurement as, with the use of time-stamps
in the data packets, a receiver can see changes in the
delay of the forward path simply from the packet's arrival
time. We will discuss this further in Section 2.4.3.
2.4.1 RTT Estimate Initialization
Ideally we would like a receiver to be able to initialize its
RTT measurement without having to exchange any feedback
packets with the sender. This is possible if the sender and
receiver have synchronized clocks, which might be achieved
using GPS receivers. Less accurately, it can also be done
using clocks synchronized with NTP [13].
In either case, the data packets are timestamped by the sender,
and the receiver can then compute the one-way delay. The
RTT is estimated to be twice the one-way delay d_{S→R}. In the
case of NTP, the errors that accumulate between the stratum-
1 server and the local host must be taken into account. An
NTP server knows the RTT and dispersion to the stratum-1
server to which it is synchronized. The sum of these gives the
worst-case synchronization error ε. To be conservative, this error
is added to the one-way delay before doubling it:
t_RTT = 2 (d_{S→R} + ε)
In practice NTP provides an average timer accuracy of 20-30
ms [13], and in most cases this gives us an estimate of RTT
that is accurate at least to the nearest 100 ms. Although not
perfect, this is still useful as a first estimate.
In many cases though, no reliable form of clock synchronization
is available. Each receiver must then initialize its RTT
estimate to a value that should be larger than the highest RTT
of any of the receivers. We assume that for most networks a
value of 500 ms is appropriate [1]. This initial value is used
until a real measurement can be made. In Appendix A we reason
why it is safe to also use this value to aggregate losses to
loss events, where a low RTT value would be the conservative
option.
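A minimal sketch of this initialization logic, assuming a single combined synchronization error term (the names and the simplified error handling are ours):

DEFAULT_INITIAL_RTT = 0.5   # seconds, used when no clock synchronization exists

def initial_rtt(one_way_delay=None, sync_error=0.0):
    """Initial RTT estimate for a receiver.
    one_way_delay: sender-to-receiver delay from synchronized clocks, or None.
    sync_error: worst-case clock synchronization error (e.g. NTP RTT plus
    dispersion to the stratum-1 server)."""
    if one_way_delay is None:
        return DEFAULT_INITIAL_RTT
    # Be conservative: inflate the one-way delay by the synchronization error.
    return 2.0 * (one_way_delay + sync_error)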
2.4.2 RTT Measurement
A receiver gets to measure the instantaneous RTT t_RTT^inst by
sending timestamped feedback to the sender, which then echoes
the timestamp and receiver ID in the header of a data packet.
If more feedback messages arrive than data packets are sent,
we prioritize the sender's report echoes in the following order
1. a receiver whose report causes it to be selected as the
new CLR
2. receivers that have not yet measured their RTT
3. non-CLR receivers with previous RTT measurements
4. the existing CLR.
Ties are broken in favor of the receiver with the lowest reported
rate. Normally the number of data packets is larger
than the number of feedback packets, so the CLR's last report
is echoed in any remaining data packets. 2
To prevent a single spurious RTT value from having an excessive
effect on the sending rate we smooth the values using
an exponentially weighted moving average (EWMA):
t_RTT = β · t_RTT^inst + (1 - β) · t_RTT
For the CLR we set β = 0.05. Given that other receivers
will not get very frequent RTT measurements and thus old
measurements are likely to be outdated, a higher value of
β is used for them.
2.4.3 One-way Delay RTT Adjustments
Due to the infrequent RTT measurements, it would also be
possible for large increases in RTT to go unnoticed if the receiver
is not the CLR. To avoid this we adjust the RTT estimate
between actual measurements. Since data packets carry
a send timestamp t_data, a receiver that gets an RTT measurement
at time t_now can also compute the one-way delay from
sender to receiver (including clock skew) as
d_{S→R} = t_now - t_data
and the one-way delay from receiver to sender as
d_{R→S} = t_RTT^inst - d_{S→R}
Due to clock skew, these values are not directly meaningful,
but d_{R→S} can be used to modify the RTT estimate between
real RTT measurements. When in a later data packet the one-way
delay from sender to receiver is determined as d'_{S→R}, it
is possible to compute an up-to-date RTT estimate
t_RTT^inst = d'_{S→R} + d_{R→S}
Clock skew between sender and receiver cancels out, provided
that clock drift between real RTT measurements is neg-
ligible. The modified RTT estimates are smoothed with an
EWMA just like normal RTT measurements, albeit with a
smaller decay factor for the EWMA since the one-way delay
adjustments are possible with each new data packet. One-way
delay adjustments are used as an indicator that the RTT may
have changed significantly and thus a real RTT measurement
is necessary. If the receiver is then selected as CLR, it measures
its RTT with the next packet and all interim one-way
delay adjustments are discarded. For this reason it proved to
be unnecessary to filter out flawed one-way delay estimates.
2 To be able to infer an accurate RTT from the timestamps it is necessary to
also take into account the offset between receipt of a timestamp and echoing
it back.
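The RTT smoothing and the one-way-delay adjustment between real measurements could be implemented roughly as below. This is a sketch with names of our choosing; the CLR value β = 0.05 follows the text, whereas the non-CLR weight and the smaller weight used for the per-packet adjustments are assumptions.

class ReceiverRttEstimator:
    def __init__(self, initial_rtt=0.5, is_clr=False):
        self.rtt = initial_rtt
        self.beta = 0.05 if is_clr else 0.5   # non-CLR weight is an assumption
        self.d_r_to_s = None                  # receiver-to-sender one-way delay

    def on_rtt_measurement(self, t_now, t_data, rtt_inst):
        """Called when a data packet echoes this receiver's feedback timestamp."""
        d_s_to_r = t_now - t_data             # includes clock skew
        self.d_r_to_s = rtt_inst - d_s_to_r
        self.rtt = self.beta * rtt_inst + (1.0 - self.beta) * self.rtt

    def on_data_packet(self, t_now, t_data, weight=0.05):
        """One-way delay adjustment between real RTT measurements
        (smaller weight, since this can happen with every data packet)."""
        if self.d_r_to_s is None:
            return self.rtt
        rtt_inst = (t_now - t_data) + self.d_r_to_s   # clock skew cancels out
        self.rtt = weight * rtt_inst + (1.0 - weight) * self.rtt
        return self.rtt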
2.4.4 Sender-side RTT Measurements
While a preconfigured initial RTT value can be used at the
receiver for loss aggregation and rate computation, it should
not be used to set the sending rate. Using a high initial RTT
would result in a very low sending rate, followed by a high
sending rate when the CLR gets the first RTT measurement,
then a CLR change to a receiver with no previous RTT mea-
surement, and so on. Such rate oscillations should be avoided.
On the other hand, if the sender only accepted a receiver with
a valid RTT as CLR, receivers with a very high loss rate might
never receive their feedback echo, and so never become CLR.
For these reasons, TFMCC supports additional sender-based
RTT measurements. A receiver report also echoes the time-stamp
of the last data packet, and so the sender and receivers
are both able to measure RTT. The sender only computes the
RTT when it has to react to a receiver report without a valid
RTT, and it uses this to adjust the calculated rate in the receiver
report.
2.5 Receiver Feedback
As TFMCC is designed to be used with receiver sets of perhaps
several thousand receivers, it is critical to ensure that
the sender gets feedback from the receivers experiencing the
worst network conditions without being overwhelmed by feed-back
from all the other receivers. Congestion may occur at
any point in the distribution tree, from the sender's access
link through to a single receiver's tail circuit. Thus any mechanism
must be able to cope when conditions change from a
single receiver being lightly congested to all the receivers being
equally heavily congested, and other similarly pathological
cases. At the same time we would like the feedback delay
to be relatively small in the steady state. The latter can be
achieved through the concept of a CLR, which can send feed-back
immediately.
However, a CLR is of no help during a change in network
conditions that affect receivers other than the CLR. Thus, we
will ignore the influence of the CLR on the feedback process
in this section, but we note that the CLR generates relatively
little feedback traffic and both strictly improves the responsiveness
to congestion and reduces the amount of feedback
sent by other receivers.
Various reliable multicast protocols incorporate feedback trees,
where the receivers are organized into a tree hierarchy, and internal
nodes in the tree aggregate feedback. Such trees largely
solve the feedback implosion problem, but are difficult to
build and maintain. If such a tree exists it should clearly be
used, but in this paper we will assume that is not the case, and
examine pure end-to-end suppression mechanisms.
Several mechanisms using randomized timers for feedback
suppression in multicast protocols have been proposed before
[6, 7, 9, 14]. Time is divided into feedback rounds, which
are either implicitly or explicitly indicated to the receivers.
At the start of each feedback round, each receiver sets a randomized
timer. If the receiver hears feedback from another
receiver that makes it unnecessary for it to send its own feed-
back, it cancels its timer. Otherwise when the timer expires,
a feedback message is sent.
For TFMCC, we use such a mechanism based on exponentially
distributed random timers. When the feedback timer
expires, the receiver unicasts its current calculated sending
rate to the sender. If this rate is lower than previous feedback
received, the sender echoes the feedback to all receivers. With
respect to the intended application of finding the correct CLR,
we improve upon the original concept by biasing feedback in
favor of low-rate receivers. The dynamics of such a mechanism
depend both on the way that the timers are initialized,
and on how one receiver's feedback suppresses another's.
2.5.1 Randomized Timer Values
The basic exponentially distributed random timer mechanism
initializes a feedback timer to expire after t seconds, with
t = T (1 + log_N x)
where
x is a uniformly distributed random variable in (0, 1],
T is an upper limit on the delay before sending feedback,
N is an estimated upper bound on the number of receivers.
T is set to a multiple of the maximum RTT of the receivers;
T = b · t_RTT^max. The choice of b determines the number of feedback
packets per round that will be sent in worst-case conditions
and the feedback delay under normal conditions. In
Section 2.5.4 we show that for our purpose useful values for
b lie between 3 and 6. We use a default value of 4.
The mechanism is relatively insensitive to overestimation of
the receiver set size N , but underestimation may result in a
feedback implosion. Thus, a sufficiently large value for N
should be chosen. In our simulations we use
which seems reasonable given our scaling goals.
Whilst this basic algorithm is sufficient to prevent a feedback
implosion, it does not ensure that receivers with low expected
rates will be more likely to respond than receivers with high
rates. Even if a receiver can only respond when its rate is
less than the current sending rate, this does not ensure that
the lowest-rate receiver will respond quickly when congestion
worsens rapidly. 3 Thus the sender would be insufficiently
responsive to increased congestion.
To avoid this problem, we bias the feedback timers in favor
of receivers with lower rates, while still allowing sufficient
3 In fact, receivers with lower RTTs are incorrectly favored since they
receive the feedback request earlier.
randomization to avoid implosion when all the receivers calculate
the same low rate. Since a receiver knows the sending
rate but not the calculated rate of other receivers, a good measure
of the importance of its feedback is the ratio r of the
calculated rate to the current sending rate. 4 There are several
ways to use r to bias the timers:
Modify N: reduce the upper bound on the receiver set.
Offset: subtract an offset value from the feedback time.
Modify x: reduce the random value x.
All three alternatives cause low-rate receivers to report earlier
but they differ with respect to the degree of biasing they
cause and the circumstances under which a feedback implosion
might be possible.
When modifying N , its value should never be reduced to less
than the actual number of receivers n, since receivers send
an immediate response with a probability of 1/N. In case
n > N, the number of feedback responses increases linearly
in relation to n. If N is known to be too large and it is thus
possible to safely reduce N , it makes sense to always use the
reduced N for the feedback suppression instead of using it
for the biasing.
Using an offset decreases the time for all congested receivers
to respond, but the probability of a very short timer value is
not greatly increased and so suppression still works.
A parameter δ determines the fraction of T that should be used to spread
out the feedback responses with respect to the reported rate.
Care has to be taken to ensure that (1 - δ)T is sufficiently
large to prevent a feedback implosion.
Since T (1 + log_N(r·x)) = T (1 + log_N x) + T log_N r, the third
case is similar to the second case with a different offset value.
Also here it is important to bound the impact of r on the feedback
time.
Figure 1 shows how the cumulative distribution function (CDF)
of the feedback time changes from the original CDF when biasing
the feedback. A decrease in N corresponds to shifting
the CDF up, thus increasing the probability of early responses
that cannot be suppressed. In contrast, using a fraction of T
as an offset reduces the time over which the responses are
spread out, assuming the worst case of all receivers reporting
an optimal value.
Thus, in TFMCC the feedback timers are biased in favor of
low-rate receivers with an offset as in Equation 3. To clarify
how this method affects the feedback time, the time-value
distribution of the receiver set without biasing and with timers
biased with an offset is depicted in Figure 2. Suppressed
Feedback is marked with a dot, feedback received at the sender
4 Note that 0 < r < 1 since only receivers with lower rates than the
current rate send reports.
Figure 1: Different feedback biasing methods (cumulative probability vs. feedback time in RTTs, for exponential, offset, and modified-N timers)
is marked with a cross and the best value of the feedback received
is marked with a square. Note that a uniform distribution
of the feedback value r as was used for the graph is
unlikely to occur in reality and is used here only for the purpose
of demonstrating the properties of feedback biasing.
With the offset method, the time interval available for suppression
is smaller than with unbiased feedback if the original
worst case delay is to be maintained. As a consequence,
the number of feedback messages is higher when biasing the
feedback timers. However, through the biasing, early feed-back
messages and thus also the best feedback value received
are closer to optimal.
Figure 2: Time-value distribution (feedback value vs. feedback time, for offset-biased and normal timers)
We can further optimize the offset method by truncating the
range of r to likely values, and normalizing the resulting interval
to [0,1]. In the implementation, instead of r, we use
r' = min(1, max(0, (r - 0.5) / (0.9 - 0.5)))
The effect of this is to start biasing feedback only when a re-
ceiver's rate is less than 90% of the sender's rate (this doesn't
significantly affect fairness), and to saturate the bias if the re-
ceiver's rate is 50% of the sender's rate (since receivers with
even lower rates will take several rounds for their loss measures
to change anyway).
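The following sketch shows one way to realize the biased feedback timer described above. The exponential timer t = T(1 + log_N x) and the truncated ratio r' follow the text; the particular way the offset is applied here (subtracting δ·T·(1 - r') from the timer) is our reading of the offset method, not a formula quoted from the text, and the default value for N is only a placeholder.

import math
import random

def feedback_timer(t_max_rtt, rate_calc, rate_send, n_upper=10000, b=4, delta=0.25):
    """Biased exponential feedback timer (illustrative sketch).
    t_max_rtt: estimated maximum RTT in the group,
    rate_calc: the receiver's calculated rate, rate_send: the sender's rate."""
    T = b * t_max_rtt
    x = random.uniform(1e-12, 1.0)                       # x in (0, 1]
    base = max(0.0, T * (1.0 + math.log(x) / math.log(n_upper)))

    # Truncate r = rate_calc / rate_send to [0.5, 0.9] and rescale to [0, 1].
    r = rate_calc / rate_send
    r_prime = min(1.0, max(0.0, (r - 0.5) / 0.4))

    # Lower-rate receivers (small r_prime) get a larger offset and thus
    # tend to report earlier; the bias never exceeds delta * T.
    return max(0.0, base - delta * T * (1.0 - r_prime))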
2.5.2 Canceling Feedback
When a receiver sees echoed feedback from another receiver,
it must decide whether or not to cancel its feedback timer.
One possibility is to rely completely on the feedback timer
bias, and cancel the timer on receipt of the first feedback for
this round. Another possibility is to cancel the timer only if
the echoed feedback indicates a rate lower than the rate the receiver
wanted to report. The latter guarantees that the receiver
with the lowest rate will always get to send its feedback, but
the former results in significantly less feedback traffic in the
worst case.
A spectrum lies between these two extremes: if the receiver's
calculated rate is R_calc and the rate from the echoed feedback
is R_fb, then the timer is canceled if (1 - α) R_fb ≤ R_calc < R_fb.
The former method discussed above corresponds to α = 1
and the latter to α = 0. As we change α from zero to one, we
reduce the chance of hearing from the absolute lowest-rate receiver,
but also reduce the increase in the number of feedback
messages. As shown in [19], the expected number of feedback
messages increases logarithmically with n for α = 0.
For values of 0 < α < 1, this number becomes approximately
constant in the limit for large n.
Figure 3: Different feedback cancellation methods (number of responses vs. number of receivers n; curves: all suppressed, 10% lower suppressed, higher suppressed)
These results are corroborated by the simulations depicted in
Figure 3. The graph shows the number of feedback messages
in the first round of the worst-case scenario, where n
receivers (except the CLR) suddenly experience congestion.
The effects of α being 0.0, 0.1, and 1.0 are shown. Values
of α around 0.1 result in the desired behavior of only a
marginally higher number of feedback messages, while the
resulting transient transmission rate is no worse than 10%
higher than it should be.
The improvement in sent feedback values caused by the biasing
in combination with the above feedback cancellation
method results in a significant improvement of the characteristics
of the feedback process over normal exponential feed-back
timers.
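The cancellation rule with parameter α can be written compactly (our own sketch, following the condition given above):

def should_cancel_timer(rate_calc, rate_echoed, alpha=0.1):
    """Decide whether echoed feedback suppresses this receiver's pending report.
    alpha = 1: cancel on any echoed feedback (maximum suppression).
    alpha = 0: cancel only if the echoed rate is at most our own calculated rate."""
    return rate_calc >= (1.0 - alpha) * rate_echoed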
2.5.3 Feedback at Low Sending Rates
At very low sending rates and high loss rates (which usually
go together), it is still possible to get a feedback implo-
sion. The feedback echo from the sender that suppresses other
feedback is sent with the next data packet. Thus, when the delay
before the next data packet is sent is close to the feedback
delay, it will arrive too late for suppression to work.
This problem can be prevented by increasing the feedback delay
T in proportion to the time interval between data packets
when the sending rate R_send is low:
T = max( b · t_RTT^max , c · s / R_send )
with c being the number of consecutive data packets that can be
lost without running the risk of implosion, and s the packet
size. We recommend using values of c between 2 and 4.
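A sketch of the resulting feedback delay, combining the normal delay T = b·t_RTT^max with the packet-interval bound from the text (the max form is our reading of how the two are combined):

def feedback_delay(t_max_rtt, packet_size, rate_send, b=4, c=3):
    """Feedback delay T, enlarged at low sending rates so that the suppressing
    echo is not lost if a few consecutive data packets are lost.
    c: consecutive data packets that may be lost without risking implosion."""
    return max(b * t_max_rtt, c * packet_size / rate_send)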
2.5.4 Expected Number of Feedback Messages, Feedback
Delay, and Feedback Quality
The expected number of duplicate feedback messages E[f]
for exponential feedback suppression is analyzed in [7]; it depends on
n, the actual number of receivers,
the network delay between a receiver and the sender (the feedback channels are unicast), and
T', the maximum feedback delay used for suppression.
Assuming the worst case in which every receiver applies the maximum offset, T' = (1 - δ)T.
Whilst our primary concern is to avoid implosion, a very low
number of responses (say 1 or 2) is also undesirable. Some
additional responses greatly increase the probability of having
not merely a low-rate but the lowest-rate receiver respond, and
also provide RTT measurements to a larger number of receivers.
Figure 4 shows a plot of E[f] for different values of T' and
n. Values of T' in the range of roughly 3
to 4 RTTs result in the desired number of feedback messages,
particularly in the common range for n of one to two orders of
magnitude below N. For this reason, the values chosen for
δ and T in the TFMCC implementation are 1/4 and 4 · t_RTT^max
respectively. Given those choices for δ and T, we now examine
how well the feedback biasing methods achieve the additional
goal of low response time and how close the reported rate is
to that of the true lowest-rate receiver.
Figure 5 compares the feedback delay for unbiased exponential
timers with the basic offset bias and the modified offset
Figure 4: Expected number of feedback messages (as a function of the number of receivers and T' in RTTs)
that uses r' instead of r. All three show the logarithmic decrease
in response time with the number of receivers typical
for feedback suppression based on exponential timers. The
difference between the methods is not great, with the modified
offset algorithm having a slight edge over the regular
offset.
When examining the rates that are reported in the feedback
messages, the advantage of the offset methods becomes ap-
parent. Figure 6 compares the lowest reported rate of the
feedback messages of a single feedback round to the actual
lowest rate of the receiver set. For example, a value of 0.1 indicates
that the lowest reported rate is on average 10% higher
after one feedback round than it should be in the ideal case.
Rates reported with the offset methods are considerably closer
to the real minimum than those reported with unmodified exponential
timers. Particularly when r is adjusted appropriately
by the modified offset method, feedback will normally
be within a few percent of the minimum rate. Plain exponential
feedback shows average deviations of nearly 20% above
the minimum rate.
Figure 5: Comparison of methods to bias feedback (response time in RTTs vs. number of receivers n, for unbiased exponential, basic offset, and modified offset timers)
2.6 Slowstart
TFMCC uses a slowstart mechanism to more quickly approach
its fair bandwidth share at the start of a session. During
Figure 6: Comparison of methods to bias feedback (quality of the reported rate vs. number of receivers n, for unbiased exponential, basic offset, and modified offset timers)
slowstart, the sending rate increases exponentially, whereas
normal congestion control allows only a linear increase. An
exponential increase can easily lead to heavy congestion, so
great care has to be taken to design a safe increase mecha-
nism. A simple measure to this end is to limit the increase to
a multiple d of the minimum rate R_recv^min received by any of the
receivers. Since a receiver can never receive at a rate higher
than its link bandwidth, this effectively limits the overshoot to
d times that bandwidth. The target sending rate is calculated
as
R_send^target = d · R_recv^min
and the current sending rate is gradually adjusted to the target
rate over the course of a RTT. In our implementation we use
a value of d = 2. Slowstart is terminated as soon as any one
of the receivers experiences its first packet loss.
It is necessary to use a different feedback bias for slowstart
since receivers cannot calculate a TCP-friendly rate. For this
reason we use a bias with the following property:
a report from the receiver that experiences the first loss event
can only be suppressed by other reports also indicating packet
loss, but not by reports from receivers that did not yet experience
loss. Thus, slowstart will be terminated no later than
one feedback delay after the loss was detected.
In practice, TFMCC will seldom reach the theoretical maximum
of a doubling of the sending rate per RTT, for two reasons:
The target sending rate is increased only when feedback
from a new feedback round is received. Thus,
doubling is not possible every RTT, but every feedback
delay, which is usually much larger than a RTT.
Measuring the receive rate over several RTTs and gradually
increasing R_send to R_send^target gives a minimum receive
rate at the end of a feedback interval that is lower
than the sending rate during that interval. Thus, setting
R_send^target to twice the minimum receive rate does
not double the current sending rate.
As is desirable for a multicast protocol, TFMCC slowstart behaves
more conservatively than comparable unicast slowstart
mechanisms.
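The slowstart target-rate computation can be sketched as follows (illustrative code; d = 2 follows the text, while the gradual adjustment toward the target over one RTT is collapsed into a single step per feedback round).

def slowstart_step(current_rate, min_receive_rate, loss_reported, d=2):
    """One slowstart update; returns (new_sending_rate, still_in_slowstart)."""
    if loss_reported:
        # Slowstart terminates with the first loss experienced by any receiver.
        return current_rate, False
    # Limit the increase to d times the lowest receive rate in the group.
    return max(current_rate, d * min_receive_rate), True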
3 Protocol Behavior with Very
Large Receiver Sets
The loss path multiplicity problem is a well-known characteristic
of multicast congestion control mechanisms that react
to single loss indications from receivers on different network
paths. It prevents the scaling of those mechanisms to large
receiver sets. In [3], the authors propose as a possible solution
tracking the most congested path and taking only loss
indications from that path into account. Since the reports of a
TFMCC receiver contain the expected rate based on the loss
event rate and RTT on the single path from sender to that re-
ceiver, the protocol implicitly avoids the loss path multiplicity
problem. Yet TFMCC (and all other single-rate congestion
control schemes) may be confined to a rate below the fair
rate if, rather than there being a single most congested path,
there is a path that changes over time. The faster a multicast
congestion control protocol responds to transient congestion,
the more pronounced is the effect of tracking the minimum
of stochastic variations in the calculated rate at the different
receivers. For example, if loss to several receivers independently
varies fairly quickly between 0% and 10% with the
average being 5%, a congestion control protocol may always
track the worst receiver, giving a loss estimate that is twice
what it should be.
A worst-case scenario in this respect is a high number of
receivers with independent loss and a calculated rate in the
range of the lowest-rate receiver. If n receivers experience
independent packet loss with the same loss probability, the
loss intervals will have an exponential distribution. The expected
value of the minimum of n exponentially distributed
random variables is proportional to 1/n. Thus, if TFMCC
based its rate calculations on a single loss interval, the average
sending rate would scale proportionally to 1/sqrt(n) (in the
case of moderate loss rates, otherwise even worse). The rate
calculation in TFMCC is based on a weighted average of m
loss intervals. Since the average of exponentially distributed
random variables is gamma distributed, the expected loss rate
in TFMCC is inversely proportional to the expected value for
the minimum of n gamma distributed random variables. 5
This effect is shown in Figure 7 for different numbers of receivers
n with a constant loss probability. For uncorrelated
loss at a rate of 10% and a RTT of 50 ms, the fair rate for
the TFMCC transmission is around 300 KBit/s. This sending
rate is reached when the receiver set consists of only a single
5 For first order statistics of the gamma distribution, no simple closed form
expressions exists. Details about the distribution of the minimum of gamma
distributed random variables can be found in [8].
Figure 7: Scaling (throughput in KBit/s vs. number of receivers, for a constant loss rate and for a distribution of loss rates)
receiver but it quickly drops to a value of only a fraction of
the fair rate for larger n. For example, for 10,000 receivers,
only 1/6 of the fair rate is achieved.
Fortunately, such a loss distribution is extremely unlikely in
real networks. Multicast data is transmitted along the paths of
the distribution tree of the underlying multicast routing pro-
tocol. A lossy link high up in the tree may affect a large
number of receivers but the losses are correlated and so the
above effect does not occur. When some of those receivers
have additional lossy links, the loss rates are no longer cor-
related, rather the values are spread out over a larger inter-
val, thus decreasing the number of receivers with similar loss
rates. To demonstrate this effect, we choose a distribution of
loss rates that is closer to actual loss distributions in multicast
trees in that there are only a limited number of high-loss
receivers while the majority of receivers have moderate
loss rates. 6 Here, a small number of receivers (proportional
to a · log(n), where a is a constant) is in the high loss range of
5-10%, some more are in the range of 2%-5%, and the vast
majority have loss rates between 0.5% and 2%. Under such
network conditions the throughput degradation with 10,000
receivers is merely 30%. Thus, the throughput degradation
plays a significant role only when the vast majority of packet
loss occurs on the last hop to the receivers and those losses
amount to the same loss rates.
It is impossible to distinguish between a "stochastic" decrease
in the sending rate and a "real" decrease caused by an increased
congestion level (otherwise it would be possible to
estimate the effect and adjust the sending rate accordingly).
The degradation effect can be alleviated by increasing the
number of loss intervals used for the loss history, albeit at
the expense of less responsiveness.
6 By no means do we claim that the chosen distribution exactly reflects
network conditions in multicast distribution trees.
4 Simulations
We implemented TFMCC in the ns2 network simulator [2]
to investigate its behavior under controlled conditions. In
this paper, we can only report a small fraction of the simulations
that were carried out. In all simulations below, drop-tail
queues were used at the routers to ensure acceptable behavior
in the current Internet. Generally, both fairness towards TCP
and intra-protocol fairness improve when active queuing (e.g.
RED) is used instead.
4.1 Fairness
Fairness towards competing TCP flows was analyzed using
the well-known single-bottleneck topology (Figure 8), where
a number of sending nodes are connected to as many receiving
nodes through a common bottleneck. Figure 9 shows the
[Figure 8: Topology; senders and receivers connected through a common bottleneck link.]
throughput of a TFMCC flow and two sample TCP flows (out
of 15) from a typical example of such simulations. The average
throughput of TFMCC closely matches the average TCP
throughput but TFMCC achieves a smoother rate. Similar results
can be obtained for many other combinations of flows.
In general, the higher the level of statistical multiplexing, the
better the fairness among competing flows. Only in scenarios
where the number of TFMCC flows greatly exceeds the
number of TCP flows is TFMCC more aggressive than TCP.
The reason for this lies in the spacing of the data packets and
buffer requirements: TFMCC spaces out data packets, while
TCP sends them back-to-back if it can send multiple packets,
making TCP more sensitive to the nearly-full queues typical of
drop-tail queue management.
If instead of one bottleneck the topology has separate bottlenecks
on the last hops to the receivers, then we observe
the throughput degradation predicted in Section 3. When the
scenario above is modified such that TFMCC competes with
single TCP flows on sixteen identical 1 MBit/s tail circuits,
then TFMCC achieves only 70% of TCP's throughput (see
Figure 10).
[Figure 9: One TFMCC flow and 15 TCP flows over a single bottleneck; throughput (KBit/s) over time.]
[Figure 10: One TFMCC flow and 16 TCP flows over individual bottlenecks; throughput (KBit/s) over time.]
4.2 Responsiveness to Changes in the Loss Rate
An important concern in the design of congestion control protocols
is their responsiveness to changes in network condi-
tions. Furthermore, when receivers join and leave the session
it is important that TFMCC react sufficiently fast should a
change of CLR be required. This behavior is investigated using
a star topology with four links having a RTT of 60 ms and
loss rates of 0.1%, 0.5%, 2.5%, and 12.5% respectively. At
the beginning of the simulation the receiver set consists only
of the receiver with the lowest loss rate. Other receivers join
the session after 100 seconds at 50 second intervals in the
order of their loss rates (lower-loss-rate receivers join first).
After 250 seconds, receivers leave the transmission in reverse
order, again with 50 second intervals in between. To verify
that TFMCC throughput is similar to TCP throughput, an
additional TCP connection to each receiver is set up for the
duration of the whole experiment.
As show in Figure 11, TFMCC matches closely the TCP
throughput at all four loss levels. Adaption of the sending
rate when a new higher-loss receiver joins is fast. The receiver
needs 500-1000 ms after the join to get enough packets
to compute a meaningful loss rate. The major part of the
delay is caused by the exponential timer for the feedback sup-
pression, which increases the overall delay before a new CLR
is chosen to roughly one to three seconds. 7 The experiment
demonstrates TFMCC's very good reactivity to changes in
congestion level.
[Figure 11: Responsiveness to changes in the loss rate; throughput (MBit/s) over time.]
7 Note that this high delay is caused by the use of the initial RTT in the
feedback suppression mechanism. Once all receivers have a valid RTT esti-
mate, the delay caused by feedback suppression is much shorter.
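The exponential feedback timers mentioned above can be sketched as follows. This is a generic form of a biased, exponentially weighted random timer; the constants and the extra bias for low-rate receivers are illustrative assumptions, not TFMCC's exact parameterization:

    # With N expected receivers, pick x uniformly from [1/N, 1] and set the
    # timer to T_max * (1 + log_N(x)): only a few receivers fire early, and
    # the rest suppress their reports when they see an earlier one.
    import math
    import random

    def feedback_timer(t_max, n_receivers, rate_bias=0.0):
        x = random.uniform(1.0 / n_receivers, 1.0)
        t = t_max * (1.0 + math.log(x, n_receivers))
        # Receivers that calculate a lower rate may subtract a small bias so
        # that the lowest-rate receiver tends to answer first (assumption).
        return max(0.0, t - rate_bias)

    timers = sorted(feedback_timer(t_max=4.0, n_receivers=10000) for _ in range(10000))
    print("earliest feedback after %.3f s, median %.3f s" % (timers[0], timers[len(timers) // 2]))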
The delay before TFMCC assumes that a rate-limiting receiver
left the group and the sending rate can be increased
is configurable. Currently, an absence of feedback from the
CLR for longer than the feedback delay is used as an indication that
this receiver left the group. In case explicit leave messages
are used with the TFMCC protocol the delay can be reduced
to one RTT.
The same simulation setting can be used to investigate responsiveness
to changes in the RTT. The results (not shown
here) are similar to those above, since all four receivers have
measured their RTT by the time the RTT changes, and the
one-way RTT adjustments immediately indicate this change.
With larger receiver sets, the amount of time that expires until
a high RTT receiver is found may be greater. This effect is
investigated in the next section.
4.3 Initial RTT Measurements and Responsiveness
to Changes in the RTT
The number of receivers that measure their RTT each feed-back
round depends on the number of feedback messages and
thus on the parameters used for feedback suppression. Figure
12 shows how the number of receivers with a valid RTT
estimate evolves over time for a large receiver set and a high
initial RTT value. The link RTTs for the 1000 receivers vary
between ms and 140 ms and the initial RTT value is set to
500 ms. A single bottleneck is used to produce highly correlated
loss for all receivers. This is the worst case, since if
loss estimates at the receivers vary, it is often unnecessary to
measure the RTT to the low-loss receivers. Since the calculated
rate of the receivers still using the initial RTT is below
the current sending rate, at least one receiver will get its first
RTT measurement per feedback round until all receivers have
measured their RTT.
At the beginning of the simulation, the number of receivers
obtaining initial RTT measurements is close to the expected
number of feedback messages per feedback round.
[Figure 12: Rate of initial RTT measurements; number of receivers with a valid RTT estimate over time.]
Over time,
as more and more receivers have a valid RTT, the number
of receivers that want to give feedback decreases, and the
rate of initial RTT measurements gradually drops to one new
measurement per feedback round. While a delay of 200 seconds
until 700 of the 1000 receivers have measured their RTT
seems rather large, one should keep in mind that this results
from having the same congestion level for all receivers. If
some receivers experience higher loss rates, those receivers
will measure their RTT first and TFMCC can adapt to their
calculated rate. Under most real network conditions it will
not be necessary to measure the RTT to all receivers.
In scenarios with 40, 200 and 1000 receivers respectively,
we investigate how long it takes until a high RTT receiver
is found among receivers with a low RTT when all receiver
experience independent loss with the same loss probability.
The x-axis of the graph in Figure 13 denotes the point of time
when the RTT is increased during the experiment and the y-axis
shows the amount of time after which this change in RTT
is reacted upon by choosing the correct CLR. The later the
increase in RTT, the greater the number of receivers already
having valid RTT estimates, and the expected time until the
high-RTT receiver is selected as CLR decreases.
[Figure 13: Responsiveness to changes in the RTT; delay until reaction vs. time of the change, for 40, 200 and 1000 receivers.]
4.4 Slowstart
The highest sending rate achieved during slowstart is largely
determined by the level of statistical multiplexing. On an
otherwise empty link, TFMCC will reach roughly twice the
bottleneck bandwidth before leaving slowstart, as depicted in
Figure
14. When TFMCC competes with a single TCP flow,
slowstart is terminated at a rate below the fair rate 8 of the
TFMCC flow and this rate is relatively independent of the
number of TFMCC receivers. Already in the case of two
competing TCP flows, and even more so when the level of
statistical multiplexing is higher, the slowstart rate decreases
considerably when the number of receivers increases. Most
of the increase to the fair rate takes place after slowstart in
normal congestion control mode.
[Figure 14: Maximum slowstart rate (KBit/s) vs. number of receivers, for TFMCC alone, one competing TCP flow, and high statistical multiplexing, together with the fair rate.]
We do not include an extra graph of the exact increase behavior
of TFMCC compared to TCP, since it can be seen for
example in Figures 15 and 16. TFMCC and TCP are started
at the same time. TCP's increase to the fair rate is very rapid,
while it takes TFMCC roughly 20 seconds to reach that level
of bandwidth.
4.5 Late-join of Low-rate Receiver
In the previous experiments we investigated congestion control
with moderate loss rates, expected to be prevalent in the
applications domains for which TFMCC is well suited. Under
some circumstances, the loss rate at a receiver can initially
be much higher. Consider an example where TFMCC
operates at a fair rate of several MBit/s and a receiver with a
low-bandwidth connection joins. Immediately after joining,
this receiver may experience loss rates close to 100%. While
such conditions are difficult to avoid, TFMCC should ensure
that they exist only for a limited amount of time and quickly
choose the new receiver as CLR.
The initial setup for this simulation is an eight-member TFMCC
session competing with seven TCP connections on an 8 MBit/s
link, giving a fair rate of 1 MBit/s. During the simulation, a
8 The fair rate for TFMCC in all three simulations is 1 MBit/s.
new receiver joins the session behind a separate 200 KBit/s
bottleneck from the sender from time 50 to 100 seconds.
TFMCC does not have any problems coping with this sce-
nario, choosing the joining receiver as CLR within a very few
seconds. Although the loss rate for the joining receiver is initially
very high, the TFMCC rate does not drop to zero. As
soon as the buffer of the 200 KBit/s connection is full, the
receiver experiences the first loss event and the loss history
is initialized. Details about the loss history initialization process
can be found in Appendix B. When the first loss occurs,
the receiver gets data at a rate of exactly the bottleneck band-
width. Thus, the loss rate will be initialized to a value below
the 80% value and from there adapt to the appropriate loss
event rate such that the available bandwidth of 200 KBit/s is
used.
When an additional TCP flow is set up using the 200 KBit/s
link for the duration of the experiment, this flow inevitably
experiences a timeout when the new receiver joins the multicast
group and the link is flooded with packets. However,
shortly afterwards, TFMCC adapts to the available capacity
and TCP recovers with bandwidth shared fairly between
TFMCC and TCP.
We conclude that TFMCC shows good performance and fair-
ness, even under unfavorable network conditions.
[Figure 15: Late-join of a low-rate receiver; throughput (KBit/s) over time for the TFMCC flow and the aggregated TCP flows.]
[Figure 16: Additional TCP flow on the slow link; throughput (KBit/s) over time for the TFMCC flow, the aggregated TCP flows, and the TCP flow on the 200 KBit/s link.]
5 Related Work
To date, a number of single-rate multicast congestion control
schemes have been proposed. A prominent recent example is
PGMCC [17]. It selects the receiver with the worst network
conditions as a group representative, called the acker. The
selection process for the acker mainly determines the fairness
of the protocol, and is based on a simplified version of the
TCP throughput model in Equation (4). Similar to TFMCC,
each receiver tracks the RTT and the smoothed loss rate, and
feeds these values into the model. The results are communicated
to the sender using normal randomized feedback timers
to avoid an implosion. If available, PGMCC also makes use
of network elements to aggregate feedback.
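For reference, the sketch below evaluates the widely used TCP throughput model (the Padhye et al. formulation underlying TFRC-style protocols). Whether this matches the exact simplification PGMCC uses as Equation (4) is not shown here, so treat the constants as assumptions:

    # TCP-friendly rate from the standard TCP throughput model: packet size s
    # (bytes), round-trip time rtt (s), loss event rate p, retransmission
    # timeout t_rto (s); returns bytes per second.
    from math import sqrt

    def tcp_friendly_rate(s, rtt, p, t_rto=None):
        if t_rto is None:
            t_rto = 4 * rtt                  # common simplification
        denom = rtt * sqrt(2 * p / 3) + t_rto * (3 * sqrt(3 * p / 8)) * p * (1 + 32 * p ** 2)
        return s / denom

    # Example: 1000-byte packets, 50 ms RTT, 10% loss gives roughly the
    # 300 KBit/s figure quoted in Section 3 (order of magnitude only).
    rate = tcp_friendly_rate(s=1000, rtt=0.05, p=0.10)
    print(f"{rate * 8 / 1000:.0f} KBit/s")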
Once an acker is selected, a TCP-style window-based congestion
control algorithm is run between the sender and the
acker. Minor modifications compared to TCP concern the
separation of congestion control and reliability to be able to
use PGMCC for reliable as well as unreliable data transport
and the handling of out of order packets and RTT changes
when a different receiver is selected as the acker.
As evidenced by the simulations in [17], PGMCC competes
fairly with TCP for many different network conditions. The
basic congestion control mechanism is simple and its dynamics
are well understood from the analysis of TCP congestion
control. This close mimicking of TCP's window behavior
produces rate variations that resemble TCP's sawtooth-
like rate. This makes PGMCC suited for applications that
can cope with larger variations in the sending rate. In con-
trast, the rate produced by TFMCC is generally smoother
and more predictable, making TFMCC well suited to applications
with more constraints on acceptable rate changes. Since
the acker selection process is critical for PGMCC's perfor-
mance, PGMCC might benefit from using a feedback mechanism
similar to that of TFMCC, based on biased exponentially
weighted timers. To summarize, we believe that both
PGMCC and TFMCC present viable solutions for single-rate
multicast congestion control, targeted at somewhat different
application domains.
While PGMCC relies on a congestion window, TCP-Emulation
at Receivers (TEAR) [16] is a combination of window- and
rate-based congestion control. It features a TCP-like window
emulation algorithm at the receivers, but the window is not
used to directly control transmission. Instead, the average
window size is calculated and transformed into a smoothed
sending rate, which is used by the sender to space out data
packets. So far, only a unicast version of TEAR exists, but
the mechanism can be made multicast-capable by implementing
a TFMCC-like scalable feedback suppression scheme to
communicate the calculated rate to the sender as well as scalable
RTT measurements. The advantage of TEAR lies in
the fact that it does not require a model of TCP with all the
necessary assumptions to compute a rate. However, for low
levels of statistical multiplexing, TEAR's emulation assumptions
about the independence of loss timing from transmit rate
and of timeout emulation mean that it shares many of the limitations
of the TCP models we use. Thus we do not expect a
multicast variant of TEAR to behave significantly better or
worse than TFMCC.
6 Conclusions
We have described TFMCC, a single-rate multicast congestion
control mechanism intended to scale to groups of several
thousand receivers. Performing multicast congestion control
whilst remaining TCP-friendly is difficult, in particular because
TCP's transmission rate depends on the RTT, and measuring
RTT in a scalable manner is a hard problem. Given the
limitations of end-to-end protocols, we believe that TFMCC
represents a significant improvement over previous work in
this area.
We have extensively evaluated TFMCC through analysis and
simulation, and believe we have a good understanding of its
behavior in a wide range of network conditions. To summa-
rize, we believe that under the sort of conditions TFMCC
will experience in the real-world it will behave rather well.
However we have also examined certain pathological cases;
in these cases the failure mode is for TFMCC to achieve a
slower than desired transmission rate. Given that all protocols
have bounds to their good behavior, this is the failure
mode we would desire, as it ensures the safety of the Internet.
An important part of any research is to identify the limitations
of a new design. TFMCC's main weakness is in the
startup phase - it can take a long time for sufficiently many
receivers to measure their RTT (assuming we cannot use NTP
to provide approximate default values). In addition, with
large receiver sets, TCP-style slowstart is not really an appropriate
mechanism, and a linear increase can take some time to
reach the correct operating point. However these weaknesses
are not specific to TFMCC - any safe single-rate multicast
congestion control mechanism will have these same limitations
if it is TCP-compatible. The implication is therefore
that single-rate multicast congestion control mechanisms like
TFMCC are only really well-suited to relatively long-lived
data streams. Fortunately it also appears that most current
multicast applications such as stock-price tickers or video
streaming involve just such long-lived data-streams.
6.1 Future Work
We plan to pursue this work further on several fronts. While
large-scale multicast experiments are hard to perform in the
real world, we plan to deploy TFMCC in a multicast filesystem
synchronization application (e.g. rdist) to gain small-scale
experience with a real application.
Some reliable multicast protocols build an application-level
tree for acknowledgment aggregation. We have devised a
hybrid rate/window-based variant of TFMCC that uses implicit
RTT measurement combined with suppression within
the aggregation nodes. This variant does not need to perform
explicit RTT measurements or end-to-end feedback suppres-
sion. Whilst at first glance this would seem to be a big improvement
over the variant in this paper, in truth it moves
the complex initialization problem from RTT measurement
to scalable ack-tree construction, which shares many of the
problems posed by RTT measurement. Still, this seems to be
a promising additional line of research.
Finally, the basic equation-based rate controller in TFMCC
would also appear to be suitable for use in receiver-driven
layered multicast, especially if combined with dynamic layering
[4] to eliminate problems with unpredictable multicast
leave latency.
Acknowledgements
We would like to thank Sally Floyd and Luigi Rizzo for their
invaluable comments. We would also like to acknowledge
feedback and suggestions received from RMRG members on
earlier versions of TFMCC.
--R
A web server's view of the transport layer.
The loss path multiplicity problem in multicast congestion control.
A reliable multicast framework for light-weight sessions and application level framing.
On the scaling of feedback algorithms for very large multicast groups.
Order statistics from the gamma distribution.
Session directories and scalable Internet multicast address allocation.
RFC 2357: IETF criteria for evaluating reliable multicast transport and application protocols.
The macroscopic behavior of the congestion avoidance algorithm.
Internet timekeeping around the globe.
Scalable feedback for large groups.
Modeling TCP Reno performance: a simple model and its empirical validation.
TEAR: TCP emulation at receivers - flow control for multimedia streaming.
Extremum feedback for very large multicast groups.
Extending equation-based congestion control to multicast applications.
| feedback;TCP-friendliness;single-rate;congestion control;suppression;multicast |
383193 | Hash based parallel algorithms for mining association rules. | In this paper, we propose four parallel algorithms (NPA, SPA, HPA and HPA-ELD) for mining association rules on shared-nothing parallel machines to improve performance. In NPA, candidate itemsets are just copied amongst all the processors, which can lead to memory overflow for large transaction databases. The remaining three algorithms partition the candidate itemsets over the processors. If it is partitioned simply (SPA), transaction data has to be broadcast to all processors. HPA partitions the candidate itemsets using a hash function to eliminate broadcasting, which also reduces the comparison workload significantly. HPA-ELD fully utilizes the available memory space by detecting the extremely large itemsets and copying them, which is also very effective at flattening the load over the processors. We implemented these algorithms in a shared-nothing environment. Performance evaluations show that the best algorithm, HPA-ELD, attains good linearity on speedup ratio and is effective for handling skew. | Introduction
Recently, "Database Mining" has begun to attract
strong attention. Because of the progress of bar-code
technology, point-of-sale systems in retail companies
now generate large amounts of transaction data,
but such data is often merely archived and not used effi-
ciently. The advance of microprocessor and secondary
storage technologies allows us to analyze this vast
amount of transaction log data to extract interesting
customer behaviors. Database mining is the efficient
discovery of useful information, such as
rules and previously unknown patterns existing between
data items embedded in large databases, which
allows more effective utilization of existing data.
One of the most important problems in database
mining is mining association rules within a database
[1], the so-called "basket data analysis" problem. Basket
data typically consist of a transaction identifier
and the items bought per transaction. By analyzing
transaction data, we can extract the association
rule such as "90% of the customers who buy both A
and B also buy C".
Several algorithms have been proposed to solve
the above problem[1][2][3][4][5][6][7]. However most
of these are sequential algorithms. Finding association
rules requires scanning the transaction database
repeatedly. In order to improve the quality of the
rule, we have to handle very large amounts of transaction
data, which requires incredibly long computation
time. In general, it is difficult for a single processor
to provide reasonable response time. In [7], we examined
the feasibility of parallelization of association rule
mining 1 . In [6], a parallel algorithm called PDM, for
mining association rules was proposed. PDM copies
the candidate itemsets among all the processors. As
we will explain later, in the second pass of the Apriori
algorithm, introduced by R.Agrawal and R.Srikant[2],
the candidate itemset becomes too large to fit in the
local memory of a single processor. Thus it requires
reading the transaction dataset repeatedly from disk,
which results in significant performance degradation.
In this paper, we propose four different parallel algorithms
(NPA, SPA, HPA and HPA-ELD) for mining
association rules based on the Apriori algorithm. In
NPA (Non Partitioned Apriori), the candidate itemsets
are just copied among all the processors. PDM
mentioned above corresponds to NPA. The remaining
three algorithms partition the candidate itemsets
over the processors, thus exploiting the aggregate
memory of the system effectively. If it is partitioned
simply (SPA: Simply Partitioned Apriori), transaction
data has to be broadcast to all the processors.
HPA (Hash Partitioned Apriori) partitions the candidate
itemsets using a hash function as in the hash
join, which eliminates transaction data broadcasting
and can reduce the comparison workload significantly.
In case the size of the candidate itemsets is smaller than
the available system memory, HPA does not use the
remaining free space. However, HPA-ELD (HPA with
Extremely Large itemset Duplication) does utilize this
memory by copying some of the itemsets. The itemsets
are sorted based on their frequency of appearance.
HPA-ELD chooses the most frequently occurring itemsets
and copies them over the processors so that all
the memory space is used, which further reduces
the communication among the processors.
HPA-ELD, an extension of HPA, treats the frequently
occurring itemsets in a special way, which can reduce
the influence of the transaction data skew.
1 The paper was presented at a local workshop in Japan.
The implementation on a shared-nothing 64-node
parallel computer, the Fujitsu AP1000DDV, shows
that the best algorithm, HPA-ELD, attains satisfactory
linearity on speedup and is also effective at skew
handling.
This paper is organized as follows. In the next section,
we describe the problem of mining association rules.
In section 3, we propose four parallel algorithms. Performance
evaluations and detail cost analysis are given
in section 4. Section 5 concludes the paper.
2 Mining Association Rules
First we introduce some basic concepts of association
rules, using the formalism presented in [1]. Let I = {i1, i2, ..., im}
be a set of literals, called items. Let D
be a set of transactions, where
each transaction t is a set of items such that t ⊆ I. A
transaction has an associated unique identifier called
TID. We say a transaction t contains X, a set of some
items in I, if X ⊆ t. The itemset X has support s
in the transaction set D if s% of transactions in D
contain X; here we denote s = support(X). An
association rule is an implication of the form X ⇒ Y,
where X ⊂ I, Y ⊂ I and X ∩ Y = ∅. Each rule has
two measures of value, support and confidence. The
support of the rule X ⇒ Y is support(X ∪ Y). The
confidence c of the rule X ⇒ Y in the transaction
set D means that c% of the transactions in D that contain X
also contain Y, which can be written as the ratio
c = support(X ∪ Y)/support(X). The problem of mining
association rules is to find all the rules that satisfy a
user-specified minimum support and minimum confi-
dence, which can be decomposed into two subproblems
1. Find all itemsets that have support above the
user-specified minimum support. These itemset
are called the large itemsets.
2. For each large itemset, derive all rules that
have more than the user-specified minimum confidence
as follows: for a large itemset X and any
non-empty Y ⊂ X, if support(X)/support(X - Y) is at least the
minimum confidence, then the rule (X - Y) ⇒ Y
is derived.
For example, let T = { ..., 5} be the
transaction database. Let minimum support and
minimum confidence be 60% and 70%, respectively.
Then, the first step generates the large itemsets
{ ..., 3}. In the second step, an association
rule satisfying both thresholds is derived.
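A small sketch of the two measures defined above (the toy data is ours, not the paper's example):

    def support(itemset, transactions):
        """Fraction of transactions that contain every item of `itemset`."""
        itemset = set(itemset)
        return sum(itemset <= set(t) for t in transactions) / len(transactions)

    def confidence(x, y, transactions):
        """Confidence of the rule X => Y, i.e. support(X union Y) / support(X)."""
        return support(set(x) | set(y), transactions) / support(x, transactions)

    # With 60% minimum support and 70% minimum confidence, {1} => {3} qualifies here.
    T = [{1, 2, 3}, {1, 3, 5}, {2, 3, 4}, {1, 3, 4}, {2, 4, 5}]
    print(support({1, 3}, T), confidence({1}, {3}, T))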
After finding all large itemsets, association rules are
derived in a straightforward manner. This second sub-problem
is not a big issue. However because of the
large scale of transaction data sets used in database
mining, the first subproblem is a nontrivial problem.
Much of the research to date has focused on the first
subproblem.
Here we briefly explain the Apriori algorithm for
finding all large itemsets, proposed in [2], since the
parallel algorithms to be proposed by us in section
3 are based on this algorithm. Figure 1 gives an
overview of the algorithm, using the notation given
in
Table
1.
k-itemset   An itemset having k items.
Lk          Set of large k-itemsets, whose support
            is larger than the user-specified
            minimum support.
Ck          Set of candidate k-itemsets, which are
            potentially large itemsets.
Table 1: Notation
In the first pass (pass 1), support count for each
item is counted by scanning the transaction database.
Hereafter we prepare a field named support count for
each itemset, which is used to measure how many
times the itemset appeared in transactions. Since
itemset here contains just single item, each item has
a support count field. All the items which satisfy the
minimum support are picked out. These items are
called the large 1-itemsets (L1). Here a k-itemset is defined as
a set of k items. In the second pass (pass 2), the 2-
itemsets are generated using the large 1-itemsets; this set
is called the candidate 2-itemsets (C 2 ). Then the
support count of the candidate 2-itemsets is counted
by scanning the transaction database. Here support
count of the itemset means the number of transactions
which contain the itemset. At the end of scan-
    L1 := {large 1-itemsets}
    k := 2
    while (Lk-1 ≠ ∅) do
        Ck := The candidates of size k generated from Lk-1
        forall transactions t ∈ D do
            Increment the support count of all candidates in
            Ck that are contained in t
        Lk := All candidates in Ck which satisfy minimum
            support
        k := k + 1
    Answer := ∪k Lk
Figure
1: Apriori algorithm
ning the transaction data, the large 2-itemsets (L 2 )
which satisfy minimum support are determined. The
following denotes the k-th iteration, pass k.
1. Generate candidate itemset:
The candidate k-itemsets (Ck) are generated using
the large (k-1)-itemsets (Lk-1) which were determined
in the previous pass (see Section 2.1).
2. Count support :
The support count for the candidate k-itemsets
are counted by scanning the transaction database.
3. Determine large itemset:
The candidate k-itemsets are checked for whether
they satisfy the minimum support or not, the
large k-itemsets (L k ) which satisfy the minimum
support are determined.
4. The procedure terminates when the large itemset
becomes empty. Otherwise k := k + 1 and goto
"1". (A sketch of one such pass is given below.)
2.1 Apriori Candidate Generation
The procedure for generating candidate k-itemsets
using the large (k-1)-itemsets is as follows: Given the large (k-
1)-itemsets, we want to generate a superset of the set
of all large k-itemsets. Candidate generation occurs
in two steps. First, in the join step, join the large (k-1)-
itemsets with the large (k-1)-itemsets. Next, in the prune step,
delete all itemsets in the candidate k-itemsets
for which some (k-1)-subset of the candidate itemset
is not in the large (k-1)-itemsets.
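A minimal sketch of the join and prune steps described above (function names and the example are ours):

    from itertools import combinations

    def apriori_gen(large_prev):
        """Generate candidate k-itemsets from the large (k-1)-itemsets."""
        large_prev = set(large_prev)
        k = len(next(iter(large_prev))) + 1 if large_prev else 0
        candidates = set()
        # Join step: combine (k-1)-itemsets that differ in exactly one item.
        for a in large_prev:
            for b in large_prev:
                union = a | b
                if len(union) == k:
                    candidates.add(union)
        # Prune step: drop candidates with a (k-1)-subset that is not large.
        return {c for c in candidates
                if all(frozenset(s) in large_prev for s in combinations(c, k - 1))}

    # Example: L2 = {AB, AC, BC, BD} yields C3 = {ABC} after pruning.
    L2 = {frozenset(x) for x in [("A", "B"), ("A", "C"), ("B", "C"), ("B", "D")]}
    print(apriori_gen(L2))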
3 Parallel Algorithms
In this section, we describe four parallel algorithms
(NPA, SPA, HPA and HPA-ELD) for the first subproblem,
finding all large itemsets, which we hereafter call count
support processing, on shared-nothing parallel
machines.
3.1 Algorithm Design
In the sequential algorithm, the count support processing
requires the largest computation time, where
the transaction database is scanned repeatedly and a
large number of candidate itemsets are examined. We
designed a parallel algorithm for count support processing.
If each processor can hold all of the candidate item-
sets, parallelization is straightforward 2 . However for
large scale transaction data sets, this assumption does
not hold. Figure 2 shows the number of candidate
itemsets and the large itemsets in each pass. These
statistics were taken from the real point-of-sales data.
In Figure 2, the vertical axis is a log scale. The candidate
itemset of pass 2 is too large to fit within the local
memory of a single processor.
[Figure 2: Real point-of-sales data; memory usage of the candidate itemsets and the large itemsets in bytes for each pass (log scale).]
In NPA, the candidate
itemsets are just copied amongst all the processors. In
the case where all of the candidate itemsets do not fit
within the local memory of a single processor, the candidate
itemsets are partitioned into fragments, each of
which fits in the memory of a processor. Support
count processing then requires repetitively scanning the transaction
database. The remaining three algorithms, SPA,
HPA and HPA-ELD, partition the candidate itemsets
over the memory space of all the processors. Thus
SPA, HPA and HPA-ELD can exploit the total sys-
tem's memory effectively as the number of processors
increases. For simplicity, we assume that the size of
the candidate itemsets is larger than the size of local
memory of single processor but is smaller than the
sum of the memory of all the processors. It is easy
to extend this algorithm to handle candidate itemsets
whose size exceeds the sum of all the processors' memories.
2 We will later introduce an algorithm named NPA, where
the reason why the parallelization is so easy will be clarified.
3.2 Non Partitioned Apriori : NPA
In NPA, the candidate itemsets are copied over all
the processors, each processor can work independently
and the final statistics are gathered into a coordinator
processor where minimum support conditions are
examined. Figure 3 gives the behavior of pass k of
the p-th processor in NPA, using the notation given in
Table
2.
    C1 := all the items
    {C1^d} := Partition C1 into fragments each of which
        fits in a processor's local memory
    forall fragments C1^d do
        forall transactions t ∈ Dp do
            Increment the support count of all candidates in
            C1^d that are contained in t
        Send the support count of C1^d to the coordinator
        /* Coordinator determines L1^d, the candidates which satisfy the
           user-specified minimum support in C1^d, and
           broadcasts L1^d to all processors */
        Receive L1^d from the coordinator
    L1 := ∪d L1^d
    k := 2
    while (Lk-1 ≠ ∅) do
        Ck := The candidates of size k generated from Lk-1
        {Ck^d} := Partition Ck into fragments each of which fits
            in a processor's local memory
        forall fragments Ck^d do
            forall transactions t ∈ Dp do
                Increment the support count of all candidates in
                Ck^d that are contained in t
            Send the support count of Ck^d to the coordinator
            /* Coordinator determines Lk^d, the candidates which satisfy the
               user-specified minimum support in Ck^d, and
               broadcasts Lk^d to all processors */
            Receive Lk^d from the coordinator
        Lk := ∪d Lk^d
        k := k + 1
Figure
3: NPA algorithm
Each processor works as follows:
1. Generate the candidate itemsets:
Lk       Set of all the large k-itemsets.
Ck       Set of all the candidate k-itemsets.
|Ck|     The size of Ck in bytes.
M        The size of main memory in bytes.
Dp       Transactions stored in the local disk of the p-th processor.
{Ck^d}   Sets of fragments of candidate k-itemsets. Each fragment
         fits in the local memory of a processor.
|Ck^d|   The size of Ck^d in bytes.
Lk^d     Sets of large k-itemsets derived from Ck^d.
Table
2: Notation
Each processor generates the candidate k-itemsets
using the large (k-1)-itemsets, and inserts
them into the hash table.
2. Scan the transaction database and count the support
count value:
Each processor reads the transaction database
from its local disk, generates k-itemsets from the
transaction and searches the hash table. If a hit
occurs, increment its support count value.
3. Determine the large itemsets:
After reading all the transaction data, all processors'
support counts are gathered into the coordinator
and checked to determine whether the
minimum support condition is satisfied or not.
4. If large k-itemset is empty, the algorithm termi-
nates. Otherwise k := k + 1 and the coordinator
broadcasts large k-itemsets to all the processors
and goto "1".
If the size of all the candidate itemsets exceeds
the local memory of a single processor, the candidate
itemsets are partitioned into fragments, each of which
can fit within the processor's local memory, and the
above process is repeated for each fragment. Figure
3, beginning at the while loop, shows the method by
which each of the candidate itemsets is divided into
fragments, with each fragment being processed sequentially.
Although this algorithm is simple and no transaction
data are exchanged among processors in the second
phase, the disk I/O cost becomes very large, since
this algorithm reads the transaction database repeatedly
if the candidate itemsets are too large to fit within
the processor's local memory.
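A minimal single-machine sketch of the fragment-by-fragment counting just described; the memory model and names are assumptions:

    def npa_pass(local_transactions, candidates, memory_budget, itemset_size_bytes):
        """NPA-style counting when Ck does not fit in one processor's memory:
        split the candidates into memory-sized fragments and rescan the local
        transaction database once per fragment (hence the extra disk I/O).
        Transactions and candidates are sets/frozensets of items."""
        per_fragment = max(1, memory_budget // itemset_size_bytes)
        candidates = list(candidates)
        counts = {}
        for start in range(0, len(candidates), per_fragment):
            fragment = {c: 0 for c in candidates[start:start + per_fragment]}
            for t in local_transactions:        # one full scan per fragment
                for c in fragment:
                    if c <= t:                  # candidate contained in t
                        fragment[c] += 1
            counts.update(fragment)             # sent to the coordinator in NPA
        return counts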
3.3 Simply Partitioned Apriori : SPA
In NPA, the candidate itemsets are not partitioned
but just copied among the processors. However the
candidate itemsets usually become too large to fit
within the local memory of a single processor, which
generally occurs during the second pass.
SPA partitions the candidate itemsets equally over the
memory space of all the processors. Thus it can exploit
the aggregate memory of the systems, while memory
efficiency is very low in copy based NPA.
Since the candidate itemsets are partitioned among
the processors, each processor has to broadcast its
own transaction data to all the processors in the second
phase, while no such broadcast is required in NPA.
Figure 4 gives the behavior of pass k by the p-th processor
in SPA, using the notation in Table 3. Here we
assume the size of the candidate itemsets is smaller than
the sum of all the processors' memory. Extension
of the algorithm to handle much larger candidate
itemset is easy. We can divide the candidate itemsets
into fragments like in NPA.
Lk     Set of all the large k-itemsets.
Ck     Set of all the candidate k-itemsets.
Dp     Transactions stored in the local disk of the p-th processor.
Ck^p   Sets of candidate k-itemsets assigned to the p-th processor
       (p = 1..N, where N means the number of processors).
Lk^p   Sets of large k-itemsets derived from Ck^p.
Table
3: Notation
Each processor works as follows:
1. Generate the candidate itemsets:
Each processor generates the candidate k-itemsets
using the large (k-1)-itemsets and inserts
a part of the candidate itemsets into its own
hash table. The candidate k-itemsets are assigned
to processors in a round-robin manner 3 .
2. Scan the transaction database and count the support
count value:
3 The k-itemsets are assigned equally to all of the processors
in a round-robin manner. By round-robin we mean that the
candidates are assigned to the processors in a cyclical manner
with the i-th candidate assigned to processor i mod n, where n
is the number of processors in the system.
    C1^p := items assigned to the p-th processor
    forall transactions t ∈ Dp do
        Broadcast t to all the other processors
        Increment the support count of all candidates in C1^p
        that are contained in t
        Receive the transactions sent from the other processors
        and increment the support count of all
        candidates that are contained in the received transactions
    L1^p := All the candidates in C1^p which satisfy the
        user-specified minimum support
    /* Each processor can determine individually
       whether an assigned candidate k-itemset satisfies the
       user-specified minimum support or not */
    Send L1^p to the coordinator
    /* Coordinator makes up L1 := ∪p L1^p and
       broadcasts it to all the other processors */
    Receive L1 from the coordinator
    k := 2
    while (Lk-1 ≠ ∅) do
        {Ck^p} := The candidates of size k, assigned to the
            p-th processor, which are generated from Lk-1
        forall transactions t ∈ Dp do
            Broadcast t to all the processors
            Increment the support count of all candidates in Ck^p
            that are contained in t
            Receive the transactions sent from the other processors
            and increment the support count of all
            candidates that are contained in the received transactions
        Lk^p := All the candidates in Ck^p which satisfy the
            user-specified minimum support
        Send Lk^p to the coordinator
        /* Coordinator makes up Lk := ∪p Lk^p and
           broadcasts it to all the processors */
        Receive Lk from the coordinator
        k := k + 1
Figure
4: SPA algorithm
Each processor reads the transaction database
from its local disk and also broadcasts it to all
the other processors. For each transaction en-
try, when read from its own disk or received from
another processors, the support count is incremented
in the same way as in NPA.
3. Determine the large itemsets:
After reading all the transaction data, each processor
can determine individually whether each
candidate k-itemset satisfies the user-specified minimum
support or not. Each processor sends Lk^p to
the coordinator, where Lk := ∪p Lk^p
is derived.
4. If large k-itemset is empty, the algorithm termi-
nates. Otherwise k := k + 1 and the coordinator
broadcasts large k-itemsets to all the processors
and goto "1".
Although this algorithm is simple and easy to im-
plement, the communication cost becomes very large,
since this algorithm broadcasts all the transaction
data at second phase.
3.4 Hash Partitioned Apriori : HPA
HPA partitions the candidate itemsets among the
processors using the hash function like in the hash
join, which eliminates broadcasting of all the transaction
data and can reduce the comparison workload
significantly. Figure 5 gives the behavior of pass k by
the p-th processor in HPA, using the notation in Table
3.
Each processor works as follows:
1. Generate the candidate itemsets:
Each processor generates the candidate k-itemsets
using the large (k-1)-itemsets, applies the hash
function and determines the destination processor
ID. If the ID is its own, insert it into the hash
table. If not, it is discarded.
2. Scan the transaction database and count the support
count:
Each processor reads the transaction database
from its local disk. Generates k-itemsets from
that transaction and applies the same hash function
used in phase 1. Derives the destination processor
ID and sends the k-itemset to it. For the
itemsets received from the other processors and
those locally generated whose ID equals the pro-
cessor's own ID, search the hash table. If hit,
increment its support count value.
3. Determine the large itemset:
Same as in SPA.
    C1^p := items assigned to the p-th processor
        based on hashed value
    forall transactions t ∈ Dp do
        forall items x ∈ t do
            Determine the destination processor ID by applying
            the same hash function which is used in item
            partitioning, and send that item to it. If it is its
            own ID, increment the support count for the item.
        Receive the items from the other processors and increment
        the support count for those items
    L1^p := All the candidates in C1^p with minimum support
    /* Each processor can determine individually
       whether an assigned candidate k-itemset satisfies the
       user-specified minimum support or not */
    Send L1^p to the coordinator
    /* Coordinator makes up L1 := ∪p L1^p and
       broadcasts it to all the processors */
    Receive L1 from the coordinator
    k := 2
    while (Lk-1 ≠ ∅) do
        Ck^p := All the candidate k-itemsets whose
            hashed value corresponds to the p-th processor
        forall transactions t ∈ Dp do
            forall k-itemsets s generated from t do
                Determine the destination processor ID by applying
                the same hash function which is used in
                itemset partitioning, and send that k-itemset to it.
                If it is its own ID, increment the support count
                for the itemset.
            Receive the k-itemsets from the other processors and
            increment the support count for those itemsets
        Lk^p := All the candidates in Ck^p with minimum support
        Send Lk^p to the coordinator
        /* Coordinator makes up Lk := ∪p Lk^p and
           broadcasts it to all the processors */
        Receive Lk from the coordinator
        k := k + 1
Figure
5: HPA algorithm
4. If large k-itemset is empty, the algorithm termi-
nates. Otherwise k := k + 1 and the coordinator
broadcasts the large k-itemsets to all the processors
and goto "1". (A single-machine sketch of this hash-partitioned counting is given below.)
3.5 HPA with Extremely Large Itemset
Duplication
In case the size of candidate itemset is smaller than
the available system memory, HPA does not use the
remaining free space. However HPA-ELD does utilize
the memory by copying some of the itemsets. The
itemsets are sorted based on their frequency of ap-
pearance. HPA-ELD chooses the most frequently occurring
itemsets and copies them over the processors
so that all the memory space is used, which further
reduces the communication among the processors.
In HPA, it is generally difficult to achieve a flat
workload distribution. If the transaction data is highly
skewed, that is, some of the itemsets appear very frequently
in the transaction data, the processor which
has such itemsets will receive a much larger amount
of data than the others. This might become a system
bottleneck. In real situations, the skew of items is
easily discovered. In retail applications certain items
such as milk and eggs appear more frequently than
others. HPA-ELD can handle this problem effectively
since it treats the frequently occurring itemset entries
in a special way.
HPA-ELD copies such frequently occurring itemsets
among the processors and counts the support
count locally like in NPA. In the first phase, when
the processors generate the candidate k-itemsets using
the large (k-1)-itemsets, if the sum of the support values
for each large itemset exceeds the given threshold,
it is inserted into every processor's hash table. The remaining
candidate itemsets are partitioned as in HPA.
The threshold is determined, using a sort, so that all of the available
memory is fully utilized. After reading
all the transaction data, all processors' support counts
are gathered and checked to determine whether they satisfy the minimum
support condition or not. Since most of the
algorithm steps are equal to HPA, we omit a detailed
description of HPA-ELD.
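A sketch of how the duplicated itemsets could be selected so that the leftover memory is filled. The memory model, the frequency estimate based on pass-1 support counts, and all names are assumptions in the spirit of the description above, not the paper's exact procedure:

    def choose_duplicated_itemsets(candidates, freq_estimate, free_bytes_per_proc,
                                   itemset_size_bytes):
        """HPA-ELD idea: rank candidates by estimated frequency of appearance
        and duplicate the most frequent ones on every processor until the
        memory left over by HPA's partitioning is filled; the rest stay
        hash-partitioned."""
        budget = free_bytes_per_proc // itemset_size_bytes
        ranked = sorted(candidates, key=freq_estimate, reverse=True)
        duplicated = set(ranked[:budget])      # counted locally on every processor
        partitioned = set(ranked[budget:])     # counted as in plain HPA
        return duplicated, partitioned

    # Estimate a candidate's frequency from pass-1 support counts (the sum of
    # the support values of its items), in the spirit of the threshold above.
    item_support = {"milk": 900, "eggs": 850, "tea": 120, "saffron": 3}
    pairs = [frozenset(p) for p in [("milk", "eggs"), ("milk", "tea"), ("tea", "saffron")]]
    estimate = lambda c: sum(item_support[i] for i in c)
    dup, part = choose_duplicated_itemsets(pairs, estimate, free_bytes_per_proc=32,
                                           itemset_size_bytes=16)
    print("duplicated:", dup, "partitioned:", part)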
4 Performance Evaluation
Figure 6 shows the architecture of the Fujitsu
AP1000DDV system, on which we have measured the
performance of the proposed parallel algorithms for
mining association rules, NPA, SPA, HPA and HPA-
ELD. AP1000DDV employs a shared-nothing archi-
tecture. A 64 processor system was used, where each
processor, called cell, is a 25MHz SPARC with 16MB
local memory and a 1GB local disk drive. Each processor
is connected to three independent networks (T-net,
B-net and S-net). The communication between
processors is done via a torus mesh network called the
T-net, and broadcast communication is done via the
B-net. In addition, a special network for barrier
synchronization, called the S-net, is provided.
[Figure 6: Organization of the AP1000DDV system (host, cells with local disks, and the T-net, B-net and S-net networks).]
|D|    the number of transactions
|t|    the average number of items per transaction
|I|    the average number of items in maximal potentially large itemsets
Name   |t|   |I|   |D|   Size
Table 4: Parameters of data sets
To evaluate the performance of the four algo-
rithms, synthetic data emulating retail transactions
is used, where the generation procedure is based on
the method described in [2]. Table 4 shows the meaning
of the various parameters and the characteristics
of the data set used in the experiments.
4.1 Measurement of Execution Time
Figure 7 shows the execution time of the four proposed
algorithms using three different data sets with
varying minimum support values. 16 (4 x 4) processors
are used in these experiments. Transaction data
is evenly spread over the processor's local disks. In
these experiments, each parallel algorithm is adopted
only for pass 2, the remaining passes are performed
using NPA, since the single processor's memory cannot
hold the entire candidate itemsets only for pass 2
and if it fits NPA is most efficient.
HPA and HPA-ELD significantly outperform SPA.
[Figure 7: Execution time varying the minimum support value for the three data sets (elapsed time in seconds for each algorithm).]
Since all transaction data is broadcast to all of the
processors in SPA, its communication costs are much
larger in SPA than in HPA and HPA-ELD where the
data is not broadcast but transferred to just one processor
determined by a hash function. In addition SPA
transmits the transaction data, while HPA and HPA-
ELD transmit the itemsets, which further reduces the
communication costs.
In NPA, the execution time increases sharply when
the minimum support becomes small. Since the candidate
itemsets become large for a small minimum sup-
port, the single processor's memory cannot hold the
entire candidate itemsets. NPA has to divide the candidate
itemsets into fragments. Processors have to
scan the transaction data repetitively for each frag-
ment, which significantly increases the execution time.
4.2 Communication Cost Analysis
Here we analyze the communication costs of each
algorithm. Since the size of the transaction data is
usually much larger than that of the candidate item-
set, we focus on the transaction data transfer. In NPA,
the candidate itemsets are initially copied over all the
processors, which incurs processor communication. In
addition during the last phase of the processing, each
processor sends the support count statistics to the co-ordinator
where the minimum support condition is ex-
amined. This also incurs communications overhead.
But here we ignore such overhead and concentrate on
the transaction data transfer for SPA and HPA in second
phase.
In SPA, each processor broadcasts all transaction
data to all the other processors. The total amount of
communication data of SPA at pass k can be expressed
as follows:
    M^SPA_k = (N - 1) × Σ_{p=1..N} Σ_{i=1..T_p} t_ip
where
N      the number of processors
t_ip   the number of items in the i-th transaction of the p-th processor
T_p    the number of the p-th processor's transactions
|D|    the number of all the transactions
In HPA, the itemsets of the transaction are transmitted
to the limited number of processors instead of
broadcasting. The number of candidates is dependent
on the data synthesized by the generator. The total
amount of communication for HPA at pass k can be
expressed as follows:
    M^HPA_k = Σ_{p=1..N} Σ_{i=1..T_p} α^k_ip × C(t_ip, k)
One transaction potentially generates C(t_ip, k) candi-
dates. However, in practice most of them are filtered
out, as is denoted by the parameter α^k_ip. Since α is
usually small 4 , M^HPA_k is much smaller than M^SPA_k. Since it is difficult
to derive α, we measured the amount of data received
by each processor. Figure 8 shows the total amounts of
received messages of SPA, HPA and HPA-ELD where
t15.I4 transaction data was used with 0.4% minimum
support. As you can see in Figure 8, the amount of
messages received by HPA is much smaller than that of
SPA. In HPA-ELD, the amount of messages received is
further reduced, since a part of the candidate itemset
is handled separately and the itemsets which correspond
to them are not transmitted but just locally
processed.
[Figure 8: The amount of messages received (pass 2) for SPA, HPA and HPA-ELD.]
4.3 Search Cost Analysis
In the second phase, the hash table which consists
of the candidate itemsets is probed for each transaction
itemset.
4 If the number of processors is very small and the number
of items in a transaction is large, then M^HPA_k could be larger
than M^SPA_k. For a reasonable number of processors, this does
not happen, as you can see in Figure 8. We are currently doing
experiments on mining association rules with an item classification
hierarchy, where the number of combinations of items becomes much larger
than in ordinary mining of association rules.
When α^k increases, M^HPA_k tends to increase as well. We will
report on this case in a future paper.
In NPA, the number of probes at pass k can be
expressed as follows:
    S^NPA_k = ceil(CAN / M) × Σ_{p=1..N} Σ_{i=1..T_p} C(t_ip, k)
where
CAN   the amount of the candidate itemsets in bytes
M     the size of main memory of a single processor in bytes
In NPA, if the candidate itemsets are too large to fit
in a single processor's memory, the candidate itemsets
are divided and the supports are counted by scanning
the transaction database repeatedly.
In SPA, every processor must process all the transaction
data. The number of searches at pass k can be
expressed as follows:
    S^SPA_k = N × Σ_{p=1..N} Σ_{i=1..T_p} C(t_ip, k)
In HPA and HPA-ELD, the number of searches at
pass k can be expressed as follows:
    S^HPA_k = Σ_{p=1..N} Σ_{i=1..T_p} C(t_ip, k)
The search cost of HPA and HPA-ELD is always
smaller than that of SPA. It is apparent that S^HPA_k = S^SPA_k / N.
Not only the communication cost but also search cost
also can be reduced significantly by employing hash
based algorithms, which is quite similar to the way
in which the hash join algorithm works much better
than nested loop algorithms. In NPA, the search cost
depends on the size of the candidate itemsets. If the
candidate itemsets become too large, S^NPA_k could be
larger than S^SPA_k. But if they fit, S^NPA_k = S^HPA_k,
that is, the search cost is much smaller than that of
SPA and almost equal to HPA. Figure 9 shows the
search cost of the three algorithms for each pass, where
the t15.I4 data set is used under 16 processors with the
minimum support 0.4%. In the experimental results
we have so far shown, all passes except pass 2 adopts
NPA algorithm. We applied different algorithms only
for pass 2, which is computationally heaviest part of
the total processing. However, here in order to focus
on the search cost of individual algorithm more clearly,
each algorithm is applied for all passes.
[Figure 9: The search cost of SPA, NPA and HPA for each pass (number of probes, in millions).]
The cost of
NPA changes drastically for pass 2. The search cost of
NPA is highly dependent on the size of available main
memory. If memory is insufficient, NPA's performance
deteriorates significantly due to the cost increase at
pass 2. In Figure 9, the search cost of NPA is less than
SPA. However as we explained before, it incurred a lot
of additional I/O cost. Therefore the total execution
time of NPA is much longer than that of SPA.
4.4 Comparison of HPA and HPA-ELD
In this section, the performance comparison between
HPA and HPA-ELD is described. In HPA-ELD,
we treat the most frequently appearing itemsets sepa-
rately. In order to determine which itemset we should
pick up, we use the statistics accumulated during pass
1. As the number of pass increases, the size of the
candidate itemsets decreases. Thus we focused on
pass 2. The number of the candidate itemsets to be
separated is adjusted so that sum of non-duplicated
itemsets and duplicated itemsets would just fit in the
available memory.
Figure 10 shows the execution time of HPA and
HPA-ELD for t15.I4 varying the minimum support
value on a 16 processors system. HPA-ELD is always
faster than HPA. The smaller the minimum support,
the larger the ratio of the difference between the execution
times of the two algorithms becomes. As the minimum
support value decreases, the number of candidate
itemsets and the count of support increases. The
candidate itemsets which are frequently found cause
large amounts of communication. The performance of
HPA is degraded by this high communication traffic.
[Figure 10: Execution time of HPA and HPA-ELD at pass 2 (elapsed time in seconds, varying the minimum support).]
Figure 11 shows the number of probes in each processor
for HPA and HPA-ELD for t15.I4 using a
processor system for pass 2. We picked up an example
which is highly skewed. Horizontal axis denotes
processor ID.
[Figure 11: The number of searches of HPA and HPA-ELD at pass 2, per processor (number of probes, in millions).]
In HPA, the distribution of the number
of probes is not flat. Since each candidate itemset is
allocated to just one processor, the large amount of
messages concentrate at a certain processor which has
many candidate itemsets occurring frequently.
In HPA-ELD, the number of probes is comparatively
flat. HPA-ELD handles certain candidate itemsets
separately, thus reducing the influence of the data
skew. However, as you can see in Figure 11, there still
remains some deviation of the load amongst the processors.
If we parallelize the mining over more than 64 processors,
we have to introduce a more sophisticated load
balancing mechanism, which requires further investigation.
4.5 Speedup
Figure
12 shows the speedup ratio for pass 2 varying
the number of processors used, 16, 32, 48 and 64,
where the curve is normalized with the 16 processor
execution time. The minimum support value was set
to 0.4%.0.51.52.53.54.5
speedup
ratio
number of processors
HPA
HPA-ELD
ideal
Figure
12: Speedup curve
NPA, HPA and HPA-ELD attain much higher linearity
than SPA. HPA-ELD, an extension of HPA
for extremely large itemset decomposition further increases
the linearity.
HPA-ELD attains satisfactory speed up ratio. This
algorithm just focuses on the item distribution of the
transaction file and picks up the extremely frequently
occurring items. Transferring such items could result
in network hot spots. HPA-ELD tries not to send such
items but to process them locally. Such a small modification
to the original HPA algorithm could improve
the linearity substantially.
4.6 Effect of increasing transaction
database size (Sizeup)
Figure
13 shows the effect of increasing transaction
database size as the number of transactions is
increased from 256,000 to 2 million transactions. We
used the data set t15.I4. The behavior of the results
does not change with increased database size. The
minimum support value was set to 0.4%. The number
of processors is kept at 16. As shown, each of the
parallel algorithms attains linearity.
5 Summary and related work
In this paper, we proposed four parallel algorithms
for mining association rules. A summary of the four200600100014001800
elapsed
time
(sec)
amount of transaction (thousands)
HPA
HPA-ELD
Figure
13: Sizeup curve
algorithms is shown in Table 5. In NPA, the candidate
itemsets are just copied amongst all the proces-
sors. Each processor works on the entire candidate
itemsets. NPA requires no data transfer when the
supports are counted. However in the case where the
entire candidate itemsets do not fit within the memory
of a single processor, the candidate itemsets are
divided and the supports are counted by scanning the
transaction database repeatedly. Thus Disk I/O cost
of NPA is high. PDM, proposed in [6] is the same as
which copies the candidate itemsets among all
the processors. Disk I/O for PDM should be also high.
The remaining three algorithms, SPA, HPA and
HPA-ELD, partition the candidate itemsets over the
memory space of all the processors. Because it better
exploits the total system's memory, disk I/O cost is
low. SPA arbitrarily partitions the candidate itemsets
equally among the processors. Since each processor
broadcasts its local transaction data to all other pro-
cessors, the communication cost is high. HPA and
HPA-ELD partition the candidate itemsets using a
hash function, which eliminates the need for transaction
data broadcasting and can reduce the comparison
workload significantly. HPA-ELD detects frequently
occurring itemsets and handles them separately, which
can reduce the influence of the workload skew.
6 Conclusions
Since mining association rules requires several scans
of the transaction file, its computational requirements
are too large for a single processor to have a reasonable
response time. This motivates our research.
In this paper, we proposed four different parallel
algorithms for mining association rules on a shared-nothing
parallel machine, and examined their viabil-
NPA SPA HPA HPA-ELD
Candidate copy partition partition
itemset (partially copy)
I/O cost high low
Communica-
tion cost
high low
Skew
handling
Table
5: characteristics of algorithms
ity through implementation on a 64 node parallel ma-
chine, the Fujitsu AP1000DDV.
If a single processor can hold all the candidate item-
sets, parallelization is straightforward. It is just sufficient
to partition the transaction over the processors
and for each processor to process the allocated
transaction data in parallel. We named this algorithm
NPA. However when we try to do large scale
data mining against a very large transaction file, the
candidate itemsets become too large to fit within the
main memory of a single processor. In addition to the
size of a transaction file, a small minimum support
also increases the size of the candidate itemsets. As
we decrease the minimum support, computation time
grows rapidly, but in many cases we can discover more
interesting association rules.
SPA, HPA and HPA-ELD not only partition the
transaction file but partition the candidate itemsets
among all the processors. We implemented these algorithms
on a shard-nothing parallel machine. Performance
evaluations show that the best algorithm,
HPA-ELD, attains good linearity on speedup by fully
utilizing all the available memory space, which is also
effective for skew handling. At present, we are doing
the parallelization of mining generalized association
rules described in [9], which includes the taxonomy
(is-a hierarchy). Each item belongs to its own class
hierarchy. In such mining, associations between the
higher class and the lower class are also examined.
Thus the candidate itemset space becomes much larger
and its computation time also takes even longer than
the naive single level association mining. Parallel processing
is essential for such heavy mining processing.
Acknowledgments
This research is partially supported as a priority
research program by ministry of education. We would
like to thank the Fujitsu Parallel Computing Research
Center for allowing us to use their AP1000DDV systems
--R
"Min- ing Association Rules between Sets of Items in Large Databases"
"Fast Algorithms for Mining Association Rules"
"An Effective Hash-Based Algorithm for Mining Association Rules"
"Ef- ficient Algorithms for Discovering Association Rules"
"An Effective Algorithm for Mining Association Rules in Large Databases"
"Efficient Parallel Data Mining for Association Rules"
"Considera- tion on Parallelization of Database Mining"
"Perfor- mance Evaluation of the AP1000 -Effects of message handling, broadcast, and barrier synchronization on benchmark performance-"
"Mining Generalized Association Rules"
--TR
--CTR
Ferenc Kovcs , Sndor Juhsz, Performance evaluation of the distributed association rule mining algorithms, Proceedings of the 4th WSEAS International Conference on Software Engineering, Parallel & Distributed Systems, p.1-6, February 13-15, 2005, Salzburg, Austria
Mohammed J. Zaki, Parallel and Distributed Association Mining: A Survey, IEEE Concurrency, v.7 n.4, p.14-25, October 1999
Takahiko Shintani , Masaru Kitsuregawa, Parallel mining algorithms for generalized association rules with classification hierarchy, ACM SIGMOD Record, v.27 n.2, p.25-36, June 1998
David W. Cheung , Kan Hu , Shaowei Xia, Asynchronous parallel algorithm for mining association rules on a shared-memory multi-processors, Proceedings of the tenth annual ACM symposium on Parallel algorithms and architectures, p.279-288, June 28-July 02, 1998, Puerto Vallarta, Mexico
Masahisa Tamura , Masaru Kitsuregawa, Dynamic Load Balancing for Parallel Association Rule Mining on Heterogenous PC Cluster Systems, Proceedings of the 25th International Conference on Very Large Data Bases, p.162-173, September 07-10, 1999
Masaru Kitsuregawa , Masashi Toyoda , Iko Pramudiono, Web community mining and web log mining: commodity cluster based execution, Australian Computer Science Communications, v.24 n.2, p.3-10, January-February 2002
Eui-Hong (Sam) Han , George Karypis , Vipin Kumar, Scalable Parallel Data Mining for Association Rules, IEEE Transactions on Knowledge and Data Engineering, v.12 n.3, p.337-352, May 2000
Takayuki Tamura , Masato Oguchi , Masaru Kitsuregawa, Parallel database processing on a 100 Node PC cluster: cases for decision support query processing and data mining, Proceedings of the 1997 ACM/IEEE conference on Supercomputing (CDROM), p.1-16, November 15-21, 1997, San Jose, CA
Masato Oguchi , Masaru Kitsuregawa, Optimizing transport protocol parameters for large scale PC cluster and its evaluation with parallel data mining, Cluster Computing, v.3 n.1, p.15-23, 2000
David W. Cheung , Kan Hu , Shaowei Xia, An Adaptive Algorithm for Mining Association Rules on Shared-Memory Parallel Machines, Distributed and Parallel Databases, v.9 n.2, p.99-132, March 2001
Dejiang Jin , Sotirios G. Ziavras, A Super-Programming Approach for Mining Association Rules in Parallel on PC Clusters, IEEE Transactions on Parallel and Distributed Systems, v.15 n.9, p.783-794, September 2004
Valerie Guralnik , George Karypis, Parallel tree-projection-based sequence mining algorithms, Parallel Computing, v.30 n.4, p.443-472, April 2004
David W. Cheung , Yongqiao Xiao, Effect of Data Distribution in Parallel Mining of Associations, Data Mining and Knowledge Discovery, v.3 n.3, p.291-314, September 1999
Masato Oguchi , Masaru Kitsuregawa, Dynamic remote memory acquisition for parallel data mining on ATM-connected PC cluster, Proceedings of the 13th international conference on Supercomputing, p.246-252, June 20-25, 1999, Rhodes, Greece
Lilian Harada , Naoki Akaboshi , Kazutaka Ogihara , Riichiro Take, Dynamic skew handling in parallel mining of association rules, Proceedings of the seventh international conference on Information and knowledge management, p.76-85, November 02-07, 1998, Bethesda, Maryland, United States
Claudio Silvestri , Salvatore Orlando, Distributed approximate mining of frequent patterns, Proceedings of the 2005 ACM symposium on Applied computing, March 13-17, 2005, Santa Fe, New Mexico
D. W. Cheung , S. D. Lee , V. Xiao, Effect of Data Skewness and Workload Balance in Parallel Data Mining, IEEE Transactions on Knowledge and Data Engineering, v.14 n.3, p.498-514, May 2002
Frans Coenen , Paul Leng, Partitioning strategies for distributed association rule mining, The Knowledge Engineering Review, v.21 n.1, p.25-47, March 2006
Y. Sung , Zhao Li , Chew L. Tan , Peter A. Ng, Forecasting Association Rules Using Existing Data Sets, IEEE Transactions on Knowledge and Data Engineering, v.15 n.6, p.1448-1459, November
John D. Holt , Soon M. Chung, Parallel mining of association rules from text databases, The Journal of Supercomputing, v.39 n.3, p.273-299, March 2007
Vipin Kumar , Mohammed Zaki, High performance data mining (tutorial PM-3), Tutorial notes of the sixth ACM SIGKDD international conference on Knowledge discovery and data mining, p.309-425, August 20-23, 2000, Boston, Massachusetts, United States | memory overflow;shared nothing environment;SPA;memory space;performance evaluations;knowledge acquisition;hash function;broadcasting;candidate itemsets;large transaction databases;HPA;NPA;HPA-ELD;hash based parallel algorithms;shared nothing parallel machines;association rule mining |
383198 | Querying the World Wide Web. | The World Wide Web is a large, heterogeneous, distributed collection of documents connected by hypertext links. The most common technology currently used for searching the Web depends on sending information retrieval requests to index servers. One problem with this is that these queries cannot exploit the structure and topology of the document network. In this paper we propose a query language, Web-SQL, that takes advantage of multiple index servers without requiring users to know about them, and that integrates textual retrieval with structure and topology-based queries. We give a formal semantics for Web-SQL using a calculus based on a novel query locality, that is, how much of the network must be visited to answer a particular query. Finally, we describe a prototype implementation of WebSQL written in Java. | Introduction
The World Wide Web[BLCL + 94] is a large, heteroge-
neous, distributed collection of documents connected by
hypertext links. Current practice for finding documents
of interest depends on browsing the network by following
links and searching by sending information retrieval
requests to "index servers" that index as many documents
as they can find by navigating the network.
Correspondence to: Alberto O. Mendelzon
preliminary version of this paper was presented at the 1996
Symposium on Parallel and Distributed Information Systems
The limitations of browsing as a search technique are
well-known, as well as the disorientation resulting in
the infamous "lost-in-hyperspace" syndrome. As far as
keyword-based searching, one problem with it is that
users must be aware of the various index servers (over
a dozen of them are currently deployed on the Web),
of their strengths and weaknesses, and of the peculiarities
of their query interfaces. To some degree this can be
remedied by front-ends that provide a uniform interface
to multiple search engines, such as Multisurf[HGN
Savvysearch [Dre96], and Metacrawler [SE95].
A more serious problem is that these queries cannot
exploit the structure and topology of the document net-
work. For example, suppose we are looking for an IBM
catalog with prices for personal computers. A keyword
search for the terms "IBM," "personal computer," and
"price," using MetaCrawler, returns 92 references, including
such things as an advertisement for an "I Bought
Mac" T-shirt and the VLDB '96 home page. If we know
the web address (called URL, or "uniform resource loca-
tor") of the IBM home page is www.ibm.com, we would
like to be able to restrict the search to only pages directly
or indirectly reachable from this page, and stored
on the same server. With currently available tools, this
is not possible because navigation and query are distinct
phases: navigation is used to construct the indexes, and
query is used to search them once constructed. We propose
instead a tool that can combine query with naviga-
tion. The emphasis must be however on controlled navi-
gation. There are currently tens of millions of documents
on the Web, and growing; and network bandwidth is still
a very limited resource. It becomes important therefore
to be able to distinguish in queries between documents
that are stored on the local server, whose access is relatively
cheap, and those that are stored on remote serves,
whose access is more expensive. It is also important to
be able to analyze a query to determine its cost in terms
of how many remote document accesses will be needed
to answer it.
In this paper we propose a query language, WebSQL,
that takes advantage of multiple index servers without
requiring users to know about them, and that in-
Alberto O. Mendelzon, George A. Mihaila, and Tova Milo
tegrates textual retrieval with structure and topology-based
queries. After introducing the language in Section
2, in Section 3 we give a formal semantics for WebSQL
using a calculus based on a novel "virtual graph" model
of a document network. In Section 4, we propose a new
theory of query cost based on the idea of "query local-
ity," that is, how much of the network must be visited to
answer a particular query. Query cost analysis is a pre-requisite
for query optimization; we give an algorithm
for characterizing WebSQL queries with respect to query
locality that we believe will be useful in the query optimization
process, although we do not develop query optimization
techniques in this paper. Finally, in Section
5 we describe a prototype implementation of WebSQL
written in Java. We conclude in Section 6.
Related Work
There has been work in query languages for hyper-text
documents [BK90, CM89, MW93] as well as query
languages for structured or semi-structured documents
Our work
differs significantly from both these streams. None of
these papers make a distinction between documents
stored locally or remotely, or make an attempt to capitalize
on existing index servers. As far as document struc-
ture, we only support the minimal attributes common to
most HTML documents (URL, title, type, length, modification
date). We do not assume the internal document
structure is known or partially known, as does the work
on structured and semi-structured documents. As a con-
sequence, our language does not exploit internal document
structure when it is known; but we are planning to
build this on top of the current framework.
Closer to our approach is the W3QLwork by Shmueli
and konopnicki[KS95]. Our motivation is very similar to
theirs, but the approach is substantially different. They
emphasize extensibility and interfacing to external user-written
programs and Unix utilities. While extensibility
is a highly desirable goal when the tool runs in a known
environment, we aim for a tool that can be downloaded
to an arbitrary client and run with minimal interaction
with the local environment. For this reason, our query
engine prototype is implemented in the Java programming
language [SM] and can be downloaded as an applet
by a Java-aware browser. Another difference is that
we provide formal query semantics, and emphasize the
distinction between local and remote documents. This
makes a theory and analysis of query locality possible.
On the other hand, they support filling out forms encountered
during navigation, and discuss a view facility
based on W3QL, while we do not currently support ei-
ther. In this regard, it is not our intention to provide
a fully functional tool, but a clean and minimal design
with well-defined semantics that can be extended with
bells and whistles later.
Another recent effort in this direction is the We-
bLog language of Lakshmanan et al. [LSS96]. Unlike
WebSQL, WebLog emphasizes manipulating the internal
structure of Web documents. Instead of regular expressions
for specifying paths, they rely on Datalog-like
recursive rules. The paper does not describe an implementation
or formal semantics.
The work we describe here is not specifically addressed
to digital libraries. However, a declarative language
that integrates web navigation with content-based
search has obvious applications to digital library construction
and maintenance: for example, it can be used
for building and maintaining specialized indices and on-line
bibliographies, for defining virtual documents stored
in a digital library, and for building applications that integrate
access to specific digital libraries with access to
related web pages.
2 The WebSQL Language
In this section we introduce our SQL-like language for
the World Wide Web. We begin by proposing a relational
model of the WWW. Then, we give a few examples for
queries, and present the syntax of the language. The formal
semantics of queries is defined in the following section
One of the difficulties in building an SQL-like query
language for the Web is the absence of a database
schema. Instead of trying to model document structure
with some kind of object-oriented schema, as in [CACS94,
95], we take a minimalist relational approach. At
the highest level of abstraction, every Web object is identified
by its Uniform Resource Locator (URL) and has
a binary content whose interpretation depends on its
type (HTML, Postscript, image, audio, etc. Also, Web
servers provide some additional information such as the
type, length, and the last modification date of an object.
Moreover, an HTML document has a title and a text.
So, for query purposes, we can associate a Web object
with a tuple in a virtual relation:
where all the attributes are character strings. The url is
the key, and all other attributes can be null.
Once we define this virtual relation, we can express
any content query, that is, a query that refers only to
the content of documents, using an SQL-like notation.
Example 1. Find all HTML documents about "hyper-
text".
FROM Document d
SUCH THAT d MENTIONS "hypertext"
Since we are not interested just in document content,
but also in the hypertext structure of the Web, we make
hypertext links first-class citizens in our model. In partic-
ular, we concentrate on HTML documents and on hyper-text
links originating from them. A hypertext link is specified
inside an HTML document by a sequence, known as
an anchor, of the form !A HREF=href?label!/A? where
(standing for hypertext reference) is the URL of the
referenced document, and label is a textual description
Querying the World Wide Web 3
of the link. Therefore, we can capture all the information
present in a link into a tuple:
where base is the URL of the HTML document containing
the link, href is the referred document and label is
the link description. 2 All these attributes are character
strings.
Now we can pose queries that refer to the links
present in documents.
Example 2. Find all links to applets from documents
about "Java".
FROM Document x
Anchor y SUCH THAT base = x
WHERE y:label CONTAINS "applet";
In order to study the topology of the Web we will
want sometimes to make a distinction between links that
point within the same document where they appear, to
another document stored at the same site, or to a document
on a remote server.
Definition 1. A hypertext link is said to be:
- interior if the destination is within the source document; 3
local if the destination and source documents are different
but located on the same server;
global if the destination and the source documents
are located on different servers.
This distinction is important both from an expressive
power point of view and from the point of view of the
query cost analysis and the locality theory presented in
Sect. 4.
We assign an arrow-like symbol to each of the three
link types: let 7! denote an interior link, ! a local link
and ) a global link. Also, let = denote the empty path.
Path regular expressions are built from these symbols using
concatenation, alternation (j) and repetition ( ). For
is a regular expression that represents
the set of paths containing the zero length path
and all paths that start with a global link and continue
with zero or more local links.
Now we can express queries referring explicitly to the
hypertext structure of the Web.
Example 3. Starting from the Department of Computer
Science home page, find all documents that are linked
through paths of length two or less containing only local
links. Keep only the documents containing the string
'database' in their title.
2 Note that Anchor is not, strictly speaking, a relation, but a
multiset of tuples - a document may contain several links to the
same destination, all having the same label.
3 In HTML, links can point to specific named fragments within
the destination document; the fragment name is incorporated into
the URL.For example, http://www.royalbank.com/fund.html#DP;
refers to the fragment named DP within the document with URL
http://www.royalbank.com/fund.html. We will ignore this detail
in the rest of the paper.
SELECT d:url; d:title
FROM Document d SUCH THAT
WHERE d:title CONTAINS "database";
Of course, we can combine content and structure
specifications in a query.
Example 4. Find all documents mentioning 'Computer
Science' and all documents that are linked to them
through paths of length two or less containing only local
links.
FROM Document x SUCH THAT
x MENTIONS "Computer Science",
Document y
Note we are using two different keywords, MENTIONS
and CONTAINS, to do string matching in the
FROM and WHERE clauses respectively. The reason
is that they mean different things. Conditions in the
FROM clause will be evaluated by sending them to index
servers. The result of the FROM clause, obtained by
navigation and index server query, is a set of candidate
URL's, which are then further restricted by evaluating
the conditions in the WHERE clause. This distinction
is reflected both in the formal semantics and in the im-
plementation. We could remove the distinction and let
a query optimizer decide which conditions are evaluated
by index servers and which are tested locally. However,
we prefer to keep the explicit distinction because index
servers are not perfect or complete, and the programmer
may want to control which conditions are evaluated by
them and which are not.
The BNF specification of the language syntax is
given in the Fig. 1. The syntax follows the standard
SQL SELECT statement. All queries refer to the WWW
database schema introduced above. That is, Table can
only be Document or Anchor and Field can only be a
valid attribute of the table it applies to.
In this section we introduce a formal foundation for Web-
SQL. Starting from the inherent graph structure of the
WWW, we define the notion of virtual graph and construct
a calculus-based query language in this abstract
setting. Then we define the semantics of WebSQL queries
in terms of this calculus.
3.1 Data Model
We assume an infinite set D of data values, and a finite
set T of simple Types whose domains are subsets of D.
Tuple types with attributes a i of
simple type t i , are defined in the standard
way. The domain of a type t is denoted by dom(t). For a
4 Alberto O. Mendelzon, George A. Mihaila, and Tova Milo
Query := SELECT AttrList FROM DomainSpec
Attribute := Field j TableVar.Field
Field :=
TableVar :=
DomainSpec := DomainTerm f, DomainTermg
DomainTerm := Table TableVar SUCH THAT DomainCond
DomainCond := Node PathRegExp TableVar
Node := StringConstant
Condition := BoolTerm fOR BoolTermg
BoolTerm := BoolTerm fAND BoolTermg
Attribute CONTAINS StringRegExp
PathFactor := PathPrimary[*]
PathPrimary := Link
Fig. 1. WebSQL Syntax
tuple x, we denote by x:a i the value v i associated with
the attribute a i .
We distinguish a simple type Oid 2 T of object
identifiers, and two tuple types Node and Link with the
following structure:
The attribute names in the two definitions are all dis-
tinct. We shall refer to tuples of the first type as Node
objects and to tuples of the second type as Link objects.
In our model of the World Wide Web, documents will
be mapped to Node objects and the hypertext links between
them to Link objects. In this context, the object
identifiers (Oid) will be the URL's and the Node and
Link tuples will model the WebSQL Document and Anchor
virtual tables introduced informally in Section 2.
Virtual Graphs
The set of all the documents in the Web, although finite,
is undetermined: no one can produce a complete list of
all the documents available at a certain moment. There
are only two ways one can find documents in the Web:
navigation starting from known documents and querying
of index servers.
Given any URL, an agent can either fetch the associated
document or give an error message if the document
does not exist. This behavior can be modeled by
a computable partial function mapping Oid's to Node
objects. Once a document is fetched, one can determine
a finite set of outgoing hypertext links from that docu-
ment. This can also be modeled by a computable partial
function mapping Oid's to sets of Link objects. In prac-
tice, navigation is done selectively, by following only certain
links, based on their properties. In order to capture
this, we introduce a finite set of unary link predicates
The second way to discover documents is by querying
index servers. To model the lists of URL's returned
by index servers we introduce a (possibly infinite) set of
unary node predicates
each predicate P we are interested in the set fxjx 2
trueg. For example, a particular
node predicate may be associated with a keyword, and it
will be true of all documents that contain that keyword
in their text.
Definition 2. A virtual graph is a 4-tuple
are computable partial
functions, PNode is a set of unary predicates on
dom(Oid), and P Link is a finite set of unary predicates
on dom(Link).
- The set fae Node (oid)joid 2 dom(Oid) and ae Node is
defined on oidg is finite;
- for all oid 2 dom(Oid): ae Node (oid) is defined ,
ae Link (oid) is defined;
finite and for all e 2
and ae(e:to) is defined (we say that
e is an edge from v
ae Node (e:to));
- every predicate ff 2 P Link is a partial computable
Boolean function on the set dom(Link), and ff is defined
on all the links in ae Link (oid) whenever ae Link (oid)
is defined.
- the function val defined by
Example 5. A virtual
where:
isiting T oronto?::: 00
national capital::: 00
ourist information::: 00
url ae Link (url)
to Ottawa 00 ];
"Back to T oronto 00
"Back to T oronto 00
and fContains tourist ; Contains capital g, P
fContains go ; Contains back g.
Querying the World Wide Web 5
Note that a virtual graph
induces an underlying directed graph E)
oid2dom(Oid) ae Link (oid).
However, a calculus cannot manipulate this graph directly
because of the computability issues presented
above. This captures our intuition about the World Wide
Web hypertext graph, whose nodes and links can only be
discovered through navigation.
3.2 The Calculus
Now we proceed to define our calculus for querying virtual
graphs. We introduce path regular expressions to
specify connectivity-based queries. We then present the
notions of range expressions and ground variables to restrict
queries so that their evaluation does not require
enumerating every node of the virtual graph, and finally
we define calculus queries.
Path regular expressions
Consider a virtual graph \Gamma and denote its underlying
graph E). A path in \Gamma is defined in the same
way as in a directed
path if and only if for every index
path p is called simple
if there are no different edges e i 6= e j in p with the same
starting or ending points.
In order to express queries based on connectivity,
we need a way to define graph patterns. Recall P Link
is the set of link properties in a virtual graph. Let
link property, and let e be a link
object. If true then we say that e has the property
ff. We define the set of properties of a link e by
trueg. We sometimes choose
to view (e) as a formal language on the alphabet P Link .
For each property ff that is true of e, (e) contains the
single-character string ff. To study the properties of a
path, we extend the definition of the set of properties of
a link to paths as follows: if is a path then
we define
is the concatenation
of the languages L and L 0 .
Example 6. Suppose we want to require that a property
ff hold on all the links of a path
can be expressed easily in terms of by requiring that
In order to specify constraints like in the example
above we introduce path regular expressions, which are
nothing more than regular expressions over the alphabet
P Link . With each regular expression R over the alphabet
P Link we associate a language L(R) ' P
Link in the
usual way.
Definition 3. We say that the path p matches the path
regular expression R if and only if: (p) " L(R)
In other words, the path p matches the path regular expression
R if and only if there is a word w in the set of
properties (p) that matches the regular expression R.
Range expressions
The algebra for a traditional relational database is based
on operators like select (oe), project (-) and Cartesian
product (\Theta). Because all the contents of the database is
assumed to be available to the query engine, all these
operations can be executed, in the worst case by enumerating
all the tuples. In the case of the World Wide
Web, the result of a select operation cannot be computed
in this way, simply because one cannot enumerate
all the documents. Instead, navigation and querying of
index servers must be used. We want our calculus to express
only queries that can be evaluated without having
to enumerate the whole Web. To enforce this restriction,
we introduce range conditions, that will serve as restrictions
for variables in the queries.
Definition 4. Let
a virtual graph. Let G(\Gamma E) be its underlying
graph. A range atom is an expression of one of the following
forms:
are Oids or variable names,
and R is a path regular expression;
x is an Oid or a variable
are Oids or variable names;
A range expression is an expression of the
are range atoms, x are all the
variables occurring in them and T i 2 fNode; Linkg specifies
the type of the variable x i , for
Consider a valuation - : fx;
that maps each variable into a node or an edge of
the underlying graph. We extend - to dom(Oid) by
(oid), that is, - maps each Oid appearing
in an atom to the corresponding node. The following
definition assigns semantics to range atoms.
Definition 5. Let
a virtual graph. Let A be a range atom. We say that A
is validated by the valuation - if:
there exists a simple path from
-(u) to -(x) matching the path regular expression R;
Now we can give semantics to range expressions.
Definition 6. Consider a range expression
Amg. Then the set of tuples
is an valuation s:t: A 1 ; :::; Am
are all validated by -g is called the range of E .
Example 7. The set of all nodes satisfying a certain node
predicate P together with all their outgoing links may be
specified by the following range
6 Alberto O. Mendelzon, George A. Mihaila, and Tova Milo
Ground Variables and Ground Expressions
Although Definition 6 gives a well-defined semantics for
all range expressions, problems may arise when examining
the evaluation of \Psi (E) for certain expressions E . For
example, expressions like fx :
(find all pairs of nodes connected by a link of type ff)
nodes and
all links outgoing from them) cannot be algorithmically
evaluated on an arbitrary virtual graph, since their evaluation
would involve the enumeration of all nodes. We
impose syntactic restrictions to disallow such range ex-
pressions, in a manner similar to the definition of safe
expressions in Datalog.
Consider a virtual graph
Let us examine the evaluation of atoms for all the three
cases in Definition 4:
determining all pairs of nodes
separated by simple paths matching the path
regular expression R is possible only if u is a constant.
Indeed, if u is known, we can traverse the graph starting
from u to generate all simple paths matching R,
thus determining the values of x. If u is a variable
and x is a constant, since Web links can only be traversed
in one direction, there is no way to determine
the values of u for arbitrary R without enumerating
all the nodes in the graph. All the more so when both
u and x are variables. A similar argument shows that
u must be a constant in A = F rom(u; x).
- the atoms of the form P (x) where P 2 PNode pose
no problem since the set fx 2 V jP
computable (by Definition 2).
The above considerations lead to the following definition.
Definition 7. A variable x occurring in a range atom A
is said to be independent in A if it is the only variable in
the atom. If two variables u and x appear in an atom A in
this order (i.e.
then we say that x depends on u in A.
The idea is that independent variables can be determined
directly, whereas dependent variables can be determined
only after the variables they depend upon have been assigned
values. The following definition gives a syntactic
restriction over range expressions that will ensure computability
Definition 8. Let
be a range expression. A variable x xng is said
to be ground in E if there exists an atom A i such that
either x is independent in A i , or x depends in A i on a
variable u that is ground in E . The expression E is said
to be ground if all the variables in E are ground.
Theorem 1. Consider a virtual graph
Amg. If E is ground then
\Psi (E) is computable.
Proof. Consider the following dependency graph
being
the dependency relation (i.e. there is an edge from x i to
only if x j depends on x i in some atom A k ).
We distinguish two cases depending on the presence of
cycles in GD . We will consider first the acyclic case and
then we will reduce the cyclic case to the acyclic one by
transforming the expression E into an equivalent one.
Case I (GD acyclic): By doing a topological sort we
can construct a total order among the variables which
is compatible with the dependency relation, that is, a
permutation oe 2 S(n) s.t.
j.
To simplify the notation, we can rename the variables
according to the permutation oe so that each variable
depends only on variables preceding it in the list
For each variable x i we define the set I x i ' f1; :::; mg
as the set of the indices of the atoms where x i occurs
either as an independent or dependent variable. Since
every variable is ground, all sets I x i are non-empty.
does not depend on any other variable and
is ground, all its occurrences in atoms are independent
occurrences. This means that we can compute the set of
values of x 1 in \Psi (E):
Furthermore, let us consider one element c 1 in this set
(if this set is empty, then \Psi (E) is also empty and its
computation is complete). We replace all occurrences of
x 1 in atoms with the constant c 1 (denote the transformed
atoms by A m). The occurrences of x 2
in the atoms A i with i 2 I x2 are either independent or
dependent on x 1 . Therefore, after the substitution of x 1
by c 1 all the occurrences of x 2 in A i [x 1
became independent. This means that we can compute
the set of all values of x 2 in the tuples where x 1 is
Now we consider an arbitrary element c 2 of the above set
(if it is empty, then go back and choose another value for
We replace all occurrences of x 2 in atoms with the
constant c 2 and iterate the process, sequentially for all
the other variables. That is, we compute the sets:
sequentially, for 2 - k - n. Once we have computed the
last set, for every element c n in that set we add the tuple
to \Psi (E). Then, recursively, we take another
value for c n\Gamma1 in its set and recompute the set for xn ,
and so on, until we compute all the tuples. This recursive
procedure is described in Fig. 2.
Querying the World Wide Web 7
Procedure
for all c 2 M
else
end for
/* Main Program */
OUTPUT: \Psi (E)
array 1::n of Object
Fig. 2. Algorithm computing \Psi (E)
Case II (GD cyclic): Consider a cycle in GD : x
. From the definition of atoms we
infer that all the variables in the cycle are of the type
Node (a Link variable always has outdegree zero in the
dependency graph). From the fact that all variables are
ground we infer that at least one of the vertices in the
cycle has an incoming edge from outside the cycle or has
an independent occurrence in some atom. Without loss
of generality we can consider that
has this property.
Fig. 3. Breaking a Dependency Cycle
We introduce a new variable xn+1 and replace all occurrences
of x
in the atoms where it depends on x i k by
xn+1 . In this way, the edge x
is replaced by an
thus breaking the cycle (see Fig. 3).
Also, we add a new atom
to E . Please note that all variables are still ground in
the modified expression. We denote the new expression
'. The atom Am+1 ensures that in all tuples in \Psi
different nodes cannot be separated by
an empty path). This means that \Psi
The new dependency graph G 0
D has at least one cycle
less than GD . By iterating this procedure until there
are no cycles left we obtain an expression F that can
be evaluated using the method from Case I. Then, as
the last step, we compute \Psi
concludes the theorem's proof.
Remark 1. The converse of the above theorem is not
true, since there are computable range expressions which
are not ground. For example, consider
is an unsatisfiable link predicate. Here y and z are not
ground but \Psi trivially computable.
However, one can prove 4 that every computable range
expression is equivalent to a ground range expression.
Queries
After restricting the domainfrom a large, non-computable
set of nodes and links to a computable set, we may
use the traditional relational selection and projection to
impose further conditions on the result set of a query.
This allows us to introduce the general format of queries
in our calculus. We assume a given set P s of binary
predicates over simple types. Examples of predicates include
equality (for any type), various inequalities (for
numeric types), and substring containment (for alphanumeric
types).
Definition 9. A virtual graph query is an expression of
the form: -L oe OE E where:
Amg is a range expression
- L is a comma separated list of expressions of the form
some attribute of the type T
- OE is a Boolean expression constructed from binary
predicates from P s applied to expressions x i :a j and
constants using the standard operators -, and :;
The semantics of the select (oe) and project (-) operators
is the standard one.
3.3 WebSQL Semantics
We are now ready to define the semantics of our WebSQL
language in terms of the formal calculus introduced
above. To do this, we need to model the Web as a virtual
graph
is the infinite set of all syntactically correct URL's, and
for every element url 2 dom(Oid), ae Node (url) is either
the document referred to by url, or is undefined,
if the URL does not refer to an existing document. Note
that ae Node (url) is computable (its value can be computed
by sending a request to the Web server specified
in the URL). Moreover, ae Link (url) is the set of all
anchors in the document referred to by url, or is un-
defined, if the URL does not refer to an existing doc-
ument. One can extract all the links appearing in an
HTML document by scanning the contents in search of
4 This comes as a consequence of a more general theorem in
8 Alberto O. Mendelzon, George A. Mihaila, and Tova Milo
!A? and !/A? tags. This means that the partial function
ae Link (url) is computable. In order to model content
queries we consider the following set of Node pred-
icates: where, for each w 2 \Sigma ,
true if the document n contains the string w.
Finally, we consider the following set of Link predicates:
in accordance with the definition of
path regular expressions in WebSQL.
The semantics of a WebSQL query is defined as usual
in terms of selections and projections. Thus a query of
the form:
translates to the following calculus query: -L oe OE
Ang is obtained from
by using the following transformation rules:
then A
then A
then A
We only allow as legal WebSQL queries those that
translate into calculus queries that satisfy the syntactic
restrictions of Sect. 3.2.
4 Query Locality
Cost is an important aspect of query evaluation. The
conventional approach in database theory is to estimate
query evaluation time as a function of the size of the
database. In the web context, it is not realistic to try to
evaluate queries whose complexity would be considered
feasible in the usual theory, such as polynomial or even
linear time.
For a query to be practical, it should not attempt
to access too much of the network. Query analysis thus
involves, in this context, two tasks: first, estimate what
part of the network may be accessed by the query, and
then the cost of the query can be analyzed in traditional
ways as a function of the size of this sub-network. In
this section, we concentrate on the first task. Note that
this is analogous, in a conventional database context, to
analyzing queries at the physical level to estimate the
number of disk blocks that they may need to access.
For this first task, we need some way to measure the
"locality" of a query, that is, how far from the originating
site do we have to search in order to answer it.
Having a bound on the size of the sub-network needed
to evaluate a query means that the rest of the network
can be ignored. In fact, a query that is sensitive only to a
bounded sub-network should give the same result if evaluated
in one network or in a different network containing
this sub-network. This motivates our formal definition of
query locality.
An important issue is the cost of accessing such a sub-
network. In the current web architecture, access to remote
documents is often done by fetching each document
and analyzing it locally. The cost of an access is thus affected
by document properties (e.g. size) and the by the
cost of communication between the site where the query
is being evaluated and the site where the document is
stored. Recall that we model the web as a virtual graph.
To model access costs, we extend the definition of virtual
graphs, adding a function ae c : dom(Oid) \Theta dom(Oid) !
is the the cost of accessing node j from
node i.
4.1 Locality Cost
We now define the formal notion of locality. For that we
first explain what it means for two networks to contain
the same sub-network. We assume below that all the
virtual graphs being discussed have the same sets of node
and link predicate names.
Link ) be two (extended)
virtual graphs. Let W ' dom(Oid). We say that
agrees
Link (w),
ae c (w; w
2. for all the node predicates PNode , if
defined, then PNode (n) holds in \Gamma iff it holds in \Gamma 0 ,
3. for all the link predicates P Link and for all links l 2
ae Link (w), P Link (l) holds in \Gamma iff it holds in \Gamma 0 .
Informally this means that the two graphs contain
the sub-network induced by W , the nodes of W have
the same properties in both graphs, and in both graphs
this sub-network is linked to the rest of the world in the
same way.
We next consider locality of queries. In our context, a
query is a mapping from the domain of virtual graphs, to
the domain of sets of tuples over simple types. As in the
standard definition of queries, one can further require
the mapping to be generic and computable. Since this is
irrelevant to the following discussion, we ignore this issue
here. Formal definitions of genericity and computability
in the context of Web queries can be found in [AV97,
MM96].
Definition 11. Let Q be a query, let G be a class of
virtual graphs, let \Gamma 2 G be a graph, and let W '
dom(Oid). We say that query Q when evaluated at node
i depends on W , (for \Gamma and G), if i 2 W and for every
that agrees with \Gamma about W ,
Q(\Gamma ), and there is no subset of W satisfying this.
W is a minimal set of documents needed for computing
Q. Note that W may not be unique. This is reason-able
since the same information may be stored in several
places on the network. If Q is evaluated at some node i,
then the cost of accessing all the documents in such W
is the sum of (ae c (i; w)) over all documents w in W such
that ae Node (w) is defined. We are interested in bounding
Querying the World Wide Web 9
cost with some function of the cost of accessing all the
nodes of the network, that is, the sum of ae c (i;
j such that ae Node (j) is defined.
Definition 12. The locality cost of a query Q, when
evaluated at node i, is the maximum, over all virtual
G, and over all sets W on which Q depends
in \Gamma , of the cost of accessing every document in W from
node i.
We are interested in bounding the locality cost of a
query with some function of the cost of accessing the
whole network. If this total cost is n, note that the locality
cost of a query is at most linear in n. Obviously,
queries with O(n) locality are impractical - the whole
network needs to be accessed in order to answer them.
We will be interested in constant bounds, where the constants
may depend on network parameters such as number
of documents in a site, maximal number of URL's in
a single document, certain communication costs, etc.
In general, access to documents on the local server
is considered cheap, while documents in remote servers
need to be fetched and are thus relatively expensive. To
simplify the discussion and highlight the points of interest
we assume below a rather simple cost function. We
assume that local accesses are free, while the access cost
to remote documents is bounded by some given constant.
(Similar results can be obtained for a more complex cost
function). For a few examples, consider the query
FROM Document x SUCH THAT
where "http://www.cs.toronto.edu" is at the local server.
The query accesses local documents pointed to by the
home page of the Toronto CS department, and no remote
ones. Thus the locality cost is O(1). On the other
hand, the query
FROM Document x SUCH THAT
accesses both local and remote documents. The number
of remote documents being accessed depends on the
number of anchors in the home page that contain remote
URL's. In the worst case, all the URL's in the page are
remote. If k is a bound on the number of URL's in a
single document, then the locality cost of this query is
O(k).
As another example, consider the queries
FROM Document x SUCH THAT
FROM Document x SUCH THAT
FROM Document x SUCH THAT
Query local documents reachable from the
CS department home page, and is thus of locality O(1).
Query Q 3 accesses all documents reachable by one global
link followed by an unbounded number of local links. If
k is a bound on the number of URL's in a single doc-
ument, and s is a bound on the number of documents
in a single server, then the locality cost of the query is
O(ks). This is because in the worst case all the URL's
in the CS department home page reference documents in
distinct servers, and all the documents on those servers
are reachable from the referenced documents. The last
query accesses all reachable documents. In the worst case
it may attempt to access the whole network, thus its cost
is O(n).
The locality analysis of various features of a query
language can identify potentially expensive components
of a query. The user can then be advised to rephrase
those specific parts, or to give some cost bounds for them
in terms of time, number of sites visited, CPU cycles con-
sumed, etc., or, if enough information is available, dol-
lars. The query evaluation would monitor resource usage
and interrupt the query when the bound is reached.
A query Q can be computed in two phases. First, the
documents W on which Q depends on are fetched, and
then the query is evaluated locally. Of course, for this
method to be effective, computing which documents need
to be fetched should not be more complex than computing
itself. The following result shows that computing
W is not harder, at least in terms of how much of the
network needs to be scanned, than computing Q.
Proposition 1. For every class of graphs G and every
query Q, the query Q 0 that given a graph \Gamma 2 G returns
a W s.t. Q depends on W , also depends on W .
Proof. We use the following auxiliary definition.
Definition 13. Let Q be a query, let G be a class of
graphs, let G 2 G be a graph, and let W be a set of
nodes in G. We say that Q is W -local for G and G,
if for every graph G 0 2 G that agrees with G about W ,
Note that a query Q depends on W for G and G, if
Q is W -local and there is no W 0 ae W s.t. Q is W 0 -local
for G. We shall call such W a window of Q in G. (Not
that there may be many windows for Q in G.)
The proof is based on the following claim:
Claim. For every two graphs G 1 ; G 2 , every set of nodes
W belonging to both graphs, and every query Q, the
following hold:
(i) If G 1 agrees with G 2 about W and Q is W -local for
-local for G 2 .
(ii) If G 1 agrees with G 2 about W and Q depends on W
for G 1 , then Q also depends on W for G 2 .
Proof. (Sketch) We first prove claim (i). If G 1 is W -
local, then for every graph G 0 2 G that agrees with G 1
about W , This in particular holds for
G 2 . Also, since G 2 agrees with G 1 about W , then the
set of graphs that agree with G 1 about W is exactly the
set containing all the graphs agreeing with G 2 about W .
Alberto O. Mendelzon, George A. Mihaila, and Tova Milo
Thus for every graph G 0 2 G that agrees with G 2 about
follows immediately from claim (i).
We are now ready to prove the proposition. The proof
works by contradiction. Clearly the window W 0 on which
depends must contain W (since W is part of the answer
of Q Assume that W ae W 0 . We shall show that
Q 0 is W -local, a contradiction to the minimality of W 0 .
Assume Q 0 is not W -local. Then there must be some
graph G 0 that agrees with G about W but where Q 0 has
a different answer. i.e. Q does not depend on W for G 0 .
But claim (ii) above says that if G agrees with G 0 about
W and Q depends on W for G,then Q also depends on
W for G 0 . A contradiction.
Although encouraging, the above result is in general
not of practical use. It says that computing W is not
harder in terms of the data required, but it does not say
how to compute W . In fact it turns out that if the query
language is computationally too powerful, the problem of
computing W can be undecidable. For example, if your
query language is relational calculus augmented with
Web-SQL features, the W of a query fx j OE - Q 4 g is
the whole network if OE is satisfiable, and is empty other-
wise. (OE here is a simple relational calculus formula and
has no path expressions).
If the query language is too complex, locality analysis
may be very complex or even impossible. Nevertheless,
there are many cases where locality cost can be analyzed
effectively and efficiently. This in particular is the case
for the WebSQL query language. The fact that the language
makes the usage of links and links traversal explicit
facilitates the analysis task. In the next subsection, we
show that locality of WebSQL queries can be determined
in time polynomial in the size of the query.
4.2 Locality of WebSQL queries
We start by considering simple queries where the FROM
clause consists of a single path atom starting from the
local server, as in the examples above. We then analyze
general queries.
Analyzing single path expressions
The analysis is based on examining the types of links (in-
ternal, local, or global) that can be traversed by paths
described by the path expression. Particular attention
is paid to "starred" sub-expressions since they can describe
paths of arbitrary length. Assume that the query
is evaluated at some node i, and let n denote the cost
of accessing the whole network graph from i. Let k be
some bound on the number of URL's appearing in a
single document, 5 and s some bound on the number of
documents in a single server.
5 If no such bound exists, every path expression containing a
global link is of locality cost O(n) (because in the worst case a
single document may point to all the nodes in the network).
1. Expressions with no global links can access only local
documents and thus have locality O(1).
2. Expressions containing global links that appear in
"starred" sub-expression, can potentially access all
the documents in the network. Thus the locality is
3. Expressions with global links, but where none of the
"starred" sub-expressions contain a global link sym-
bol, can access remote documents, but the number of
those is limited. All the paths defined by such expressions
are of the form ! l
The number of documents
accessed by such path is bounded by
where k and s are the bounds above. This is because
the number of different documents reachable
by a path ! l i is at most min(s; k l i ).Each of these
documents may contain k global links, and in the
worst case all the k min(s; k l i ) links in those documents
point to files on distinct servers. The number
of documents reached at the end of the path is thus
at most (k min(s; k l In order to reach those docu-
ments, in the current Web architecture, all files along
the path need to be fetched. To get a bound on this,
we multiply the number by the length of the path.
The number m is bounded by the number of global
links appearing in the given path expression. l can be
computed by analyzing the regular expression. 6 All
this can be done in time polynomial in the size of the
expression. Thus the whole bound can be effectively
computed.
The above expression is a simple upper bound on the
locality cost. A tight bound can also be computed
in polynomial time. We chose to present the above
bound since the exact expression is very complex and
does not add much insight to the analysis, so we omit
it.
Observe that if any of the starred sub-expressions
contains a local link, then l = s. This is because,
in the worst case, such sub-expressions will attempt
to access all the documents in the server. In this
case the bound becomes min(n; (m(s
Since servers may contain many documents, the locality
cost may be very high. This indicates that such
queries are potentially expensive and that the user
should be advised to provide the query evaluator certain
bounds on resources.
Analyzing queries
To analyze a WebSQL query it is not sufficient to look at
individual path atoms. The whole FROM clause needs
to be analyzed. For example, consider a query
6 It suffices to build a NFA for the expression and compute the
lengths of the maximal path between two successive global links,
not counting epsilon moves and internal links. l i is infinite if the
path between two successive global links contain a cycle with at
least one local link, in which case min(s; k l i
Querying the World Wide Web 11
FROM Document x
Anchor y SUCH THAT base = x
Document z SUCH THAT y ! z
Document w SUCH THAT x ! w
The path expressions in the query involve only local
links. But since the links returned by the index server
may point to remote documents, the paths traversed by
the sub-condition
FROM Document x
Document w SUCH THAT x ! w
are actually of the form ((!j)): !). Similarly, since the
links traversed in "Anchor y SUCH THAT base = x"
can be internal, local, or global, the paths traversed by
the sub-condition
FROM Document x
Anchor y SUCH THAT base = x
Document z SUCH THAT y ! z
are of the form ((!j)):(7! Thus the regular
expression describing the path accessed by the query
is
that querying an index server can be done with locality
c, and that the number of URL's returned by an index
server on a single query is bounded by some number
m. The locality cost of evaluating this query is therefore
bounded by c plus m times the locality bound of the remainder of the expression.
Interestingly, every FROM clause of a WebSQL query
can be transformed into a regular expression describing
the paths accessed in its evaluation. This, together with
the locality cost of querying the index servers used in
the query, and the bounds on the size of their answers,
lets us determine the locality of the FROM clause in
time polynomial in the size of the query. Bounding the
locality of the FROM clause provides an upper bound on
the locality of the whole query; a slightly better bound
can be obtained by analyzing the SELECT and WHERE
clauses.
We first sketch below an algorithm for building a regular
expression corresponding to the paths traversed by
the FROM clause. Next we explain how the SELECT
and WHERE clauses can be used in the locality analysis.
To build the regular expression we use the auxiliary
notion of determination.
Definition 14. Recall that every domain term Ai in the
FROM clause corresponds to an atom A′i in the calculus.
We say that Ai determines a variable x if x is independent
in A′i or it depends on another variable of A′i. For
two domain terms Ai, Aj, we say that Ai determines Aj
if Ai determines any of the variables in Aj.
For simplicity, we assume below that every variable
in the FROM clause is determined by a single domain
term. Observe that this is not a serious restriction because
every query can be transformed to an equivalent
query that satisfies the restriction: whenever a variable
depends on several terms, the variables in all these terms
but one can be renamed, and equality conditions equating
the new variables and the old one can be added to
the WHERE clause.
Given this assumption, the determination relationship
between domain terms can be described by a forest.
This forest is then used to derive the regular expression.
The forest is built as follows: the nodes are the atoms
in the FROM clause, and the edges describe the dependency
relationship between variables.
Note that the roots in such a forest are either index
terms (that is, MENTIONS terms), or anchor/path
terms with a constant URL as the starting point. The
non-root nodes are anchor or path terms. For example,
the forest of query Q5 above contains a single tree of the form:
Document x SUCH THAT x MENTIONS "VLDB96"
 |- Anchor y SUCH THAT base = x
 |   |- Document z SUCH THAT y → z
 |- Document w SUCH THAT x → w
The next step is to replace each term node in the forest
by a node containing a corresponding path expres-
sion: Index terms are replaced by (→|⇒). Non-root anchor
terms are replaced by (↦|→|⇒); root ones are replaced by
the same expression, if the constant
URL appearing in them is at the local server, and
by ⇒ otherwise. Non-root path terms
are replaced by the regular expression appearing in the
atom; root ones with a URL at the local server are also
replaced by this regular expression, and otherwise by ⇒
concatenated to the expression. So for example, the
above tree becomes a tree whose root is labeled (→|⇒),
with one branch reducing to (↦|→|⇒) (through Anchor y
and Document z) and the other to → (through Document w).
Finally, we take the obtained forest and build from it a
regular expression. This is done by starting at the leaves
and going up the forest, concatenating the regular expression
of each node to the union of the expressions
built for the children, (and finally taking the union of all
the roots of the forest). The expression thus obtained for
the above tree is (→|⇒) · (→ | (↦|→|⇒)). 7
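The bottom-up construction just described can be sketched in Java as follows; ForestNode and the textual rendering of path expressions are illustrative assumptions rather than part of the actual WebSQL implementation.

  import java.util.ArrayList;
  import java.util.List;
  import java.util.stream.Collectors;

  class ForestNode {
      String pathExpr;                          // path expression assigned to this domain term
      List<ForestNode> children = new ArrayList<>();
      ForestNode(String pathExpr) { this.pathExpr = pathExpr; }
  }

  public class RegexFromForest {
      // Concatenate a node's expression with the union of the expressions
      // built for its children, starting at the leaves and going up.
      static String build(ForestNode node) {
          if (node.children.isEmpty()) return node.pathExpr;
          String childUnion = node.children.stream()
                  .map(RegexFromForest::build)
                  .collect(Collectors.joining(" | ", "(", ")"));
          return node.pathExpr + " . " + childUnion;
      }

      // The expression for the whole FROM clause is the union over all roots.
      static String buildForest(List<ForestNode> roots) {
          return roots.stream()
                  .map(RegexFromForest::build)
                  .collect(Collectors.joining(" | "));
      }
  }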
The locality of the FROM clause is a bound on the
locality of the query. A slightly better bound can be obtained
by analyzing the SELECT and WHERE clauses.
For the SELECT clause, if the documents corresponding
to variables on which no other variable depend are
not used, (i.e. only data from their URL is retrieved),
it means that the last link of the path is not traversed.
7 Which is equivalent to the expression ((→|⇒) · →) | ((→|⇒) · (↦|→|⇒))
we obtained in the intuitive discussion above.
This can be easily incorporated into the locality computation
of the regular expression. For the WHERE clause,
if the condition there is unsatisfiable, the locality can be
immediately reduced to O(1).
5 Implementation
This section presents our prototype implementation of a
WebSQL compiler, query engine, and user interface.
Both the WebSQL compiler and query engine are implemented
entirely in Java [SM], the language introduced
by Sun Microsystems with the specific purpose of adding
executable content to Web documents. Java applications
incorporated in HTML documents, called applets, reside
on a Web server but are transferred on demand to the
client's site and are interpreted by the client.
A prototype user interface for the WebSQL system is
accessible from the WebSQL home page
through a CGI script. Also, we have developed a stan-
dalone, GUI-based, Java application that supports interactive
evaluation of queries.
The WebSQL system architecture is depicted in Fig. 4.
[Figure: the User Interface, WebSQL Compiler, Virtual Machine, and Query Engine, exchanging queries, object code, lists of URLs, and document requests with the World Wide Web.]
Fig. 4. The Architecture of the WebSQL System
The Compiler and Virtual Machine
Starting from the BNF specification of WebSQL we built
a recursive descent compiler that, while checking for syntactic
correctness, recognizes the constructs in a query
and stores all the relevant information in internal structures.
After this parsing stage is complete, the compiler generates
a set of nested loops that will evaluate the range
atoms in the FROM clause. Consider a query template whose FROM clause binds variables x1, . . . , xn to domains D1, . . . , Dn, where each Di may depend on the variables bound before it.
The WebSQL compiler translates the above query to
a program in a custom-designed object language implementing
the pseudo-code algorithm depicted in Fig. 5.
Compute D1
for all x1 ∈ D1
    Compute D2
    for all x2 ∈ D2
        . . .
        Compute Dn
        for all xn ∈ Dn
            Write(x1, x2, . . . , xn)
        end for
        . . .
    end for
end for
Fig. 5. The Nested Loops Generated by the Compiler
Note that this nested loops algorithm is equivalent
to the recursive algorithm used in the proof of Theorem
1 (for this restricted form of domain specifying ex-
pressions).
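For illustration only, a two-level Java rendering of the control structure of Fig. 5 is sketched below; the Engine interface and its methods are hypothetical stand-ins, since the actual compiler emits object code for the stack machine rather than Java.

  import java.util.List;

  public class NestedLoops {
      // Hypothetical engine calls: evaluate a domain-specifying expression
      // (possibly depending on an already-bound URL) and test the WHERE clause.
      interface Engine {
          List<String> computeDomain(String domainTerm, String boundUrl);
          boolean where(String x1, String x2);
      }

      // Two-level instance of the nested loops of Fig. 5; the inner domain
      // may depend on the value bound by the outer loop.
      static void evaluate(Engine e, String term1, String term2) {
          for (String x1 : e.computeDomain(term1, null)) {       // Compute D1
              for (String x2 : e.computeDomain(term2, x1)) {     // Compute D2(x1)
                  if (e.where(x1, x2)) {
                      System.out.println(x1 + "\t" + x2);        // Write(x1, x2)
                  }
              }
          }
      }
  }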
Fig. 6. The WebSQL User Interface
The object program is executed by an interpreter
that implements a stack machine. Its stack is heteroge-
neous, that is, it is able to store any type of object, from
integers and strings to whole vectors of Node and Link
objects. The evaluation of range atoms is done via specially
designed operation codes whose results are vectors
of Node or Link objects.
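A heterogeneous operand stack of this kind might be represented as sketched below; this is a simplified illustration, not the interpreter's actual class.

  import java.util.ArrayDeque;
  import java.util.Deque;
  import java.util.List;

  public class OperandStack {
      // Any object can be pushed: integers, strings, or whole vectors of
      // Node and Link objects produced by the range-atom operation codes.
      private final Deque<Object> stack = new ArrayDeque<>();

      void push(Object o) { stack.push(o); }

      Object pop() { return stack.pop(); }

      // Range-atom opcodes leave a vector on the stack; callers cast it back.
      @SuppressWarnings("unchecked")
      <T> List<T> popVector() { return (List<T>) stack.pop(); }
  }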
The Query Engine
Whenever the interpreter encounters an operation code
corresponding to a range atom, the query engine is invoked
to perform the actual evaluation. There are three
types of atoms, according to Definition 4. Let us examine
each of them in sequence (an illustrative sketch follows the list):
- For a path atom of the form u R x: the engine generates all simple
paths starting at u that match R, thus determining
the list of all qualifying values of x. Mendelzon and
Wood give in [MW95] an algorithm for finding all
the simple paths matching a regular expression R in
a labeled graph. We adapted this algorithm for the
virtual graph context. (For full details see [Mih96]).
- For a MENTIONS atom x MENTIONS w: the engine queries a customizable set
of known index servers (currently Yahoo and Lycos)
with the string w and builds a sorted list of URL's
by merging the individual answer sets;
- For an anchor atom over a document u: the engine determines first if u
is an HTML document and if it is, it parses it and
builds a list of Link objects out of the set of all the
anchor tags; if u is not an HTML document, the engine
returns the empty list.
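The dispatch on the three atom kinds could be organized as in the following sketch; the Web interface, the Atom class, and all method names are hypothetical stand-ins for the engine components described above.

  import java.util.ArrayList;
  import java.util.Collections;
  import java.util.List;

  public class QueryEngine {
      interface Web {
          List<String> simplePathsMatching(String startUrl, String regex); // path atoms
          List<String> queryIndexServers(String keywords);                 // MENTIONS atoms
          List<String> parseAnchors(String url);                           // anchor atoms
          boolean isHtml(String url);
      }

      static class Atom {
          enum Kind { PATH, MENTIONS, ANCHOR }
          Kind kind; String url; String regex; String keywords;
      }

      static List<String> evaluateAtom(Web web, Atom atom) {
          switch (atom.kind) {
              case PATH:      // all simple paths from atom.url matching atom.regex
                  return web.simplePathsMatching(atom.url, atom.regex);
              case MENTIONS:  // sorted list of URLs merged from the index servers
                  List<String> urls = new ArrayList<>(web.queryIndexServers(atom.keywords));
                  Collections.sort(urls);
                  return urls;
              case ANCHOR:    // links of an HTML document; empty list otherwise
                  return web.isHtml(atom.url) ? web.parseAnchors(atom.url)
                                              : Collections.<String>emptyList();
              default:
                  throw new IllegalArgumentException("unknown atom kind");
          }
      }
  }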
The User Interface
In order to make WebSQL available to all WWW users,
we have designed a CGI C program invoked from an
HTML form. The appearance of the HTML form is
shown in Fig. 6, a screen shot of the Hotjava browser.
The input form can be used as a template for the
most common WebSQL queries making it easier for the
user to submit a query. If the query is more complicated
it can always be typed into an alternative text field. After
the query is entered it may be submitted by pressing
the appropriate button. At that point, the Java applet
collects all the data from the input fields and assembles
the WebSQL query. Then the query is sent to the Parser,
which checks the syntax and produces the object code.
The object code is then executed by the Interpreter and
finally a query result set is computed. This set is formatted
as an HTML document and displayed by the browser.
All URL fields that appear in the result are formatted
as anchors so that the user may jump easily to the associated
documents. Fig. 7 contains a screen shot of a
typical result document.
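The formatting of a result row could look roughly as follows; this sketch, including the URL test, is illustrative and not the actual CGI code.

  import java.util.List;

  public class HtmlResultFormatter {
      // Render one result row as an HTML table row; URL-valued fields become
      // anchors so the user can jump directly to the associated documents.
      static String formatRow(List<String> row) {
          StringBuilder out = new StringBuilder("<tr>");
          for (String field : row) {
              if (field.startsWith("http://") || field.startsWith("ftp://")) {
                  out.append("<td><a href=\"").append(field).append("\">")
                     .append(field).append("</a></td>");
              } else {
                  out.append("<td>").append(field).append("</td>");
              }
          }
          return out.append("</tr>").toString();
      }
  }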
Performance
The execution time of a WebSQL query is influenced by
various factors related to the network accesses performed
in the process of building the result set. Among these factors
we can mention the number and size of the transferred
documents, the available network bandwidth, and
the performance and load of the accessed Web servers.
Because our query processing system does not maintain
any persistent local information between queries, it has
to access the Web for every new query. Therefore, care
Fig. 7. WebSQL Query Results
must be taken when formulating queries by estimating
the number of documents that have to be retrieved. We
executed a number of queries by running the Java applet
described in the previous section from within an instance
of the HotJava browser running under Solaris 2.3 on a
SUN Sparcserver 20/612 with 2 CPUs and 256 Mbytes
of RAM.
The execution times for the queries we tested vary
between under ten seconds, for simple content queries,
to several minutes for structural queries involving the
exploration of Web subgraphs with about 500 nodes.
6 Discussion
We have presented the WebSQL language for querying
the World Wide Web, given its formal semantics in terms
of a new virtual graph model, proposed a new notion of
query cost appropriate for Web queries, and applied it
to the analysis of WebSQL queries. Finally, we described
the current prototype implementation of the language.
Looking at Fig. 6, one may be skeptical that this complex
interface will replace simple keyword-based search en-
gines. However, this is not its purpose. Just as SQL is
by and large not used by end users, but by programmers
who build applications, we see WebSQL as a tool for
helping build Web-based applications more quickly and
reliably. Some examples:
Selective indexing: As the Web grows larger, we will
often want to build indexes on a selected portion of
the network. WebSQL can be used to specify this
portion declaratively.
View definition: This is a generalization of the previous
point, as an index is a special kind of view. Views and
virtual documents are likely to be an important facil-
ity, as discussed by Konopnicki and Shmueli [KS95],
and a declarative language is needed to specify them.
Link maintenance: Keeping links current and checking
whether documents that they point to have changed
is a common task that can be automated with the
help of a declarative query language.
Several directions for extending this work present
themselves. First, instead of being limited to a fixed
repertoire of link types (internal, local, and global), we
would like to extend the language with the possibility of
defining arbitrary link types in terms of their properties,
and use the new types in regular expressions. For exam-
ple, we might be interested in links pointing to nodes in
Canada such that their labels do not contain the strings
"Back" or "Home."
Second, we would like to make use of internal document
structure when it is known, along the lines of [CACS94].
There is also a great deal of scope for query opti-
mization. We do not currently attempt to be selective
in the index servers that are used for each query, or to
propagate conditions from the WHERE to the FROM
clause to avoid fetching irrelevant documents. It would
also be interesting to investigate a distributed architecture
in which subqueries are sent to remote servers to be
executed there, avoiding unnecessary data movement.
Acknowledgements
This work was supported by the Information Technology
Research Centre of Ontario and the Natural Sciences and
Engineering Research Council of Canada. We thank the
anonymous reviewers for their suggestions.
--R
Querying and updating the file.
Queries and computation on the Web.
A logical query language for hypertext systems.
The World-Wide Web
From structured documents to novel query facilities.
Expressing structural hypertext queries in Graphlog.
Savvysearch home page.
An algebra for structured office documents.
Visual Web surfing with Hy
A query system for the World Wide Web.
A declarative language for querying and re-structuring the Web
Formal models of web queries.
Queries on structure in hypertext.
Finding regular simple paths in graph databases.
A model to query documents by contents and structure.
Querying semistructured heterogeneous informa- tion
--TR
--CTR
Ke Wang , Huiqing Liu, Discovering typical structures of documents: a road map approach, Proceedings of the 21st annual international ACM SIGIR conference on Research and development in information retrieval, p.146-154, August 24-28, 1998, Melbourne, Australia
Converting the syntactic structures of hierarchical data to their semantic structures, Information organization and databases: foundations of data organization, Kluwer Academic Publishers, Norwell, MA, 2000
Bhavini C. Patel , Rajshekhar Sunderraman, Querying web data: an object-oriented approach, Proceedings of the 38th annual on Southeast regional conference, April 07-08, 2000, Clemson, South Carolina
Ke Wang , Huiqing Liu, Discovering Structural Association of Semistructured Data, IEEE Transactions on Knowledge and Data Engineering, v.12 n.3, p.353-371, May 2000
Ellen Spertus , Lynn Andrea Stein, Just-in-time databases and the World-Wide Web, Proceedings of the seventh international conference on Information and knowledge management, p.30-37, November 02-07, 1998, Bethesda, Maryland, United States
Kazunori Katoh , Atsuyuki Morishima , Hiroyuki Kitagawa, Agent-based processing of navigational queries in INFOWEAVER, Information organization and databases: foundations of data organization, Kluwer Academic Publishers, Norwell, MA, 2000
Michael Johnson , Farshad Fotouhi , Sorin Draghici, Query-by-structure approach for the web, Data mining: opportunities and challenges, Idea Group Publishing, Hershey, PA,
Athman Bouguettaya , Boualem Benatallah , Mourad Ouzzani , Lily Hendra, WebFindIt: An Architecture and System for Querying Web Databases, IEEE Internet Computing, v.3 n.4, p.30-41, July 1999
Pranam Kolari , Anupam Joshi, Web Mining: Research and Practice, Computing in Science and Engineering, v.6 n.4, p.49-53, July 2004
Tova Milo , Dan Suciu, Type inference for queries on semistructured data, Proceedings of the eighteenth ACM SIGMOD-SIGACT-SIGART symposium on Principles of database systems, p.215-226, May 31-June 03, 1999, Philadelphia, Pennsylvania, United States
Ying Chen , Qiang Zhu , Nengbin Wang, Query processing with quality control in the World Wide Web, World Wide Web, v.1 n.4, p.241-255, 1998
Mary Fernandez , Daniela Florescu , Jaewoo Kang , Alon Levy , Dan Suciu, STRUDEL: a Web site management system, ACM SIGMOD Record, v.26 n.2, p.549-552, June 1997
Laurent Amsaleg , Michael J. Franklin , Anthony Tomasic, Dynamic Query Operator Scheduling for Wide-Area Remote Access, Distributed and Parallel Databases, v.6 n.3, p.217-246, July 1998
S. Tenier , Y. Toussaint , A. Napoli , X. Polanco, Instantiation of Relations for Semantic Annotation, Proceedings of the 2006 IEEE/WIC/ACM International Conference on Web Intelligence, p.463-472, December 18-22, 2006
Masum Z. Hasan , Alberto O. Mendelzon , Dimitra Vista, Applying database visualization to the World Wide Web, ACM SIGMOD Record, v.25 n.4, p.45-49, Dec. 1996
Tolga Urhan , Michael J. Franklin , Laurent Amsaleg, Cost-based query scrambling for initial delays, ACM SIGMOD Record, v.27 n.2, p.130-141, June 1998
S. Bhowmick , Sanjay Madria , Wee-Keong Ng , Ee-Peng Lim, Cost-benefit analysis of web bag in a web warehouse: An analytical approach, World Wide Web, v.3 n.3, p.165-184, 2000
S. Bhowmick , Wee Keong Ng , Sanjay Madria, Constraint-driven join processing in a web warehouse, Data & Knowledge Engineering, v.45 n.1, p.33-78, April
Alberto O. Mendelzon , Tova Milo, Formal models of Web queries, Proceedings of the sixteenth ACM SIGACT-SIGMOD-SIGART symposium on Principles of database systems, p.134-143, May 11-15, 1997, Tucson, Arizona, United States
Athman Bouguettaya , Boualem Benatallah , Brahim Medjahed , Mourad Ouzzani , Lily Hendra, Adaptive web-based database communities, Information modeling for internet applications, Idea Group Publishing, Hershey, PA,
Athena Vakali , Yannis Manolopoulos, Caching across heterogeneous information sources: an object-based approach, Information processing and technology, Nova Science Publishers, Inc., Commack, NY, 2001
Serge Abiteboul , Victor Vianu, Regular path queries with constraints, Proceedings of the sixteenth ACM SIGACT-SIGMOD-SIGART symposium on Principles of database systems, p.122-133, May 11-15, 1997, Tucson, Arizona, United States
Uwe Hohenstein , Andreas Ebert, Automatic migration of files into relational databases, Proceedings of the 2nd international workshop on Web information and data management, p.17-21, November 02-06, 1999, Kansas City, Missouri, United States
Tao Guan , Kam Fai Wong, Nstar: an interactive tool for local web search, Information and Management, v.41 n.2, p.213-225, December
Michael Johnson , Farshad Fotouhi , Sorin Drghici , Ming Dong , Duo Xu, Discovering Document Semantics QBYS: A System for Querying the WWW by Semantics, Multimedia Tools and Applications, v.24 n.2, p.155-188, November 2004
Mengchi Liu , Tok Wang Ling, A Conceptual Model and Rule-Based Query Language for HTML, World Wide Web, v.4 n.1-2, p.49-77, 2001
S. Bhowmick , Sanjay Kumar Madria , Wee Keong Ng, Detecting and Representing Relevant Web Deltas in WHOWEDA, IEEE Transactions on Knowledge and Data Engineering, v.15 n.2, p.423-441, February
Agostino Operational and abstract semantics of the query language G-Log, Theoretical Computer Science, v.275 n.1-2, p.521-560, March 28 2002
Avigdor Gal , John Mylopoulos, Toward Web-Based Application Management Systems, IEEE Transactions on Knowledge and Data Engineering, v.13 n.4, p.683-702, July 2001
M. Ouzzani , B. Benatallah , A. Bouguettaya, Ontological Approach for Information Discovery in Internet Databases, Distributed and Parallel Databases, v.8 n.3, p.367-392, July 2000
Jackie Assa , Daniel Cohen-Or , Tova Milo, Displaying data in multidimensional relevance space with 2D visualization maps, Proceedings of the 8th conference on Visualization '97, p.127-ff., October 18-24, 1997, Phoenix, Arizona, United States
S. Bhowmick , Sanjay Madria , Wee Keong Ng, What can a web bag discover for you?, Data & Knowledge Engineering, v.43 n.1, p.79-119, October 2002
Athman Bouguettaya , Boualem Benatallah , Lily Hendra , Mourad Ouzzani , James Beard, Supporting Dynamic Interactions among Web-Based Information Sources, IEEE Transactions on Knowledge and Data Engineering, v.12 n.5, p.779-801, September 2000
Peter Buneman, Semistructured data, Proceedings of the sixteenth ACM SIGACT-SIGMOD-SIGART symposium on Principles of database systems, p.117-121, May 11-15, 1997, Tucson, Arizona, United States
graph-based approach for extracting terminological properties from information sources with heterogeneous formats, Knowledge and Information Systems, v.8 n.4, p.462-497, November 2005
Shi-Kuo Chang , Taieb Znati, Adlet: An Active Document Abstraction for Multimedia Information Fusion, IEEE Transactions on Knowledge and Data Engineering, v.13 n.1, p.112-123, January 2001
Dan Suciu, Distributed query evaluation on semistructured data, ACM Transactions on Database Systems (TODS), v.27 n.1, p.1-62, March 2002
Serge Abiteboul , Jason McHugh , Michael Rys , Vasilis Vassalos , Janet L. Wiener, Incremental Maintenance for Materialized Views over Semistructured Data, Proceedings of the 24rd International Conference on Very Large Data Bases, p.38-49, August 24-27, 1998
Silvana Castano , Valeria De Antonellis , Sabrina De Capitani di Vimercati, Global Viewing of Heterogeneous Data Sources, IEEE Transactions on Knowledge and Data Engineering, v.13 n.2, p.277-297, March 2001
Paolo Atzeni , Giansalvatore Mecca, Cut and paste, Proceedings of the sixteenth ACM SIGACT-SIGMOD-SIGART symposium on Principles of database systems, p.144-153, May 11-15, 1997, Tucson, Arizona, United States
Mengchi Liu , Tok Wang Ling, Towards semistructured data integration, Web-enabled systems integration: practices and challenges, Idea Group Publishing, Hershey, PA,
Dan Suciu, Semistructured data and XML, Information organization and databases: foundations of data organization, Kluwer Academic Publishers, Norwell, MA, 2000
David Konopnicki , Oded Shmueli, Database-inspired search, Proceedings of the 31st international conference on Very large data bases, August 30-September 02, 2005, Trondheim, Norway
Victor Vianu, A Web odyssey: from codd to XML, ACM SIGMOD Record, v.32 n.2, June
Jeffrey Hsu, Critical and future trends in data mining: a review of key data mining technologies/applications, Data mining: opportunities and challenges, Idea Group Publishing, Hershey, PA,
Victor Vianu, A Web Odyssey: from Codd to XML, Proceedings of the twentieth ACM SIGMOD-SIGACT-SIGART symposium on Principles of database systems, p.1-15, May 2001, Santa Barbara, California, United States
Raymond Kosala , Hendrik Blockeel, Web mining research: a survey, ACM SIGKDD Explorations Newsletter, v.2 n.1, p.1-15, June, 2000 | hypermedia;web searching;query cost;query locality;WebSQL query language;large heterogeneous distributed document collection;hypertext links;formal semantics;information retrieval requests;textual retrieval;virtual graph model;document network;topology-based queries;multiple index servers;calculus;World Wide Web querying |
383204 | The Strobe algorithms for multi-source warehouse consistency. | A warehouse is a data repository containing integrated information for efficient querying and analysis. Maintaining the consistency of warehouse data is challenging, especially if the data sources are autonomous and views of the data at the warehouse span multiple sources. Transactions containing multiple updates at one or more sources, e.g., batch updates, complicate the consistency problem. In this paper we identify and discuss three fundamental transaction processing scenarios for data warehousing. We define four levels of consistency for warehouse data and present a new family of algorithms, the Strobe family, that maintain consistency as the warehouse is updated, under the various warehousing scenarios. All of the algorithms are incremental and can handle a continuous and overlapping stream of updates from the sources. Our implementation shows that the algorithms are practical and realistic choices for a wide variety of update scenarios. | Introduction
A data warehouse is a repository of integrated information
from distributed, autonomous, and possibly
heterogeneous, sources. Figure 1 illustrates the
basic warehouse architecture. At each source, a monitor
collects the data of interest and sends it to the
warehouse. The monitors are responsible for identifying
changes in the source data, and notifying the
warehouse. At the warehouse, the integrator receives
the source data, performs any necessary data integration
or translation, adds any extra desired informa-
tion, such as timestamps for historical analysis, and
tells the warehouse to store the data. In effect, the
warehouse caches a materialized view of the source
data[13]. The data is then readily available to user
applications for querying and analysis.
Most current commercial warehousing systems
(e.g., Prism, Redbrick) focus on storing the data for
efficient access, and on providing extensive querying
facilities at the warehouse.
(This work was partially supported by Rome Laboratories under Air Force Contract F30602-94-C-0237 and by an equipment grant from Digital Equipment Corporation.)
[Figure: source Monitors feed data to the Integrator of the Data Warehouse, which serves User Applications.]
Figure 1: Data warehouse architecture
They ignore the complementary problem of consistently integrating new data,
assuming that this happens "off line" while queries are
not being run. Of course, they are discovering that
many customers have international operations in multiple
time zones, so there is no convenient down time,
a "night" or "weekend" when all of the recent updates
can be batched and processed together, and materialized
views can be recomputed. Furthermore, as more
and more updates occur, the down time window may
no longer be sufficient to process all of the updates [7].
Thus, there is substantial interest in warehouses
that can absorb incoming updates and incrementally
modify the materialized views at the warehouse, without
halting query processing. In this paper we focus
on this process and on how to ensure that queries see
consistent data. The crux of the problem is that each
arriving update may need to be integrated with data
from other sources before being stored at the ware-
house. During this processing, more updates may arrive
at the warehouse, causing the warehouse to become
inconsistent.
The following example illustrates some of the inconsistencies
that may arise. For simplicity, we assume
that both the warehouse and the sources use
the relational model, and that the materialized view
kept at the warehouse contains the key for each participating
relation. In this example, each update is a
separate transaction at one of the sources. We also
assume that the integrator is tightly coupled with the
warehouse. Therefore, although the view maintenance
computation is done by the integrator, and the actual
view operation is done by the warehouse, we use the
term warehouse (WH) to denote the combination of
the integrator and the warehouse in Figure 1.
Example 1: View maintenance anomaly over
multiple sources
view V be defined as
V = r1 ⋈ r2 ⋈ r3, where r1, r2 and r3
are three relations residing on sources x, y
and z, respectively. Initially, the relations are
r1 = {[1, 2]}, r2 = ∅, and r3 = {[3, 4]}.
The materialized view at the warehouse is
MV = ∅. We consider two source updates: U1 = insert(r2, [2, 3])
and U2 = delete(r1, [1, 2]). Using a
conventional incremental view maintenance algorithm
[2], the following events may occur at the WH.
1. The WH receives U1 = insert(r2, [2, 3]) from
source y. It generates query Q1 = r1 ⋈ [2, 3] ⋈
r3. To evaluate Q1, the WH first sends query
Q1^1 = r1 ⋈ [2, 3] to source x.
2. The WH receives A1^1 = [1, 2, 3] from source x.
Query Q1^2 = [1, 2, 3] ⋈ r3 is sent to source z for
evaluation.
3. The WH receives U2 = delete(r1, [1, 2]) from
source x. Since the current view is empty, no
action is taken for this deletion.
4. The WH receives A1^2 = [1, 2, 3, 4] from source z,
which is the final answer for Q1. Since there are
no pending queries or updates, the answer is inserted
into MV and MV = {[1, 2, 3, 4]}. This final
view is incorrect. 2
In this example, the interleaving of query Q 1 with
updates arriving from the sources causes the incorrect
view. Note that even if the warehouse view is
updated by completely recomputing the view - an
approach taken by several commercial systems, such
as Bull and Pyramid - the warehouse is subject to the
same anomalies caused by the interleaving of updates
with recomputation.
There are two straightforward ways to avoid this
type of inconsistency, but we will argue that in general,
neither one is desirable. The first way is to store copies
of all relations at the warehouse. In our example,
could then be atomically evaluated at the warehouse,
causing tuple [1; 2; 3; 4] to be added to MV . When
arrives, the tuple is deleted from MV , yielding a
correct final warehouse state. While this solution may
be adequate for some applications, we believe it has
several disadvantages. First, the storage requirement
at the warehouse may be very high. For instance,
suppose that r 3 contains data on companies, e.g., their
name, stock price, and profit history. If we copy all
of r 3 at the warehouse, we need to keep tuples for all
companies that exist anywhere in the world, not just
those we are currently interested in tracking. (If we
do not keep data for all companies, in the future we
may not be able to answer a query that refers to a new
company, or a company we did not previously track,
and be unable to atomically update the warehouse.)
Second, the warehouse must integrate updates for all
of the source data, not just the data of interest. In our
company example, we would need to update the stock
prices of all companies, as the prices change. This can
represent a very high update load [4], much of it to
data we may never need. Third, due to cost, copyright,
or security, storing copies of all of the source data may
not be feasible. For example, the source access charges
may be proportional to the amount of data we track
at the warehouse.
The second straightforward way to avoid inconsistencies
is to run each update and all of the actions
needed to incrementally integrate it into the warehouse
as a distributed transaction spanning the warehouse
and all the sources involved. In our example,
runs as part of a distributed transaction, then
it can read a consistent snapshot and properly update
the warehouse. However, distributed transactions require
a global concurrency control mechanism spanning
all the sources, which may not exist. And even if
it does, the sources may be unwilling to tolerate the
delays that come with global concurrency control.
Instead, our approach is to make queries appear
atomic by processing them intelligently at the warehouse
(and without requiring warehouse copies of all
relations). In our example, the warehouse notes that
deletion U 2 arrived at the warehouse while it was processing
query Q 1 . Therefore, answer A 1 may contain
some tuples that reflect the deleted r 1 tuple. Indeed,
A 1 contains [1; 2; 3; 4], which should not exist after
[1, 2] was deleted from r1. Thus, the warehouse removes
this tuple, leaving an empty answer. The materialized
view is then left empty, which is the correct
state after both updates take place. The above example
gives the "flavor" of our solution; we will present
more details as we explain our algorithms.
Note that the intelligent processing of updates at
the warehouse depends on how and if sources run
transactions. If some sources run transactions, then
we need to treat their updates, whether they came
from one source or multiple sources, as atomic units.
Combining updates into atomic warehouse actions introduces
additional complexities that will be handled
by our algorithms. Since we do not wish to assume a
particular transaction scenario, in this paper we cover
the three main possibilities: sources run no transac-
tions, some sources run local (but not global) transac-
tions, and some sources run global transactions.
Although we are fairly broad in the transaction scenarios
we consider, we do make two key simplifying
assumptions: we assume that warehouse views are
defined by relational project, select, join (PSJ) op-
erations, and we assume that these views include the
keys of all of the relations involved. We believe that
PSJ views are the most common and therefore, it is a
good subproblem on which to focus initially. We believe
that requiring keys is a reasonable assumption,
since keys make it easier for the applications to interpret
and handle the warehouse data. Furthermore, if
a user-specified view does not contain sufficient key
information, the warehouse can simply add the key
attributes to the view definition. (We have developed
view maintenance algorithms for the case where some
key data is not present, but they are not discussed
here. They are substantially more complex than the
ones presented here - another reason for including
keys in the view.)
In our previous work [17] we considered a very restricted
scenario: all warehouse data arrived from a
single source. Even in that simple case, there are consistency
problems, and we developed algorithms for
solving them. However, in the more realistic multi-source
scenario, it becomes significantly more complex
to maintain consistent views. (For instance, the ECA
and ECA-Key algorithms of [17] do not provide consistency
in Example 1; they lead to the same incorrect
execution shown.) In particular, the complexities not
covered in our earlier work are as follows.
ffl An update from one source may need to be integrated
with data from several other sources.
However, gathering the data corresponding to one
view update is not an atomic operation. No matter
how fast the warehouse generates the appropriate
query and sends it to the sources, receiving
the answer is not atomic, because parts of it come
from different, autonomous sources. Nonetheless,
the view should be updated as if all of the sources
were queried atomically.
ffl Individual sources may batch several updates into
a single, source-local, transaction. For example,
the warehouse may receive an entire day's updates
in one transaction. These updates - after
integration with data from other sources -
should appear atomically at the warehouse. Fur-
thermore, updates from several sources may together
comprise one, global, transaction, which
again must be handled atomically.
These complexities lead to substantially different
solutions. In particular, the main contributions of this
paper are:
1. We define and discuss all of the above update and
transaction scenarios, which require increasingly
complex algorithms.
2. We identify four levels of consistency for warehouse
views defined on multiple sources, in increasing
order of difficulty to guarantee. Note
that as concurrent query and update processing
at warehouses becomes more common, and as
warehouse applications grow beyond "statistical
analysis," there will be more concern from users
about the consistency of the data they are accessing
[7]. Thus, we believe it is important to
offer customers a variety of consistency options
and ways to enforce them.
3. We develop the Strobe family of algorithms to
provide consistency for each of the transaction
scenarios. We have implemented each of the
Strobe algorithms in our warehouse prototype
[16], demonstrating that the algorithms are practical
and efficient.
4. We map out the space of warehouse maintenance
algorithms (Figure 2). The algorithms we present
in this paper provide a wide number of options for
this consistency and distribution space.
The remainder of the paper is organized as follows.
We discuss related work in Section 2. In Section 3,
we define the three transaction scenarios and specify
our assumptions about the order of messages and
events in a warehouse environment. In Section 4 we
define four levels of consistency and correctness, and
discuss when each might be desirable. Then we describe
our new algorithms in Section 5 and apply the
algorithms to examples. We also demonstrate the levels
of consistency that each algorithm achieves for the
different transaction scenarios. In Section 6, we adapt
the algorithms so that the warehouse can reflect every
update individually, and show that the algorithms
will terminate. We conclude in Section 7 by outlining
optimizations to our algorithms and our future work.
Related research
The work we describe in this paper is closely related
to research in three fields: data warehousing, data consistency
and incremental maintenance of materialized
views. We discuss each in turn.
Data warehouses are large repositories for analytical
data, and have recently generated tremendous
interest in industry. A general description of the
data warehousing idea may be found in [11]. Companies
such as Red Brick and Prism have built specialized
data warehousing software, while almost all other
database vendors, such as Sybase, Oracle and IBM,
are targeting their existing products to data warehousing
applications.
A warehouse holds a copy of the source data, so essentially
we have a distributed database system with
replicated data. However, because of the autonomy of
the sources, traditional concurrency mechanisms are
often not applicable [3]. A variety of concurrency control
schemes have been suggested over the years for
such environments. They either provide weaker notions
of consistency, e.g., [6], or exploit the semantics
of applications. The algorithms we present in this paper
exploit the semantics of materialized view maintenance
to obtain consistency without traditional distributed
concurrency control. Furthermore, they offer
a variety of consistency levels that are useful in the
context of warehousing.
Many incremental view maintenance algorithms
have been developed for centralized database systems,
e.g., [2, 9, 5] and a good overview of materialized views
and their maintenance can be found in [8]. Most of
these solutions assume that a single system controls
all of the base relations and understands the views and
hence can intelligently monitor activities and compute
all of the information that is needed for updating the
views. As we showed in Example 1, when a centralized
algorithm is applied to the warehouse, the warehouse
user may see inconsistent views of the source data.
These inconsistent views arise regardless of whether
the centralized algorithm computes changes using the
old base relations, as in [2], or using the new base re-
lations, as in [5]. The crux of the warehouse problem
is that the exact state of the base relations (old or
new) when the incremental changes are computed at
the sources is unknown, and our algorithms filter out
or add in recent modifications dynamically.
Previous distributed algorithms for view mainte-
nance, such as those in [14, 12], rely on timestamping
the updated tuples. For a warehousing environment,
sources can be legacy systems so we cannot assume
that they will help by transmitting all necessary data
or by attaching timestamps.
Hull and Zhou [10] provide a framework for supporting
distributed data integration using materialized
views. However, their approach first materializes
each base relation (or relevant portion), then computes
the view from the materialized copies; on the
other hand, we propose algorithms to maintain joined
views directly, without storing any auxiliary data. We
compare our definition of consistency with theirs in
Section 4. Another recent paper by Baralis, et al. [1]
also uses timestamps to maintain materialized views
at a warehouse. However, they assume that the warehouse
never needs to query the sources for more data,
hence circumventing all of the consistency problems
that we address.
A warehouse often processes updates (from one or
more transactions) in batch mode. Conventional algorithms
have no way to ensure that an entire transaction
is reflected in the view at the same time, or that
a batch representing an entire day (or hour, or week,
or minute) of updates is propagated to the view simul-
taneously. In this paper we present view maintenance
algorithms that address these problems.
Finally, as we mentioned in Section 1, in [17] we
showed how to provide consistency in a restricted
single-source environment. Here we study the more
general case of multiple sources and transactions that
may span sources.
3 Warehouse transaction environment
The complexity of designing consistent warehouse
algorithms is closely related to the scope of transactions
at the sources. The larger the scope of a trans-
action, the more complex the algorithm becomes. In
this section, we define three common transaction sce-
narios, in increasing order of complexity, and spell out
our assumptions about the warehouse environment.
In particular, we address the ordering of messages between
sources and the warehouse, and define a source
event. We use the relational model for simplicity; each
update therefore consists of a single tuple action such
as inserting or deleting a tuple.
3.1 Update transaction scenarios
The three transaction scenarios we consider in this
paper are:
1. Single update transactions. Single update transactions
are the simplest; each update comprises
its own transaction and is reported to the warehouse
separately. Actions of legacy systems that
do not have transactions fall in this category: as
each change is detected by the source monitor, it
is sent to the warehouse as a single update trans-action
2. Source-local transactions. A source-local trans-action
is a sequence of actions performed at the
same source that together comprise one transac-
tion. The goal is therefore to reflect all of these
actions atomically at the warehouse. We assume
that each source has a local serialization schedule
of all of its source-local transactions. Single up-date
transactions are special cases of source-local
transactions. Database sources, for example, are
likely to have source-local transactions. We also
consider batches of updates that are reported together
to be a single, source-local, transaction.
3. Global transactions. In this scenario there are
global transactions that contain actions performed
at multiple sources. We assume that there
is a global serialization order of the global trans-
actions. (If there is not, it does not matter how
we order the transactions at the warehouse.) The
goal is therefore to reflect the global transactions
atomically at the warehouse. Depending on how
much information the warehouse receives about
the transaction, this goal is more or less achiev-
able. For example, unless there are global trans-action
identifiers, or the entire transaction is reported
by a single source, the warehouse cannot
tell which source-local transactions together comprise
a global transaction.
For each transaction scenario, we make slightly different
assumptions about the content of messages.
3.2 Messages
There are two types of messages from the sources to
the warehouse: reporting an update and returning the
answer to a query. There is only one type of message
in the other direction; the warehouse may send queries
to the sources.
We assume that each single update transaction and
source-local transaction is reported in one message, at
the time that the transaction commits. For exam-
ple, a relational database source might trigger sending
a message on transaction commit [15]. However,
batching multiple transactions into the same message
does not affect the algorithms of Section 5. For global
transactions, updates can be delivered in a variety of
ways. For example, the site that commits the transaction
may collect all of the updates and send them to
the warehouse at the commit point. As an alternative,
each site may send its own updates, once it knows the
global transaction has committed. In Section 5.4 we
discuss the implications of the different schemes.
3.3 Event Ordering
Each source action, plus the resulting message sent
to the warehouse, is considered one event. For ex-
ample, evaluating a query at a source and sending
the answer back to the warehouse is considered one
event. We assume events are atomic, and are ordered
by the sequence of the corresponding actions. (In [18]
we discuss what to do when this assumption does not
hold.) We also assume that any two messages sent
from one source to the warehouse are delivered in the
same order as they were sent. (This can be enforced
by numbering messages.) We place no restrictions on
the order in which messages sent from different sources
to the warehouse are delivered.
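One standard way to enforce the per-source ordering assumption is sequence numbering with a reordering buffer at the warehouse, as in the sketch below; the class and its types are illustrative and not part of any warehouse prototype described here.

  import java.util.HashMap;
  import java.util.Map;
  import java.util.TreeMap;
  import java.util.function.Consumer;

  public class FifoChannel {
      // Per-source buffer: a message carries a sequence number and is released
      // only when all messages with smaller numbers from that source have arrived.
      private final Map<String, Long> nextExpected = new HashMap<>();
      private final Map<String, TreeMap<Long, Object>> buffered = new HashMap<>();

      void receive(String sourceId, long seq, Object msg, Consumer<Object> deliver) {
          nextExpected.putIfAbsent(sourceId, 0L);
          TreeMap<Long, Object> buf =
                  buffered.computeIfAbsent(sourceId, k -> new TreeMap<>());
          buf.put(seq, msg);
          // Deliver any consecutive run starting at the expected sequence number.
          while (!buf.isEmpty() && buf.firstKey().equals(nextExpected.get(sourceId))) {
              deliver.accept(buf.pollFirstEntry().getValue());
              nextExpected.merge(sourceId, 1L, Long::sum);
          }
      }
  }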
3.4 Discussion
In practice, the update transaction scenario seen at
the warehouse depends primarily on the capabilities
of the underlying sources. For example, it is currently
common practice to report updates from a source pe-
riodically. Instead of reporting each change, a monitor
might send all of the changes that occurred over the
last hour or day to the warehouse, as a single batch
transaction. Periodic snapshots may be the only way
for the monitor of an unsophisticated legacy source to
report changes, or a monitor might choose to report
updates lazily when the warehouse does not need to
be kept strictly up to date.
In general, smarter monitors (those which help to
group or classify updates or those which coordinate
global transactions) save the warehouse processing
and may enable the warehouse to achieve a higher level
of consistency, as we will see in Section 5.4. We believe
that today most warehouse transaction environments
will support either single-update transactions or
source-local transactions (or both), but will not have
any communication or coordination between sources.
Still, for completeness, we believe it is important to
understand the global transaction scenario, which may
be more likely in the future.
4 Correctness and consistency
Before describing our algorithms, we first define
what it means for an algorithm to be correct in an environment
where activity at the sources is decoupled
from the view at the warehouse. In particular, we are
concerned with what it means for a warehouse view to
be consistent with the original source data. Since each
source update may involve fetching data from multiple
sources in order to update the warehouse view, we
first define states at the sources and at the warehouse.
4.1 Source and warehouse states
Each warehouse state ws represents the contents of
the warehouse. The warehouse state changes whenever
the view is updated. Let the warehouse states be ws0, ws1, . . . , wsf.
(We assume there is a final
warehouse state after all activity ceases.) We consider
one view V at the warehouse, which is defined over a
set of base relations at one or more sources. The view
at state ws j is V (ws j ).
Let there be u sources, where each source has a
unique id x (1 ≤ x ≤ u). A source state ss is a vector
that contains u elements and represents the (visible)
state of each source at a given instant in time. The
x th component, ss[x], is the state of source x. Source
states represent the contents of source base relations.
We assume that source updates are executed in a serializable
fashion across all sources, i.e., there is some
serial schedule S that represents execution of the up-
dates. (However, what constitutes a transaction varies
according to the scenario.) We assume that ss q is the
final state after S completes. V (ss) is the result of
computing the view V over the source state ss. That
is, for each relation r at source x that contributes to
the view, V (ss) is evaluated over r at the state ss[x].
Each source transaction is guaranteed to bring the
sources from one consistent state to another. For any
serial schedule R, we use result(R) to refer to the
source state vector that results from its execution.
4.2 Levels of consistency
Assume that the view at the warehouse is initially
synchronized with the source data, i.e., V(ss0) = V(ws0).
We define four levels of consistency for warehouse
views. Each level subsumes all prior levels.
These definitions are a generalization of the ones in
[17] for a multi-source warehouse environment.
1. Convergence: For all finite executions,
V(wsf) = V(ssq). That is, after the last update
and after all activity has ceased, the view is consistent
with the source data.
2. Weak consistency: Convergence holds and, for
all ws i , there exists a source state vector ss j
such that V(wsi) = V(ssj); furthermore, for
each source x, there exists a serial schedule
R^x of (a subset of all) transactions such
that ssj[x] = result(R^x)[x]. That is, each warehouse
state reflects a valid state at each source,
and there is a locally serializable schedule at each
source that achieves that state. However, each
source may reflect a different serializable schedule
and the warehouse may reflect a different set
of committed transactions at each source.
3. Strong consistency: Convergence holds and
there exists a serial schedule R and a mapping
m, from warehouse states into source states, with
the following properties: (i) Serial schedule R is
equivalent to the actual execution of transactions
at the sources. It defines a sequence of source
states ss1, ss2, . . . , ssq, where ssj reflects the first j
transactions (i.e., ssj = result(Rj), where Rj is
the R prefix with j transactions). (ii) For all wsi,
V(wsi) = V(m(wsi)). (iii) If wsi < wsk, then m(wsi) ≤ m(wsk). That
is, each warehouse state reflects a set of valid
source states, reflecting the same globally serializable
schedule, and the order of the warehouse
states matches the order of source actions.
4. Completeness: In addition to strong consis-
tency, for every ss j defined by R, there exists a
wsi such that m(wsi) = ssj. That is, there is a
complete order-preserving mapping between the
states of the view and the states of the sources.
Hull and Zhou's definition of consistency for replicated
data [10] is similar to our strong consistency,
except that they also require global timestamps across
sources, which we do not. Also, our strong consistency
is less restrictive than theirs in that we do not
require any fixed order between two non-conflicting
actions. Our definition is compatible with standard
serializability theory. In fact, our consistency definition
can be rephrased in terms of serializability theory,
by treating the warehouse view evaluation as a read
only transaction at the sources [18].
Although completeness is a nice property since it
states that the view "tracks" the base data exactly,
we believe it may be too strong a requirement and un-necessary
in most practical warehousing scenarios. In
some cases, convergence may be sufficient, i.e., knowing
that "eventually" the warehouse will have a valid
state, even if it passes through intermediate states that
are invalid. In most cases, strong consistency is desir-
able, i.e., knowing that every warehouse state is valid
with respect to a source state. In the next section, we
show that an algorithm may achieve different levels
of consistency depending on the update transaction
scenario to which it is applied.
Algorithms
In this section, we present the Strobe family of
algorithms. The Strobe algorithms are named after
strobe lights, because they periodically "freeze" the
constantly changing sources into a consistent view
at the warehouse. Each algorithm was designed
to achieve a specific level of correctness for one of
the three transaction processing scenarios. We discuss
the algorithms in increasing level of complex-
ity: the Strobe algorithm, which is the simplest,
achieves strong consistency for single update trans-
actions. The Transaction-Strobe algorithm achieves
strong consistency for source-local transactions, and
the Global-Strobe algorithm achieves strong consistency
for global transactions. In Section 6 we present
modifications to these algorithms that attain completeness
for their respective transaction scenarios.
5.1 Terminology
First, we introduce the terminology that we use to
describe the algorithms.
A view V at the warehouse over n relations r1, . . . , rn
is defined by a Project-Select-Join (PSJ) expression of the form V = Π_proj(σ_cond(r1 ⋈ r2 ⋈ · · · ⋈ rn)).
Any two relations may reside at the same or at different
sources, and any relational algebra expression
constructed with project, select, and join operations
can be transformed into an equivalent expression of
this form. Moreover, although we describe our algorithms
for PSJ views, our ideas can be used to adapt
any existing centralized view maintenance algorithm
to a warehousing environment.
As we mentioned in the introduction, we assume
that the projection list contains the key attributes for
each relation. We expect most applications to require
keys anyway, and if not, they can be added to the view
by the warehouse.
When a view is defined over multiple sources, an up-date
at one source is likely to initiate a multi-source
query Q at the warehouse. Since we cannot assume
that the sources will cooperate to answer Q, the warehouse
must therefore decide where to send the query
first.
Definition: Suppose we are given a query Q that
needs to be evaluated. The function next source(Q)
returns the pair (x, Qi), where x is the next source
to contact, and Q i is the portion of Q that can be
evaluated at x. If Q does not need to be evaluated
further, then x is nil. A i is the answer received at the
warehouse in response to subquery Q i . Query QhA i i
denotes the remaining query after answer A i has been
incorporated into query Q. 2
For PSJ queries, next source will always choose a
source containing a relation that can be joined with
the known part of the query, rather than requiring the
source to ship the entire base relation to the warehouse
(which may not even be possible). As we will see later,
queries generated by an algorithm can also be unions
of PSJ expressions. For such queries, next source
simply selects one of the expressions for evaluation.
An improvement would be to find common subexpressions.
Example 2: Using next source
Let relations r1, r2, r3 reside at sources x, y, z, respectively,
let V = r1 ⋈ r2 ⋈ r3, and let U1
be an update to relation r2 received at the warehouse.
Therefore, query Q = r1 ⋈ U1 ⋈ r3. Initially,
next source(Q) = (x, Q1) with Q1 = r1 ⋈ U1. When the
warehouse receives answer A1 from x, Q⟨A1⟩ = A1 ⋈
r3. Then next source(Q⟨A1⟩) = (z, Q2) with Q2 = A1 ⋈ r3,
since there is only one relation left to join in the query.
A 2 is the final answer. 2
In the above example, the query was sent to source
x first. Alternatively, next source could have first returned (z, U1 ⋈ r3).
When there is more than one possible relation to join
with the intermediate result, next source may use
statistics (such as those used by query optimizers) to
decide which part of the query to evaluate next.
We are now ready to define the procedure
source evaluate, which loops to compute the next
portion of query Q until the final result answer A is re-
ceived. In the procedure, WQ is the "working query"
portion of query Q, i.e., the part of Q that has not yet
been evaluated.
Procedure source evaluate(Q)
Let WQ = Q and (x, Qi) = next source(WQ).
While x is not nil do
- Send Qi to source x;
- When x returns Ai, let A = Ai, WQ = WQ⟨Ai⟩, and (x, Qi) = next source(WQ).
Return the final answer A.
End Procedure
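A compact Java rendering of this loop is sketched below; Query, Source, and NextStep are hypothetical stand-ins for the warehouse's query machinery, with nextSource and incorporate playing the roles of next source(WQ) and WQ⟨Ai⟩.

  import java.util.List;

  public class SourceEvaluate {
      interface Query {
          NextStep nextSource();                  // (x, Qi): next source and subquery
          Query incorporate(List<Object> answer); // WQ<Ai>: plug an answer into the query
      }
      interface Source { List<Object> evaluate(Query subquery); }
      record NextStep(Source source, Query subquery) {}

      // Loop until next_source reports that nothing remains to be evaluated;
      // the last answer received is the answer to the whole query.
      static List<Object> sourceEvaluate(Query q) {
          Query wq = q;
          List<Object> answer = List.of();
          NextStep step = wq.nextSource();
          while (step.source() != null) {
              answer = step.source().evaluate(step.subquery());
              wq = wq.incorporate(answer);
              step = wq.nextSource();
          }
          return answer;
      }
  }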
The procedure source evaluate(Q) may return an
incorrect answer when there are concurrent transactions
at the sources that interfere with the query eval-
uation. For example, in example 1, we saw that a
delete that occurs at a source after a subquery has
been evaluated there, but before the final answer is
computed, may be skipped in the final query result.
More subtle problems result when two subqueries of
the same query are sent to the same source for evaluation
at different times (to join with different relations)
and use different source states, or when two subqueries
are evaluated at two different sources in states that are
inconsistent with each other. The key idea behind the
Strobe algorithms is to keep track of the updates that
occur during query evaluation, and to later compen-
sate. We introduce the Strobe family with the basic
algorithm.
For simplicity, here we only consider insertions and
deletions in our algorithms. Conceptually, modifications
of tuples (updates sent to the warehouse) can be
treated at the warehouse simply as a deletion of the
old tuple followed by an insertion of the new tuple.
However, for consistency and performance, the delete
and the insert should be handled "at the same time."
Our algorithms can be easily extended for this type of
processing, but we do not do it here. Further discussion
of how to treat a modification as an insert and a
delete may be found in [8].
5.2 Strobe
The Strobe algorithm processes updates as they ar-
rive, sending queries to the sources when necessary.
However, the updates are not performed immediately
on the materialized view MV ; instead, we generate a
list of actions AL to be performed on the view. We
update MV only when we are sure that applying all of
the actions in AL (as a single transaction at the ware-
house) will bring the view to a consistent state. This
occurs when there are no outstanding queries and all
received updates have been processed.
When the warehouse receives a deletion, it generates
a delete action for the corresponding tuples (with
matching key values) in MV . When an insert arrives,
the warehouse may need to generate and process a
query, using procedure source evaluate(). While a query Q
is being answered by the sources, updates may
arrive at the warehouse, and the answer obtained may
have missed their effects. To compensate, we keep
a set pending(Q) of the updates that occur while Q
is being processed. After Q's answer is fully compen-
sated, an insert action for MV is generated and placed
on the action list AL.
Definition: The unanswered query set UQS is the
set of all queries that the warehouse has sent to some
source but for which it has not yet received an answer. 2
Definition: The operation key delete(R, Ui) deletes
from relation R the tuples whose key attributes have
the same values as U i . 2
Definition: V⟨U⟩ denotes the view expression V with
the tuple U substituted for U 's relation. 2
Algorithm 1: Strobe algorithm
At each source:
After executing update Ui, send Ui to the warehouse.
Upon receipt of query Q i , compute the answer
A i over ss[x] (the current source state), and send
A i to the warehouse.
At the warehouse:
Initially, AL is set to the empty list ⟨⟩.
Upon receipt of update Ui:
- If Ui is a deletion:
  ∀ Qj ∈ UQS, add Ui to pending(Qj);
  Add key delete(MV, Ui) to AL.
- If Ui is an insertion:
  Let Qi = V⟨Ui⟩ and set pending(Qi) = ∅;
  Let Ai = source evaluate(Qi);
  ∀ Uj ∈ pending(Qi), apply key delete(Ai, Uj);
  Add insert(MV, Ai) to AL.
When UQS = ∅, apply AL to MV as a single
transaction, without adding duplicate tuples to
MV. Reset AL = ⟨⟩.
End Algorithm 1
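The warehouse side of the algorithm can be sketched in Java as below. This is a simplified illustration, not the Strobe implementation: Update, MaterializedView, and Sources are hypothetical types, tuple equality stands in for key-based matching, the answer list is assumed mutable, and query evaluation is assumed to complete asynchronously through onAnswer.

  import java.util.ArrayList;
  import java.util.HashMap;
  import java.util.List;
  import java.util.Map;

  public class StrobeWarehouse {
      record Update(boolean isDeletion, Object tuple) {}
      interface MaterializedView {
          void keyDelete(Object tuple);
          void insertNoDuplicates(List<Object> tuples);
      }
      interface Sources { void sendQuery(Update insertion); } // evaluates V<Ui>; replies via onAnswer

      private final List<Runnable> actionList = new ArrayList<>();        // AL
      private final Map<Update, List<Update>> pending = new HashMap<>();  // pending(Q) for each Q in UQS
      private final MaterializedView mv;
      private final Sources sources;

      StrobeWarehouse(MaterializedView mv, Sources sources) { this.mv = mv; this.sources = sources; }

      // Upon receipt of update Ui from a source.
      void onUpdate(Update u) {
          if (u.isDeletion()) {
              pending.values().forEach(p -> p.add(u));      // note Ui for every unanswered query
              actionList.add(() -> mv.keyDelete(u.tuple()));
          } else {
              pending.put(u, new ArrayList<>());            // query V<Ui> joins UQS
              sources.sendQuery(u);                         // answered later through onAnswer
          }
          maybeApply();
      }

      // Upon receipt of the fully evaluated answer Ai for the query generated by insertion Ui.
      void onAnswer(Update insertion, List<Object> answer) {
          for (Update d : pending.remove(insertion)) {
              answer.removeIf(t -> t.equals(d.tuple()));    // compensate: key delete(Ai, Uj), simplified
          }
          actionList.add(() -> mv.insertNoDuplicates(answer));
          maybeApply();
      }

      private void maybeApply() {
          if (pending.isEmpty()) {                          // UQS is empty
              actionList.forEach(Runnable::run);            // apply AL to MV as one transaction
              actionList.clear();
          }
      }
  }

The deferred actions in actionList mirror AL; they are applied only when pending (and hence UQS) is empty.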
The following example applies the Strobe algorithm
to the warehouse scenario in Example 1 in the intro-
duction. Specifically, it shows why a deletion needs
to be applied to the answer of a previous query, when
the previous query's answer arrives at the warehouse
later than the deletion.
Example 3: Strobe avoids deletion anomaly
As in example 1, let view V be defined as
V = r 1 ⋈ r 2 ⋈ r 3 , where r 1 , r 2 and r 3 are three relations residing on
sources x, y and z, respectively. Initially, the relations
are r 1 = {[1, 2]}, r 2 = ∅, and r 3 = {[3, 4]}.
The materialized view MV = ∅. We again consider
two source updates: U 1 = insert(r 2 , [2, 3]) and
U 2 = delete(r 1 , [1, 2]), and apply the Strobe algorithm.
1. The WH receives U 1 = insert(r 2 , [2, 3])
from source y. It generates query Q 1 = r 1 ⋈
[2, 3] ⋈ r 3 . To evaluate Q 1 , the WH first sends
query Q 1 1 = r 1 ⋈ [2, 3] to source x.
2. The WH receives A 1 1 = [1, 2, 3] from source x.
Query Q 2 1 = [1, 2, 3] ⋈ r 3 is sent to source z for
evaluation.
3. The WH receives U 2 = delete(r 1 , [1, 2]) from
source x. It first adds U 2 to pending(Q 1 ) and
then adds key delete(MV, U 2 ) to AL. The resulting
AL = ⟨key delete(MV, U 2 )⟩.
4. The WH receives A 2 1 = [1, 2, 3, 4] from source z.
Since pending(Q 1 ) is not empty, the WH applies
key delete(A 1 , U 2 ), and the resulting answer A 1 =
∅. Therefore, nothing is added to AL. There
are no pending queries, so the WH updates MV
by applying AL = ⟨key delete(MV, U 2 )⟩. The
resulting MV = ∅. The final view is correct and
strongly consistent with the source relations. 2
This example demonstrates how Strobe avoids the
anomaly that caused both ECA-key and conventional
view maintenance algorithms to be incorrect: by remembering
the delete until the end of the query,
Strobe is able to correctly apply it to the query result
before updating the view MV . If the deletion U 2
were received before Q 1 1 had been sent to source x,
then A 1
would have been empty and no extra action
would have been necessary.
The Strobe algorithm provides strong consistency
for all single-update transaction environments. A correctness
proof is given in [18]. The intuition is that
each time MV is modified, updates have quiesced and
the view contents can be obtained by evaluating the
view expression at the current source states. There-
fore, although not all source states will be reflected in
the view, the view always reflects a consistent set of
source states.
5.3 Transaction-Strobe
The Transaction-Strobe (T-Strobe) algorithm
adapts the Strobe algorithm to provide strong consistency
for source-local transactions. T-Strobe collects
all of the updates performed by one transaction and
processes these updates as a single unit. Batching the
updates of a transaction not only makes it easier to
enforce consistency, but also reduces the number of
query messages that must be sent to and from the
sources.
Definition: UL(T ) is the update list of a transaction
T . UL(T ) contains the inserts and deletes performed
by T , in order. IL(T ) ⊆ UL(T ) is the insertion list of
T ; IL(T ) contains all of the insertions performed by T . 2
Definition: key(U i ) denotes the key attributes of the
inserted or deleted tuple U i . If key(U i ) = key(U j ),
then U i and U j denote the same tuple (although other
attributes may have been modified). 2
The source actions in T-Strobe are the same as in
we therefore present only the warehouse ac-
tions. First, the WH removes all pairs of insertions
and deletions such that the same tuple was first inserted
and then deleted. This removal is an optimization
that avoids sending out a query for the insertion,
only to later delete the answer. Next the WH adds
all remaining deletions to the action list AL. Finally,
the WH generates one query for all of the insertions.
As before, deletions which arrive at the WH after the
query is generated are subtracted from the query result
The following example demonstrates that the
Strobe algorithm may only achieve convergence, while
the T-Strobe algorithm guarantees strong consistency
for source-local transactions. Because the Strobe algo-
Algorithm 2: Transaction-Strobe algorithm
At the warehouse:
Initially, AL = ⟨⟩.
Upon receipt of UL(T i ) for a transaction T i :
- For each pair U j , U k ∈ UL(T i ) such that U j is
an insertion, U k is a deletion, U j precedes U k , and
key(U j ) = key(U k ): delete U j and U k
from UL(T i ).
- For every deletion U ∈ UL(T i ):
  ◦ ∀Q j ∈ UQS, add U to pending(Q j ).
  ◦ Add key delete(MV, U ) to AL.
- Let Q i = V ⟨IL(T i )⟩ (one query covering all
remaining insertions) and set pending(Q i ) = ∅.
- Let A i = source evaluate(Q i ).
- ∀U j ∈ pending(Q i ), apply key delete(A i , U j ).
- Add insert(MV, A i ) to AL.
When UQS = ∅, apply AL to MV , without
adding duplicate tuples to MV . Reset AL = ⟨⟩.
End Algorithm 2
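The preprocessing of a transaction's update list described above can be sketched in a few lines; the Update shape (with is_deletion and key fields, as in the earlier sketch) and the helper name are illustrative assumptions, not the paper's code.

def preprocess_update_list(ul):
    # T-Strobe-style preprocessing of one transaction's update list UL(T) (a sketch):
    # cancel (insert, later delete) pairs on the same key, then split the remainder
    # into deletions (which become key_delete actions on MV) and insertions
    # (which are covered by a single batched query).
    dropped = set()
    for i, u in enumerate(ul):
        if u.is_deletion or i in dropped:
            continue
        for j in range(i + 1, len(ul)):
            if j not in dropped and ul[j].is_deletion and ul[j].key == u.key:
                dropped.update((i, j))           # the insert and the delete cancel out
                break
    remaining = [u for i, u in enumerate(ul) if i not in dropped]
    deletions = [u for u in remaining if u.is_deletion]
    insertions = [u for u in remaining if not u.is_deletion]
    return deletions, insertions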
rithm does not understand transactions, it may provide
a view which corresponds to the "middle" of a
transaction at a source state. However, Strobe will
eventually provide the correct view, once the transaction
commits, and is therefore convergent.
Example 4: T-Strobe provides stronger consistency
than Strobe
Consider a simple view over one source defined as
V = r 1 . Assume attribute A is the key of relation
r 1 . Originally, the relation is r 1 = {[1, 2]};
initially MV = ([1, 2]). We consider one source
transaction T 1 that first deletes the tuple [1, 2] and
then inserts a new tuple into r 1 .
When the Strobe algorithm is applied to this scenario,
the warehouse first adds the deletion to AL.
Since there are no pending updates, AL is applied to
MV and MV is updated to ∅, which is not
consistent with r 1 either before or after T 1 . Then the
warehouse processes the insertion and updates MV
again, to the correct view.
The T-Strobe algorithm, on the other hand, only
updates MV after both updates in the transaction
have been processed. Therefore, MV is updated directly
to the correct view. 2
The T-Strobe algorithm is inherently strongly consistent
with respect to the source states defined after
each source-local transaction. 1 T-Strobe can also process
batched updates, not necessarily generated by the
same transaction, but which were sent to the warehouse
at the same time from the same source. In this
case, T-Strobe also guarantees strong consistency if
we define consistent source states to be those corresponding
to the batching points at sources. Since it
is common practice today to send updates from the
sources periodically in batches, we believe that T-Strobe
is probably the most useful algorithm. On
single-update transactions, T-Strobe reduces to the
Strobe algorithm.
1 Note incidentally that if modifications are treated as a
delete-insert pair, then T-Strobe can process the pair within a
single transaction, easily avoiding inconsistencies. However, for
performance reasons we may still want to modify T-Strobe to
handle modifications as a third type of action processed at the
warehouse. As stated earlier, we do not describe this straightforward
extension here.
5.4 Global-strobe
While the T-Strobe algorithm is strongly consistent
for source-local transactions, it is only weakly consistent
if global transactions are present. In [18] we
present an example that illustrates this and develop a
new algorithm, Global-Strobe (G-Strobe), that guarantees
strong consistency for global transactions. G-
Strobe is the same as T-Strobe except that it only
updates MV (with the actions in AL) when the following
three conditions have all been met (T-Strobe
only requires condition 1). Let TT be the set of transaction
identifiers that the warehouse has received since
it last updated MV .
1. UQS = ∅;
2. For each transaction T i in TT that depends (in
the concurrency control sense) on another transaction
T j , T j is also in TT ; and
3. All of the updates of the transactions in TT have
been received and processed.
Due to space limitations, we do not present G-
Strobe here.
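A minimal sketch of the extra bookkeeping, assuming the warehouse tracks the set TT of transaction identifiers received since the last refresh, a depends_on map (in the concurrency-control sense), and the set of transactions whose updates have all arrived; the function and parameter names are ours, not the paper's.

def g_strobe_ready(uqs, tt, depends_on, fully_received):
    # Return True when G-Strobe may apply AL to MV (a sketch of the three conditions).
    if uqs:                                      # condition 1: UQS must be empty
        return False
    for t in tt:                                 # condition 2: TT closed under dependency
        if not depends_on.get(t, set()) <= tt:
            return False
    return tt <= fully_received                  # condition 3: all their updates received/processed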
6 Completeness and termination of the
algorithms
A problem with Strobe, T-Strobe, and G-Strobe is
that if there are continuous source updates, the algorithms
may not reach a quiescent state where UQS is
empty and the materialized view MV can be updated.
To address this problem, in this section we present an
algorithm, Complete Strobe (C-Strobe) that can up-date
MV after any source update. For example, C-
strobe can propagate updates to MV after a particular
batch of updates has been received, or after some long
period of time has gone by without a natural quiescent
point. For simplicity, we will describe C-strobe
enforcing an update to MV after each update; in this
case, C-strobe achieves completeness. The extension
to update MV after an arbitrary number of updates
is straightforward and enforces strong consistency.
To force an update to MV after update U i arrives
at the warehouse, we need to compute the resulting
view. However, other concurrent updates at the
sources complicate the problem. In particular, consider
the case where U i is an insertion. To compute
the next MV state, the warehouse sends a query Q i
to the sources. By the time the answer A i arrives, the
warehouse may have received (but not processed) updates
U i+1 , . . . , U j . The answer A i may reflect the effects of
these later updates, so before it can use A i to update
MV , the warehouse must "subtract out" the effects of
later updates from A i , or else it will not get a consistent
state. If one of the later updates, say U j , is an
insert, then it can just remove the corresponding tuples
from A i . However, if U j is a delete, the warehouse
may need to add tuples to A i , but to compute these
missing tuples, it must send additional queries to the
sources! When the answers to these additional queries
arrive at the warehouse, they may also have to be adjusted
for updates they saw but which should not be
reflected in MV . Fortunately, as we show below, the
process does converge, and eventually the warehouse
is able to compute the consistent MV state that follows
U i . After it updates MV , the warehouse then
processes U i+1 in the same fashion.
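Abstractly, the compensation works on one answer at a time; the sketch below (with illustrative names, and ignoring C-Strobe's refinement that skips an insertion already preceded by a deletion of the same tuple) shows the two directions of adjustment.

def compensate(answer, later_updates, reevaluate_for_delete):
    # Adjust answer A_i for updates that arrived while it was being computed.
    # reevaluate_for_delete(u) stands for sending the additional query for a
    # deletion u and returning its (recursively compensated) answer.
    delta = set(answer)
    for u in later_updates:                      # deletions first: A_i may have MISSED tuples
        if u.is_deletion:
            delta |= reevaluate_for_delete(u)
    for u in later_updates:                      # then insertions: A_i may INCLUDE tuples
        if not u.is_deletion:                    # that belong to a later view state
            delta = {t for t in delta if t[0] != u.key}   # illustrative key-based removal
    return delta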
Before presenting the algorithm, we need a few definitions.
Definition: Q i denotes the set of queries sent by
the warehouse to compute the view after insertion update
U i . Q i;j ⊆ Q i are the queries sent in response to update
U j that occurred while computing the answer for
a query in Q i . A third index k is used to distinguish
each query in Q i;j as Q i;j;k . 2
In the scenario above, for insert U i we first generate
Q i;i;0 . When its answer A i;i;0 arrives, a deletion U j
received before A i;i;0 requires us to send out another
query, identified as Q i;j;new j
. In the algorithm, new j
is used to generate the next unique integer for queries
caused by U j in the context of processing U i .
When processing each update U i separately, no action
list AL is necessary. In the Strobe and T-strobe
algorithms, AL keeps track of multiple updates whose
processing overlaps. In the C-strobe algorithm outlined
below, each update is compensated for subse-
quent, "held," updates so that it can be applied directly
to the view. If C-strobe is extended (not shown
here) to only force updates to MV periodically, after a
batch of overlapping updates, then an action list AL is
again necessary to remember the actions that should
be applied for the entire batch.
Definition: Q⟨U i ⟩ is the resulting query after the
updated tuple in U i replaces its base relation in Q.
If the base relation of U i does not appear in Q, then
Q⟨U i ⟩ is empty. 2
Definition: Delta is the set of changes that need to
be applied to MV for one insertion update. Note that
Delta, when computed, would correspond to a single
insert(MV; Delta) action on AL if we kept an action
list. (Deletion updates can be applied directly to MV ,
but insertions must be compensated first. Delta collects
the compensations.) 2
We also use a slightly different version of key delete:
delete (Delta; U k ) only deletes from Delta those
tuples that match with U k on both key and non-key
attributes (not just on key attributes). Finally, when
we add tuples to Delta, we allow tuples with the same
key values but different non-key values to be added.
These tuples violate the key condition, but only appear
in Delta temporarily. However, it is important
to keep them in Delta for the algorithm to work cor-
rectly. (The reason for these changes is that when
we "subtract out" the updates seen by Q i;i;0 , we first
compensate for deletes, and then for all inserts. In
between, we may have two tuples with the same key,
one added from the compensation of a delete, and the
other to be deleted when we compensate for inserts.)
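The two delete operations differ only in how much of the tuple must match; a small illustrative sketch over plain Python tuples (key assumed to be in the first position):

def key_delete(relation, u, key_pos=0):
    # Strobe's key_delete: remove tuples that agree with u on the key attribute only.
    return {t for t in relation if t[key_pos] != u[key_pos]}

def key_delete_strict(delta, u):
    # The variant used on Delta: remove only tuples that agree with u on key AND
    # non-key attributes, so two tuples sharing a key may coexist in Delta temporarily.
    return {t for t in delta if t != u}

# key_delete({(1, 2, 3), (1, 9, 9)}, (1, 2, 3))        -> set()
# key_delete_strict({(1, 2, 3), (1, 9, 9)}, (1, 2, 3)) -> {(1, 9, 9)}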
In algorithm C-Strobe, the source behavior remains
the same as for the Strobe algorithm, so we only describe
the actions at the warehouse. C-Strobe is complete
because MV is updated once after each update,
and the resulting warehouse state corresponds to the
source state after the same update. We prove the correctness
of C-Strobe in [18].
Algorithm 3: Complete Strobe
At the warehouse:
Initially, Delta = ∅.
As updates arrive, they are placed in a holding
queue.
Process each update U i in order of arrival:
- If U i is a deletion
  ◦ Apply key delete(MV, U i ).
- If U i is an insertion
  ◦ Let Q i;i;0 = V ⟨U i ⟩ and send it to the sources
    for evaluation.
  - Repeat for each answer A i;j;k , until UQS = ∅:
    ◦ Add A i;j;k to Delta (without adding duplicate
      tuples).
    ◦ For all deletions U p received between U j
      and A i;j;k : send the compensating query
      Q i;p;new p = Q i;j;k ⟨U p ⟩ to the sources.
      When its answer arrives, process it starting
      at the "Add . . . to Delta" step above.
  - For all insertions U k received between U i
    and the last answer, if ¬∃ U j < U k such that
    U j is a deletion and U j , U k refer to the same
    tuple, then apply key delete (Delta, U k ).
  - Add the tuples in Delta to MV (without adding
    duplicates) and reset Delta = ∅.
End Algorithm 3
The compensating process (the loop in the al-
gorithm) always terminates because any expression
Q i;j;k ⟨U p ⟩ has one fewer base relation than Q i;j;k . Let
us assume that there are at most K updates
that can arrive between the time a query is sent
out and its answer is received, and that there are n
base relations. When we process insertion U i we send
out query Q i;i;0 with n − 1 base relations; when we get its answer we may have
to send out at most K compensating queries with n − 2
base relations each. For each of those queries, at most
K queries with n − 3 relations may be sent, and so
on. Thus, the total number of queries sent in the loop
is no more than K^{n-2}, and the algorithm eventually
finishes processing U i and updates MV .
The number of compensating queries may be significantly
reduced by combining related queries. For ex-
ample, when we compensate for Q i;i;0 , the above algorithm
sends out up to K queries. However, since there
are only n base relations, we can group these queries
into at most n − 1 combined queries, where each combined query groups
all of the queries generated by an update to the same
base relation. If we continue to group queries by base
relation, we see that the total number of compensating
queries cannot exceed (n − 1) × (n − 2) × · · · × 1 = (n − 1)!.
That is, C-Strobe will update MV after at most (n − 1)!
queries are evaluated. If the view involves
a small number of relations, then this bound will be
relatively small. Of course, this maximum number of
queries only occurs under extreme conditions where
there is a continuous stream of updates.
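As a worked instance of the two bounds, with illustrative numbers rather than figures from the paper, take a view over n = 4 base relations and at most K = 100 interleaved updates per query round:

\[
K^{\,n-2} = 100^{2} = 10{,}000
\qquad\text{versus}\qquad
(n-1)! = 3 \times 2 \times 1 = 6 .
\]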
We now apply the C-Strobe algorithm to the warehouse
scenario in Example 1, and show how C-Strobe
processes this scenario differently from the Strobe algorithm
(shown in Example 3).
Example 5: Complete Strobe
As in examples 1 and 3, let view V be defined as
V = r 1 ⋈ r 2 ⋈ r 3 , where r 1 , r 2 and r 3 are three relations
residing on sources x, y and z, respectively. Initially,
the relations are r 1 = {[1, 2]}, r 2 = ∅, and r 3 = {[3, 4]}.
The materialized view MV = ∅. We again consider
two source updates: U 1 = insert(r 2 , [2, 3]) and
U 2 = delete(r 1 , [1, 2]), and apply the C-Strobe algo-
rithm. There are two possible orderings of events at
the warehouse. Here we consider one, and in the next
example we discuss the other.
1. Delta = ∅. The WH receives from source y U 1 =
insert(r 2 , [2, 3]). It generates query Q 1;1;0 = r 1 ⋈
[2, 3] ⋈ r 3 . To evaluate Q 1;1;0 , the WH first sends
query Q 1 1;1;0 = r 1 ⋈ [2, 3] to source x.
2. The WH receives A 1 1;1;0 = [1, 2, 3] from source x.
Query Q 2 1;1;0 = [1, 2, 3] ⋈ r 3 is sent to source z
for evaluation.
3. The WH receives U 2 = delete(r 1 , [1, 2]) from
source x. It saves this update in a queue.
4. The WH receives A 1;1;0 = [1, 2, 3, 4]
from source z, which is the final answer to Q 1;1;0 .
Since U 2 was received between Q 1;1;0 and A 1;1;0
and it is a deletion, the WH generates a query
Q 1;2;1 = Q 1;1;0 ⟨U 2 ⟩ = [1, 2] ⋈ [2, 3] ⋈ r 3 and sends it to source
z. Also, it adds A 1;1;0 to Delta, so Delta = {[1, 2, 3, 4]}.
5. The WH receives A 1;2;1 = [1, 2, 3, 4] and tries
to add it to Delta. Since it is a duplicate tuple,
Delta remains the same.
6. UQS = ∅, so the WH updates the view to
MV = {[1, 2, 3, 4]}.
7. Next the WH processes U 2 which is next in the
update queue. Since U 2 is a deletion, it applies
key delete(MV, U 2 ), and the resulting MV = ∅.
In this example, MV is updated twice, in steps 6
and 7. After step 6, MV is equal to the result of
evaluating V after U 1 but before U 2 occurs. Similarly,
after step 7, MV corresponds to evaluating V after
before any further updates occur, which is
the final source state in this example. In the next
example we consider the case where U 2 occurs before
the evaluation of the query corresponding to U 1 , and
we show that compensating queries are necessary.
Example 6: C-Strobe applied again, with different
timing of the updates
Let the view definition, initial base relations and
source updates be the same as in example 5. We now
consider a different set of events at the WH.
1. Delta = ∅. The WH receives from source y U 1 = insert(r 2 , [2, 3]).
It generates query Q 1;1;0 = r 1 ⋈
[2, 3] ⋈ r 3 . To evaluate Q 1;1;0 , the WH first sends
query Q 1 1;1;0 = r 1 ⋈ [2, 3] to source x.
2. The WH receives U 2 = delete(r 1 , [1, 2]) from
source x. It saves this update in a queue.
3. The WH receives A 1 1;1;0 = ∅ from source x. This
implies that A 1;1;0 = ∅. Since U 2 was received
between Q 1;1;0 and A 1;1;0 , the WH generates the
compensating query Q 1;2;1 = Q 1;1;0 ⟨U 2 ⟩ = [1, 2] ⋈ [2, 3] ⋈ r 3
and sends it to source z. Also, it adds A 1;1;0 to
Delta, and Delta is still empty.
4. The WH receives A 1;2;1 = [1, 2, 3, 4] and adds it
to Delta.
5. Since UQS = ∅, the WH updates the view to
MV = {[1, 2, 3, 4]}.
6. The WH processes U 2 . Since U 2 is a deletion, it
applies key delete(MV, U 2 ) and the resulting MV = ∅.
As mentioned earlier, C-Strobe can be extended to
update MV periodically, after processing every k up-
dates. In this case, we periodically stop processing
updates (placing them in a holding queue). We then
process the answers to all queries that are in UQS as
we did in C-Strobe, and then apply the action list AL
to the view MV . The T-Strobe algorithm can also be
made complete or periodic in a similar way. We call
this algorithm C-TStrobe, but do not describe it here
further.
Conclusions
In this paper, we identified three fundamental
transaction processing scenarios for data warehousing
and developed the Strobe family of algorithms to consistently
maintain the warehouse data. Figure 2 summarizes
the algorithms we discussed in this paper and
their correctness. In the figure, "Conventional" refers
to a conventional centralized view maintenance algo-
rithm, while "ECA" and "ECA-Key" are algorithms
from [17].
Figure 2: Consistency Spectrum. [The figure is a grid that places each algorithm (Conventional, ECA-Key, Strobe, T-Strobe, G-Strobe, C-Strobe, C-TStrobe, C-GStrobe) by transaction scenario (centralized; single source; multiple sources with single-update, source-local, or global transactions) and by correctness level (inconsistent, convergent, weakly consistent, consistent, complete).]
In Figure 2, an algorithm is shown in a particular
scenario S and level of consistency L if it achieves L
consistency in scenario S. Furthermore, the algorithm
at (S; L) also achieves all lower levels of consistency
for S, and achieves L consistency for scenarios that
are less restrictive than S (scenarios to the left of S).
For example, Strobe is strongly consistent for single
update transactions at multiple sources. Therefore, it
is weakly consistent and convergent (by definition) in
that scenario. Similarly, Strobe is strongly consistent
for centralized and single source scenarios.
Regarding the efficiency of the algorithms we have
presented, there are four important points to make.
First, there are a variety of enhancements that can
improve efficiency substantially:
1. We can optimize global query evaluation. For ex-
ample, in procedure source evaluate(), the warehouse
can group all queries for one source into
one, or can find an order of sources that minimizes
data transfers. It can also use key information to
avoid sending some queries to sources.
2. We can find the optimal batch size for processing.
By batching together updates, we can reduce the
message traffic to and from sources. However,
delaying update processing means the warehouse
view will not be as up to date, so there is a clear
tradeoff that we would like to explore.
3. Although we argued against keeping copies of all
base relations at the warehouse, it may make
sense to copy the most frequently accessed ones
(or portions thereof), if they are not too large or
expensive to keep up to date. This also increases
the number of queries that can be answered locally.
The second point regarding efficiency is that, even
if someone determines that none of these algorithms
is efficient enough for their application, it is still very
important to understand the tradeoffs involved. The
algorithms exemplify the inherent cost of keeping
a warehouse consistent. Given these costs, users
can now determine what is best for them, given their
consistency requirements and their transactional scenario.
Third, when updates arrive infrequently at the
warehouse, or only in periodic batches with large gaps
in between, the Strobe algorithms are as efficient as
conventional algorithms such as [2]. They only introduce
extra complexity when updates must be processed
while other updates are arriving at the ware-
house, which is when conventional algorithms cannot
guarantee a consistent view.
Fourth, the Strobe algorithms are relatively inexpensive
to implement, and we have incorporated them
into the Whips (WareHousing Information Prototype
at Stanford) prototype [16]. In our implementation,
the Strobe algorithm is only 50 more lines of C++
code than the conventional view maintenance algo-
rithm, and C-strobe is only another 50 lines of code.
The core of each of the algorithms is about 400 lines of
C++ code (not including evaluating each query). The
ability to guarantee correctness (Strobe), the ability to
batch transactions, and the ability to update the view
consistently, whenever desired and without quiescing
updates (C-strobe) cost only approximately 100 lines
of code, and one programmer day.
As part of our ongoing warehousing work, we are
currently evaluating the performance of the Strobe
and T-Strobe algorithms and considering some of the
optimizations mentioned above. We are also extending
the algorithms to handle more general type of views,
for example, views with insufficient key information,
and views defined by more complex relational algebra
expressions. Our future work includes designing maintenance
algorithms that coordinate updates to multiple
warehouse views.
Acknowledgments
We would like to thank Jennifer Widom and Jose
Blakely for discussions that led to some of the ideas
in this paper.
--R
Conservative timestamp revised for materialized view maintenance in a data warehouse.
Efficiently updating materialized views.
A multidatabase system for tracking and retrieval of financial data.
Algorithms for deferred view maintenance.
Improving performance in replicated databases through relaxed coherency.
Maintenance of materialized views: Problems, techniques, and applications.
Maintaining views incrementally.
A framework for supporting data integration using the materialized and virtual approaches.
Rdb/VMS: Developing the Data Warehouse.
A snapshot differential refresh algorithm.
Special Issue on Materialized Views and Data Warehousing
Updating distributed materialized views.
A system prototype for warehouse view maintenance.
View maintenance in a warehousing environment.
The Strobe algorithms for multi-source warehouse consistency
--TR
--CTR
Clemente Garcia, Real time self-maintenable data warehouse, Proceedings of the 44th annual southeast regional conference, March 10-12, 2006, Melbourne, Florida
Lyman Do , Pamela Drew , Wei Jin , Vish Jumani , David Van Rossum, Issues in Developing Very Large Data Warehouses, Proceedings of the 24rd International Conference on Very Large Data Bases, p.633-636, August 24-27, 1998
J. Labio , Yue Zhuge , Janet L. Wiener , Himanshu Gupta , Hctor Garca-Molina , Jennifer Widom, The WHIPS prototype for data warehouse creation and maintenance, ACM SIGMOD Record, v.26 n.2, p.557-559, June 1997
Ching-Ming Chao, Incremental maintenance of object-oriented data warehouses, Information SciencesInformatics and Computer Science: An International Journal, v.160 n.1-4, p.91-110, 22 March 2004
Bin Liu , Elke A. Rundensteiner , David Finkel, Maintaining large update batches by restructuring and grouping, Information Systems, v.32 n.4, p.621-639, June, 2007
Ding , Xin Zhang , Elke A. Rundensteiner, The MRE wrapper enabling incremental view maintenance of data warehouses defined on multi-relation information sources, Proceedings of the 2nd ACM international workshop on Data warehousing and OLAP, p.30-35, November 02-06, 1999, Kansas City, Missouri, United States
Miranda Chan , Hong Incremental update to aggregated information for data warehouses over Internet, Proceedings of the 3rd ACM international workshop on Data warehousing and OLAP, p.57-64, November 06-11, 2000, McLean, Virginia, United States
Ki Yong Lee , Jin Hyun Son , Myoung Ho Kim, Efficient incremental view maintenance in data warehouses, Proceedings of the tenth international conference on Information and knowledge management, October 05-10, 2001, Atlanta, Georgia, USA
Kenneth Salem , Kevin Beyer , Bruce Lindsay , Roberta Cochrane, How to roll a join: asynchronous incremental view maintenance, ACM SIGMOD Record, v.29 n.2, p.129-140, June 2000
I. Stanoi , D. Agrawal , A. El Abbadi , S. H. Phatak , B. R. Badrinath, Data warehousing alternatives for mobile environments, Proceedings of the 1st ACM international workshop on Data engineering for wireless and mobile access, p.110-115, August 20-20, 1999, Seattle, Washington, United States
Magalhes Pequeno , Vnia Maria Ponte Vidal, Using full match classes for self-maintenance of mediated views, Enterprise information systems IV, Kluwer Academic Publishers, Hingham, MA,
Zohra Bellahsene, Schema evolution in data warehouses, Knowledge and Information Systems, v.4 n.3, p.283-304, July 2002
Zohra Bellahsene, View Adaptation in the Fragment-Based Approach, IEEE Transactions on Knowledge and Data Engineering, v.16 n.11, p.1441-1455, November 2004
D. Agrawal , A. El Abbadi , A. Singh , T. Yurek, Efficient view maintenance at data warehouses, ACM SIGMOD Record, v.26 n.2, p.417-427, June 1997
Ken C. K. Lee , Hong V. Leong , Antonio Si, Incremental maintenance for dynamic database-derived HTML pages in digital libraries, Proceedings of the seventh international conference on Information and knowledge management, p.20-29, November 02-07, 1998, Bethesda, Maryland, United States
Wang , Maria Orlowska , Weifa Liang, Efficient refreshment of materialized views with multiple sources, Proceedings of the eighth international conference on Information and knowledge management, p.375-382, November 02-06, 1999, Kansas City, Missouri, United States
Khalil M. Ahmed , Nagwa M. El-Makky , Yousry Taha, Effective data mining: a data warehouse-backboned architecture, Proceedings of the 1998 conference of the Centre for Advanced Studies on Collaborative research, p.1, November 30-December 03, 1998, Toronto, Ontario, Canada
Kyriakos Karenos , George Samaras , Panos K. Chrysanthis , Evaggelia Pitoura, Mobile agent-based services for view materialization, ACM SIGMOBILE Mobile Computing and Communications Review, v.8 n.3, July 2004
Gianluca Moro , Claudio Sartori, Incremental maintenance of multi-source views, Proceedings of the 12th Australasian database conference, p.13-20, January 29-February 01, 2001, Queensland, Australia
H. Engstr , S. Chakravarthy , B. Lings, Maintenance policy selection in heterogeneous data warehouse environments: a heuristics-based approach, Proceedings of the 6th ACM international workshop on Data warehousing and OLAP, November 07-07, 2003, New Orleans, Louisiana, USA
Yingwei Cui , Jennifer Widom , Janet L. Wiener, Tracing the lineage of view data in a warehousing environment, ACM Transactions on Database Systems (TODS), v.25 n.2, p.179-227, June 2000
Xin Zhang , Lingli Ding , Elke A. Rundensteiner, Parallel multisource view maintenance, The VLDB Journal The International Journal on Very Large Data Bases, v.13 n.1, p.22-48, January 2004
Songting Chen , Bin Liu , Elke A. Rundensteiner, Multiversion-based view maintenance over distributed data sources, ACM Transactions on Database Systems (TODS), v.29 n.4, p.675-709, December 2004
Dimitri Theodoratos , Timos K. Sellis, Data Warehouse Configuration, Proceedings of the 23rd International Conference on Very Large Data Bases, p.126-135, August 25-29, 1997
Dimitri Theodoratos , Mokrane Bouzeghoub, A general framework for the view selection problem for data warehouse design and evolution, Proceedings of the 3rd ACM international workshop on Data warehousing and OLAP, p.1-8, November 06-11, 2000, McLean, Virginia, United States
Waiman Cheung , Gilbert Babin, A metadatabase-enabled executive information system (part A): a flexible and adaptable architecture, Decision Support Systems, v.42 n.3, p.1589-1598, December 2006
Andreas Koeller , Elke A. Rundensteiner, A history-driven approach at evolving views under meta data changes, Knowledge and Information Systems, v.8 n.1, p.34-67, July 2005
Ladjel Bellatreche , Kamalakar Karlapalem , Mukesh Mohania, Some issues in design of data warehousing systems, Data warehousing and web engineering, IRM Press, Hershey, PA, 2002 | strobe algorithms;batch updates;overlapping update stream;incremental algorithms;integrated information;multi-source warehouse consistency;continuous update stream;transaction processing scenarios;efficient querying;multiple updates;efficient analysis;warehouse data consistency maintenance;very large databases;data repository |
383205 | Making views self-maintainable for data warehousing. | A data warehouse stores materialized views over data from one or more sources in order to provide fast access to the integrated data, regardless of the availability of the data sources. Warehouse views need to be maintained in response to changes to the base data in the sources. Except for very simple views, maintaining a warehouse view requires access to data that is not available in the view itself. Hence, to maintain the view, one either has to query the data sources or store auxiliary data in the warehouse. We show that by using key and referential integrity constraints, we often can maintain a select-project-join view without going to the data sources or replicating the base relations in their entirety in the warehouse. We derive a set of auxiliary views such that the warehouse view and the auxiliary views together are self-maintainable: they can be maintained without going to the data sources or replicating all base data. In addition, our technique can be applied to simplify traditional materialized view maintenance by exploiting key and referential integrity constraints. | Introduction
The problem of materialized view maintenance has
received increasing attention recently [6, 7, 11], particularly
due to its application to data warehousing
[3, 14]. A view is a derived relation defined in
terms of base relations. A view is said to be materialized
when it is stored in the database, rather
than computed from the base relations in response to
queries. The materialized view maintenance problem
is the problem of keeping the contents of the stored
view consistent with the contents of the base relations
as the base relations are modified.
Data warehouses store materialized views in order
to provide fast access to information that is integrated
from several distributed data sources [3]. The data
sources may be heterogeneous and/or remote from the
warehouse. Consequently, the problem of maintaining
a materialized view in a data warehouse differs from
the traditional view maintenance problem where the
view and base data are stored in the same database.
In particular, when changes are reported by one data
source it may be necessary to access base data from
other data sources in order to maintain the view [9].
(This work was supported by Rome Laboratories under Air
Force Contract F30602-94-C-023 and by equipment grants from
Digital and IBM Corporations.)
For any view involving a join, maintaining the view
when base relations change may require accessing base
data, even when incremental view maintenance techniques
are used [5, 8]. For example, for a view R 1 S,
when an insertion to relation R is reported it is usually
necessary to query S in order to discover which tuples
in S join with the insertion to R. In the warehousing
scenario, accessing base data means either querying
the data sources or replicating the base relations in the
warehouse. The problems associated with querying
the data sources are that the sources may periodically
be unavailable, may be expensive or time-consuming
to query, and inconsistencies can result at the warehouse
unless care is taken to avoid them through the
use of special maintenance algorithms [14]. The problems
associated with replicating base relations at the
warehouse are the additional storage and maintenance
costs incurred. In this paper we show that for many
views, including views with joins, if key and referential
integrity constraints are present then it is not necessary
to replicate the base relations in their entirety at
the warehouse in order to maintain a view. We give
an algorithm for determining what extra information,
called auxiliary views, can be stored at a warehouse
in order to maintain a select-project-join view without
accessing base data at the sources. The algorithm
takes key and referential integrity constraints into ac-
count, which are often available in practice, to reduce
the sizes of the auxiliary views. When a view together
with a set of auxiliary views can be maintained at the
warehouse without accessing base data, we say the
views are self-maintainable.
Maintaining materialized views in this way is especially
important for data marts-miniature data warehouses
that contain a subset of data relevant to a particular
domain of analysis or geographic region. As
more and more data is collected into a centralized
data warehouse it becomes increasingly important to
distribute the data into localized data marts in order
to reduce query bottlenecks at the central warehouse.
When many data marts exist, the cost of replicating
entire base relations (and their changes) at each data
mart becomes especially prohibitive.
1.1 Motivating example
We start with an example showing how the amount
of extra information needed to maintain a view can
be significantly reduced from replicating the base relations
in their entirety. Here we present our results
without explanation of how they are obtained. We
will revisit the example throughout the paper.
Consider a database of sales data for a chain of
department stores. The database has the following
relations.
store(store id, city, state, manager)
sale(sale id, store id, day, month, year)
line(line id, sale id, item id, sales price)
item(item id, item name, category, supplier name)
The first (underlined) attribute of each relation is a
key for the relation. The store relation contains the
location and manager of each store. The sale relation
has one record for each sale transaction, with the store
and date of the sale. A sale may involve several items,
one per line on a sales receipt, and these are stored in
the line relation, with one tuple for every item sold
in the transaction. The item relation contains information
about each item that is stocked. We assume
that the following referential integrity constraints
hold: (1) from sale.store id to store.store id, (2)
from line.sale id to sale.sale id, and (3) from
line.item id to item.item id. A referential integrity
constraint from S:B to R:A implies that for
every tuple s ∈ S there must be a tuple r ∈ R such
that s.B = r.A.
Suppose the manager responsible for toy sales in the
state of California is interested in maintaining a view
of this year's sales: "all toy items sold in California in
1996 along with the sales price, the month in which
the sale was made, and the name of the manager of
the store where the sale was made. Include the item
id, the sale id, and the line id."
SELECT store.manager, sale.sale id, sale.month,
item.item id, item.item name, line.line id,
line.sales price
FROM store, sale, line, item
WHERE store.store id = sale.store id and
sale.sale id = line.sale id and
line.item id = item.item id and
store.state = "CA" and sale.year = 1996 and
item.category = "toy"
The question addressed in this paper is: Given a view
such as the one above, what auxiliary views can be
materialized at the warehouse so that the view and
auxiliary views together are self-maintainable?
Figure
shows SQL expressions for a set of
three auxiliary views that are sufficient to maintain
view cal toy sales for insertions and deletions to
each of the base relations, and are themselves self-
maintainable. In this paper we give an algorithm
for deriving such auxiliary views in the general case,
along with incremental maintenance expressions for
maintaining the original view and auxiliary views.
Materializing the auxiliary views in Figure 1 repre-
CREATE VIEW aux store AS
SELECT store id, manager
FROM store
WHERE state = "CA"
CREATE VIEW aux sale AS
SELECT sale id, store id, month
FROM sale
WHERE year = 1996 and
store id IN (SELECT store id FROM aux store)
CREATE VIEW aux item AS
SELECT item id, item name
FROM item
WHERE category = "toy"
Figure 1: Auxiliary Views for Maintaining the cal toy sales View
sents a significant savings over materializing the base
relations in their entirety, as illustrated in Table 1.
Suppose that each of the four base relations
contain the number of tuples listed in the first
column of Table 1. Assuming that the selectivity
of store.state="CA" is .02, the selectivity
of sale.year=1996 is .25, the selectivity of
item.category="toy" is .05, and that distributions
are uniform, the number of tuples passing local selection
conditions (selection conditions involving attributes
from a single relation) are given in the second
column of Table 1. A related proposal by Hull and
Zhou [10] achieves self-maintainability for base relation
insertions by pushing down projections and local
selection conditions on the base relations and storing
at the warehouse only those tuples and attributes of
the base relations that pass the selections and projec-
tions. Thus, their approach would require that the
number of tuples appearing in the second column of
Table
1 is stored at the warehouse to handle insertions.
We improve upon the approach in [10] by also taking
key and referential integrity constraints into ac-
count. For example, we don't need to materialize any
tuples from line, because the key and referential integrity
constraints guarantee that existing tuples in
line cannot join with insertions into the other rela-
tions. Likewise we can exclude tuples in sale that
do not join with existing tuples in store whose state
is California, because we are guaranteed that existing
tuples in sale will never join with insertions to store.
Using our approach can dramatically reduce the number
of tuples in the auxiliary views over pushing down
selections only. The number of tuples required by our
approach to handle base relation insertions in our example
appears in the third column of Table 1.
                 Tuples in        Tuples Passing                Tuples in Auxiliary
Base Relation    Base Relation    Local Selection Conditions    Views of Figure 1
store            2,000            40                            40
sale             80,000,000       20,000,000                    400,000
line             800,000,000      800,000,000                   0
item             1,000            50                            50
Total            880,003,000      820,000,090                   400,090
Table 1: Number of Tuples in Base Relations and Auxiliary Views
We can similarly use key constraints to handle deletions
to the base relations without all the base relations
being available. We can determine the effects
of deletions from sale, line, and item without referencing
any base relations because cal toy sales includes
keys for these relations. We simply join the
deleted tuples with cal toy sales on the appropriate
key. Even though the view does not include a key for
store, store is joined to sale on the key of sale, so
the effect of deletions from store can be determined
by joining the deleted tuples with sale and joining
the result with cal toy sales on the key of sale.
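For instance, the deletion path just described can be sketched in a few lines of Python over in-memory tuples; aux sale plays the role of the sale data kept at the warehouse, and the view tuple layout (sale id in the second position, as in the SELECT list above) is an assumption of the sketch.

def propagate_store_deletions(deleted_store_ids, aux_sale, view):
    # Deletions from store reach the view via sale, whose key (sale_id) the view preserves.
    affected_sales = {sale_id for (sale_id, store_id, month) in aux_sale
                      if store_id in deleted_store_ids}
    return {t for t in view if t[1] not in affected_sales}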
Now consider updates. If all updates were treated
as deletions followed by insertions, as is common in
view maintenance, then the properties of key and referential
integrity constraints that we use to reduce
the size of auxiliary views would no longer be guaranteed
to hold. Thus, updates are treated separately
in our approach. Note that in data warehousing environments
it is common for certain base relations not to
be updated (e.g., relations sale and line may be append
only). Even when base relations are updateable,
it may be that not all attributes are updated (e.g.,
we don't expect to update the state of a store). If
updates to the base relations in our example cannot
change the values of attributes involved in selection
conditions in the view, then the auxiliary views of Figure
are sufficient (even if attributes appearing in the
view may be updated). If, on the other hand, updates
to sale may change the year (for example), then an
additional auxiliary view:
CREATE VIEW aux line AS
SELECT line id, sale id, item id, sales price
FROM line
WHERE item id IN (SELECT item id FROM aux item)
would need to be materialized, which would have
40,000,000 tuples. That is, we would need to store
all purchases of items whose category is "toy," in case
the year of the corresponding sales record is changed
later to 1996. Further, if updates may change the category
of an item to "toy", we would need to keep all
of the line relation in order to maintain the view.
In practice we have found that the attributes appearing
in selection conditions in views tend to be attributes
that are not updated, as in our example. As
illustrated above and formalized later on, when such
updates do not occur, much less auxiliary information
is required for self-maintenance. Thus, exploiting
knowledge of permitted updates is an important feature
of our approach.
1.2 Self-Maintenance
Self maintenance is formally defined as follows.
Consider a view V defined over a set of base relations
R. Changes, δR, are made to the relations in R,
in response to which view V needs to be maintained.
We want to compute δV , the changes to V , using as
little extra information as possible. If δV can be computed
using only the materialized view V and the set
of changes δR, then view V alone is self-maintainable.
If view V is not self-maintainable, we are interested in
finding a set of auxiliary views A defined on the same
relations as V such that the set of views {V } ∪ A is
self-maintainable. Note that the set of base relations
R forms one such set of auxiliary views. However, we
want to find more "economical" auxiliary views that
are much smaller than the base relations. The notion
of a minimal set of auxiliary views sufficient to maintain
view V is formalized in Section 3.
A more general problem is to make a set of views
V = {V 1 , . . . , V n } self-maintainable, i.e., find auxiliary
views A such that A ∪ V is self-maintainable. Simply
applying our algorithm to each view in V is not optimal,
since opportunities to "share" information
across original and auxiliary views will not be
recognized. That is, the final set A ∪ V may not be
minimal. We intend to investigate sets of views as
future work.
1.3 Paper outline
The paper proceeds as follows. Section 2 presents
notation, terminology, and some assumptions. Section
3 presents an algorithm for choosing a set of auxiliary
views to materialize that are sufficient for maintaining
a view and are self-maintainable. Section 4
shows how the view is maintained using the auxiliary
views. Section 5 explains that the set of auxiliary
views is itself self-maintainable. Related work appears
in Section 6.
Preliminaries
We consider select-project-join (SPJ) views; that is,
views consisting of a single projection followed by a
single selection followed by a single cross-product over
a set of base relations. As usual, any combination of
selections, projections, and joins can be represented in
this form. We assume that all base relations have
but that a view might contain duplicates due to the
projection at the view. In this paper we assume single-attribute
and conjunctions of selection conditions
(no disjunctions) for simplicity, but our results carry
over to multi-attribute keys and selection conditions
with disjunctions. In Section 3 we will impose certain
additional restrictions on the view but we explain how
those restrictions can be lifted in the full version of the
paper [12]. We say that selection conditions involving
attributes from a single relation are local conditions;
otherwise they are join conditions. We say that attributes
appearing in the final projection are preserved
in the view.
In order to keep a materialized view up to date,
changes to base relations must be propagated to the
view. A view maintenance expression calculates the
effects on the view of a certain type of change: in-
sertions, deletions, or updates to a base relation. We
use a differential algorithm as given in [5] to derive
view maintenance expressions. For example, if view
then the maintenance expression calculating
the effect of insertions to R (4R) is
represents the tuples to insert
into V as a result of 4R.
Since in data warehousing environments updates
to certain base relations may not occur, or may not
change the values of certain attributes, we define each
base relation R as having one of three types of updates,
depending on how the updateable attributes are used
in the view definition:
ffl If updates to R may change the values of attributes
involved in selection conditions (local or
join) in the view, then we say R has exposed updates
ffl Otherwise, if updates to R will not change the values
of attributes involved in selection conditions
but may change the values of preserved attributes
(attributes included in the final projection), then
we say R has protected updates.
ffl Otherwise, if updates to R will not change the
values of attributes involved in selection conditions
or the values of preserved attributes, then
we say R has ignorable updates.
Ignorable updates cannot have any affect on the view,
so they do not need to be propagated. From now on
we consider only exposed and protected updates. Exposed
updates could cause new tuples to be inserted
into the view or tuples to be deleted from the view, so
we propagate them as deletions of tuples with the old
values followed by insertions of tuples with the new
values. For example, given a view
if the value of R:A for a tuple in R is changed from
9 to 10 then new tuples could be inserted into V as
a result. Protected updates can only change the attribute
values of existing tuples in the view; they cannot
result in tuples being inserted into or deleted from
the view. We therefore propagate protected updates
separately. An alternate treatment of updates is considered
in Section 4.1.2.
In addition to the usual select, project, and join
symbols, we use ⋉ to represent semijoin, ⊎ to represent
union with bag semantics, and −̇ to represent
minus with bag semantics. We further assume that
project (π) has bag semantics. The notation ⋈ X represents
an equijoin on attribute X, while ⋈ key(R) represents
an equijoin on the key attribute of R, assuming
this attribute is in both of the joined relations. Insertions
to a relation R are represented as 4R, deletions
are represented as 5R, and protected updates
are represented as -R. Tuples in -R have two attributes
corresponding to each of the attributes of R:
one containing the value before update and another
containing the value after update. We use - old to
project the old attribute values and - new to project
the new attribute values.
3 Algorithm for determining auxiliary
views
We present an algorithm (Algorithm 3.1 below)
that, given a view definition V , derives a set of auxiliary
views A such that view V and the views in A
taken together are self-maintainable; i.e., can be maintained
upon changes to the base relations without requiring
access to any other data. Each auxiliary view
is an expression of the form:
AR i = (π P σ S R i ) ⋉ C 1 AR k 1 ⋉ C 2 AR k 2 ⋉ · · · ⋉ C m AR k m
That is, each auxiliary view is a selection and a projection
on relation R i followed by zero or more semijoins
with other auxiliary views. It can be seen that the
number of tuples in each AR i is never larger than the
number of tuples in R i and, as we have illustrated in
Section 1.1, may be much smaller. Auxiliary views of
this form can easily be expressed in SQL, and they
can be maintained efficiently as shown in [12].
Intuitively, the first part of the auxiliary view ex-
pression, (-oeR i ), results from pushing down projections
and local selection conditions onto R i . Tuples
in R i that do not pass local selection conditions cannot
possibly contribute to tuples in the view; hence
they are not needed for view maintenance and therefore
need not be stored in AR i at the warehouse. The
semijoins in the second part of the auxiliary view expression
further reduce the number of tuples in AR i
by restricting it to contain only those tuples joinable
with certain other auxiliary views. In addition, we
will show that in some cases the need for AR i can be
eliminated altogether.
We first need to present a few definitions that are
used in the algorithm.
Given a view V , let the join graph G(V ) of
the view be a directed graph ⟨R, E⟩. R is the
set of relations referenced in V , which form
the vertices of the graph. There is a directed
edge e(R i , R j ) ∈ E if V contains
a join condition R i .B = R j .A and A is
a key of R j . The edge is annotated with RI if
there is a referential integrity constraint from
R i .B to R j .A.
We assume for now that the graph is a forest (a set
of trees). That is, each vertex has at most one edge
leading into it and there are no cycles. This assumption
still allows us to handle a broad class of views
that occur in practice. For example, views involving
chain joins (a sequence of relations R
where the join conditions are between a foreign key
of R i and a key of R n) and star joins
(one relation R 1 , usually large, joined to a set of relations
usually small, where the join conditions
are between foreign keys in R 1 and the keys of
In addition, we assume
that there are no self-joins. We explain how each of
these assumptions can be removed in [12].
The following definition is used to determine the set
of relations upon which a relation R i depends-that
is, the set of relations R j in which (1) a foreign key in
R i is joined to a key of R j , (2) there is a referential
integrity constraint from R i to R j , and (3) R j has
protected updates.
Dep(R i , G) = { R j | e(R i , R j ) ∈ E is annotated
with RI and R j does not have exposed updates }
Dep(R i , G) determines the set of auxiliary views to
which R i is semijoined in the definition of the auxiliary
view AR i for R i , given above. The reason for
the semijoins is as follows. Let R j be a member of
G). Due to the referential integrity constraint
from R i to R j and the fact that the join between R i
and R j is on a key of R j , each tuple t i 2 R i must join
with one and only one tuple does
not pass the local selection conditions on R j . Then
cannot contribute to tuples in the
view. Because updates to R j are protected (by the
definition of Dep(R never
contribute to tuples in the view, so it is not necessary
to include t i in AR i at the warehouse. It is sufficient
to store only those tuples of R i that pass the local selection
conditions on R i and join with a tuple in R j
that passes the local selection conditions on R j (i.e.,
the semijoin condition is the
same as the join condition between R i and R j in the
view). That R i can be semijoined with AR j , rather
than oeR j , in the definition of AR i follows from a similar
argument applied inductively.
The following definition is used to determine the
set of relations upon which relation R i transitively depends:
Dep*(R i , G) is the transitive closure of Dep(R i , G).
Dep*(R i , G) is used to help determine whether it
is necessary to store AR i at the warehouse in order to
maintain the view or whether AR i can be eliminated
altogether. Intuitively, if Dep*(R i , G) includes all relations
referenced in view V except R i , then AR i is not
needed for propagating insertions to any base relation
onto V . The reason is that the key and referential integrity
constraints guarantee that new insertions into
the other base relations can join only with new insertions
into R i , and not with existing tuples in R i . This
behavior is explained further in Section 4.
line
RI
RI
RI
store
item
Figure
2: Join Graph G(cal toy sales)
The following definition is used to determine the
set of relations with which relation R i needs to join so
that the key of one of the joining relations is preserved
in the view (where all joins must be from keys to foreign
keys). If no such relation exists then Need(R i )
includes all other relations in the view.
Need(R i ) =
∅, if the key of R i is preserved in V ;
{R j } ∪ Need(R j ), if the key of R i is not preserved
in V but there is an R j such
that e(R j , R i ) ∈ E;
R − {R i }, otherwise.
Note that because we restrict the graph to be a forest,
there can be at most one R j such that e(R j ; R i ) is in
used to help determine whether
it is necessary to store auxiliary views. In particular,
an auxiliary view AR j is necessary if R j appears in
the Need set of some R i . Intuitively, if the key of R i
is preserved in view V , then deletions and protected
updates to R i can be propagated to V by joining them
directly with V on the key of R i . Otherwise, if the
key of R i is not preserved in V but R i is joined with
another relation R j on the key of R i and V preserves
the key of R j , then deletions and protected updates
to R i can be propagated onto V by joining them first
with R j , then joining the result with V . In this case R j
is in the Need set of R i , and hence AR j is necessary.
More generally, if the key of R i is not present in V
but R i joins with R j on the key of R i , then auxiliary
views for R j and each of the relations in Need(R
are necessary for propagating deletions and protected
updates to R i . Finally, if none of the above conditions
hold then auxiliary views for all relations referenced in
other than R i are necessary.
To illustrate the above definitions we consider again
the cal toy sales view of Section 1.1. Figure 2 shows
the graph G(cal toy sales). The
Need functions for each of the base relations are given
in
Table
2. Assume for now that each base relation has
protected updates.
Algorithm 3.1 appears in Figure 3. We will
explain how the algorithm works on our running
cal toy sales example. The auxiliary views generated
by the algorithm are exactly those given in Figure
1 of Section 1.1. They are shown in relational
algebra form in Table 3.
Algorithm 3.1
Input
View V .
Output
Set of auxiliary view definitions A.
Method
Let R be the set of relations referenced in V
Construct graph G(V )
for every relation R i ∈ R
  Construct Dep(R i , G), Dep*(R i , G), and Need(R i )
for every relation R i ∈ R
  if Dep*(R i , G) = R − {R i } and R i ∉ Need(R j ) for every R j ∈
  R such that R j ≠ R i
  then AR i is not needed
  else add to A the auxiliary view
    AR i = (π P σ S R i ) ⋉ C 1 AR k 1 ⋉ · · · ⋉ C m AR k m , where
    P is the set of attributes in R i that are preserved in V , appear in join conditions, or are
    a key of R i ,
    S is the strictest set of local selection conditions possible on R i ,
    C l is the join condition R i .B = R k l .A from V , where A is a key of R k l , and
    R k 1 , . . . , R k m are the relations in Dep(R i , G)
Figure 3: Algorithm to Derive Auxiliary Views
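To make the bookkeeping in Algorithm 3.1 concrete, the sketch below computes Dep, its transitive closure, and Need over a small encoding of the join graph, and reports which relations require an auxiliary view. The graph encoding, function names, and the final print are illustrative; the expected output for the running example follows the auxiliary views of Figure 1.

def dep(r, edges, exposed):
    # Dep(R, G): targets of RI-annotated edges leaving r whose target has no exposed updates.
    return {t for (s, t, ri) in edges if s == r and ri and t not in exposed}

def dep_star(r, edges, exposed):
    # Transitive closure of Dep.
    seen, frontier = set(), dep(r, edges, exposed)
    while frontier:
        seen |= frontier
        frontier = {x for f in frontier for x in dep(f, edges, exposed)} - seen
    return seen

def need(r, edges, relations, key_preserved):
    # Need(R): relations whose auxiliary views are required to propagate deletions to R.
    if key_preserved[r]:
        return set()
    parents = [s for (s, t, _) in edges if t == r]   # forest: at most one R_j joined on R's key
    if parents:
        return {parents[0]} | need(parents[0], edges, relations, key_preserved)
    return set(relations) - {r}

def needs_aux_view(r, relations, edges, exposed, key_preserved):
    # AR_r can be dropped iff Dep*(r) covers every other relation and no relation Needs r.
    covers_all = dep_star(r, edges, exposed) == set(relations) - {r}
    needed = any(r in need(x, edges, relations, key_preserved) for x in relations if x != r)
    return not (covers_all and not needed)

# Running example (all updates protected): edges are (from, to, annotated_with_RI).
relations = {"store", "sale", "line", "item"}
edges = [("sale", "store", True), ("line", "sale", True), ("line", "item", True)]
key_preserved = {"store": False, "sale": True, "line": True, "item": True}
print({r: needs_aux_view(r, relations, edges, set(), key_preserved) for r in relations})
# expected: auxiliary views for store, sale and item, but none for line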
Relation    Dep             Need
store       ∅               {sale}
sale        {store}         ∅
line        {sale, item}    ∅
item        ∅               ∅
Table 2: Dep and Need Functions for the Base Relations
For each relation R i referenced in the view V , the
algorithm checks whether Dep (R includes every
other relation referenced in V and R i is not in
any relation R j referenced in V . If
so, it is not necessary to store any part of R i in order
to maintain V . Relation line is an example where an
auxiliary view for the relation is not needed.
Otherwise, two steps are taken to reduce the
amount of data stored in the auxiliary view AR i for
R i . First, it is possible to push down on R i local selection
conditions (explicit or inferred) in the view so
A store = π store id, manager σ state="CA" store
A sale = (π sale id, store id, month σ year=1996 sale) ⋉ store id A store
A item = π item id, item name σ category="toy" item
Table 3: Auxiliary Views for Maintaining the cal toy sales View
that tuples that don't pass the selection conditions
don't need to be stored; it is also possible to project
away all attributes from R i except those that are involved
in join conditions, preserved in V , or are a key
of R i . Second, if Dep(R i ; G) is not empty, it is possible
to further reduce the tuples stored in AR i to only
those tuples of R i that join with tuples in other auxiliary
views ARk where R k is in Dep(R i ; G). The auxiliary
view for sale is an example where both steps
have been applied. A sale is restricted by the semijoin
with A store to include only tuples that join with tuples
passing the local selection conditions on store.
The auxiliary views for store and item are examples
where only selection and projection can be applied.
Although view definitions are small and running
time is not crucial, we observe that the running time
of Algorithm 3.1 is polynomial in the number of re-
lations, and therefore is clearly acceptable. We now
state a theorem about the correctness and minimality
of the auxiliary views derived by Algorithm 3.1.
Theorem 3.1 Let V be a view with a tree-structured
join graph. The set of auxiliary views A produced
by Algorithm 3.1 is the unique minimal set of views
that can be added to V such that fV g[A is self-
maintainable. 2
The proof of Theorem 3.1 is given in [12]. By minimal
we mean that no auxiliary view can be removed
from A, and it is not possible to add an additional
selection condition or semijoin to further reduce the
number of tuples in any auxiliary view and still have
fV g[A be self-maintainable. We show in Section 4
how V can be maintained using A, and we explain in
Section 5 that we can maintain A without referencing
base relations.
3.1 Effect of exposed updates
Recall that so far in our example we have considered
protected updates only. Suppose sale had exposed
updates (i.e., updates could change the values of year,
sale id, or store id). We note that the definition of
Dep does not include any relation that has exposed
updates. Thus, the Dep function for line will not
include sale, and we get Dep(line, G) = Dep*(line, G) = {item},
which no longer includes every other relation in the view,
in which case an auxiliary view for line would be
created as:
A line = line ⋉ item id A item
No selection or projection can be applied on line in
A line because there are no local selections on line in
the view and all attributes of line are either preserved
in the view or appear in join conditions. Section 4.1
explains why exposed updates have a different effect
on the set of auxiliary views needed than protected
updates.
4 Maintaining the view using the auxiliary
views
Recall that a view maintenance expression calculates
the effects on the view of a certain type of change:
insertions, deletions, or updates to a base relation.
View maintenance expressions are usually written in
terms of the changes and the base relations [5, 2]. In
this section we show that the set of auxiliary views
chosen by Algorithm 3.1 is sufficient to maintain the
view by showing how to transform the view maintenance
expressions written in terms of the changes and
the base relations to equivalent view maintenance expressions
written in terms of the changes, the view,
and the auxiliary views.
We give view maintenance expressions for each type
of change (insertions, deletions, and updates) sepa-
rately. In addition, for each type of change we apply
the changes to each base relation separately by
propagating the changes to the base relation onto the
view and updating the base relation. The reason we
give maintenance expressions of this form, rather than
maintenance expressions propagating several types of
changes at once, is that maintenance expressions of
this form are easier to understand and they are sufficient
for our purpose: showing that it is possible to
maintain a view using the auxiliary views generated
by Algorithm 3.1. View maintenance expressions for
insertions are handled in Section 4.1, for deletions are
handled in Section 4.2, and for protected updates are
handled in Section 4.3. Since exposed updates are
handled as deletions followed by insertions, they are
treated within Sections 4.1 and 4.2.
4.1 Insertions
In this section we show how the effect on a view
of insertions to base relations can be calculated using
the auxiliary views chosen by Algorithm 3.1. The view
maintenance expression for calculating the effects on
an SPJ view V of insertions to a base relation R is obtained
by substituting 4R (insertions to R) for base
relation R in the relational algebra expression for V .
For example, the view maintenance expressions calculating
the effects on our cal toy sales view (Sec-
tion 1.1) of insertions to store, sale, line, and item
appear in Table 4.
ΔV_St = π_{store_id, manager} (σ_{state=CA} ΔSt)
        ⋈_{store_id} π_{sale_id, store_id, month} (σ_{year=1996} Sa)
        ⋈_{sale_id} π_{line_id, sale_id, item_id, sales_price} L
        ⋈_{item_id} π_{item_id, item_name} (σ_{category=toy} I)
ΔV_Sa = π_{store_id, manager} (σ_{state=CA} St)
        ⋈_{store_id} π_{sale_id, store_id, month} (σ_{year=1996} ΔSa)
        ⋈_{sale_id} π_{line_id, sale_id, item_id, sales_price} L
        ⋈_{item_id} π_{item_id, item_name} (σ_{category=toy} I)
ΔV_L  = π_{store_id, manager} (σ_{state=CA} St)
        ⋈_{store_id} π_{sale_id, store_id, month} (σ_{year=1996} Sa)
        ⋈_{sale_id} π_{line_id, sale_id, item_id, sales_price} ΔL
        ⋈_{item_id} π_{item_id, item_name} (σ_{category=toy} I)
ΔV_I  = π_{store_id, manager} (σ_{state=CA} St)
        ⋈_{store_id} π_{sale_id, store_id, month} (σ_{year=1996} Sa)
        ⋈_{sale_id} π_{line_id, sale_id, item_id, sales_price} L
        ⋈_{item_id} π_{item_id, item_name} (σ_{category=toy} ΔI)
Table 4: Maintenance Expressions for Insertions
A few words of explanation about the table are in
order.
• For convenience, in the table and hereafter we abbreviate store, sale, line, and item as St, Sa, L, and I, respectively.
• We abbreviate view cal toy sales as V .
• We have applied the general rule of "pushing selections and projections down" to the maintenance expressions.
• We use the notation ΔV_R to represent the insertions into view V due to insertions into base relation R. For example, ΔV_St represents insertions into V due to insertions into St.
• Each of the maintenance expressions of Table 4 calculates the effect on view V of insertions to one of the base relations. We show in Section 4.1.3 that even if insertions to multiple base relations are propagated at once, the auxiliary views generated by Algorithm 3.1 are still sufficient.
From the expressions of Table 4 it would appear
that beyond pushing down selections and projections,
nothing can be done to reduce the base relation data
required for evaluating the maintenance expressions.
If there are no referential integrity constraints, that
is indeed the case. However, referential integrity constraints
allow certain of the maintenance expressions
to be eliminated, requiring less base relation data in
the auxiliary views. Maintenance expressions are eliminated
due to the following property and corresponding
rule.
Property 4.1 (Insertion Property for Foreign Keys) If there is a referential integrity constraint from R_j.B to R_i.A (R_j.B is the "foreign key"), A is a key of R_i, and R_i does not have exposed updates, then ΔR_i ⋈_{R_i.A=R_j.B} R_j = ∅. In general, if the above conditions hold, then any select-project-join expression that joins ΔR_i with R_j on R_i.A = R_j.B is likewise empty.
Property 4.1 holds because the referential integrity constraint requires that each tuple in R_j join with an existing tuple in R_i, and because it joins on a key of R_i it cannot join with any of the tuples in ΔR_i, so the join of ΔR_i with R_j must be empty.
Rule 4.1 (Insertion Rule for Foreign Keys) Let G(V ) be the join graph for view V . The maintenance expression calculating the effect on a view V of insertions to a base relation R_i is guaranteed to be empty, and thus can be eliminated, if there is some relation R_j referenced in V such that R_i is in Dep(R_j, G(V )).
Rule 4.1 is used to eliminate the maintenance expression that calculates the effect on a view V of insertions to a base relation R_i if there is another relation R_j in V such that R_i is in Dep(R_j, G), where G is the join graph for V . The rule holds because, by the definition of Dep, R_i is in Dep(R_j, G) when the view equates a foreign key of R_j to the key of R_i, there is a referential integrity constraint from the foreign key in R_j to the key of R_i, and R_i has protected updates (the effect of exposed updates is discussed in Section 4.1.2). Since the maintenance expression that calculates the effect of insertions to R_i includes a join between ΔR_i and R_j, it must be empty by Property 4.1 and therefore
can be eliminated. Joins and referential integrity
constraints between keys and foreign keys are common
in practice, so the conditions of Rule 4.1 are often met.
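The test behind Rule 4.1 is simple enough to state in a few lines; the following sketch reuses the Dep representation assumed in the earlier sketch and is our illustration, not the authors' code.

def insertion_expr_can_be_eliminated(r_i, relations, dep):
    """Rule 4.1: the expression propagating insertions to r_i is guaranteed
    to be empty whenever some other relation r_j in the view has r_i in
    Dep(r_j, G), i.e. the view joins a foreign key of r_j to the key of r_i,
    a referential integrity constraint backs that join, and r_i has only
    protected updates (conditions which Dep already encodes)."""
    return any(r_i in dep[r_j] for r_j in relations if r_j != r_i)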
Being able to eliminate certain maintenance expressions
when calculating the effect on a view V of insertions
to base relations can significantly reduce the cost
of maintaining V . Although view maintenance expressions
themselves are not the main theme of this paper,
nevertheless this is an important stand-alone result.
4.1.1 Rewriting the maintenance expressions
to use auxiliary relations. Eliminating certain
maintenance expressions using Rule 4.1 allows us to
use the auxiliary views instead of base relations when
propagating insertions. After applying Rule 4.1, the
remaining maintenance expressions are rewritten using
the auxiliary views generated by Algorithm 3.1
by replacing each πσR_i subexpression with the corresponding
auxiliary view AR i for R i .
For example, assuming for now that the base relations have protected updates, the maintenance expressions for ΔV_St, ΔV_Sa, and ΔV_I in Table 4 can be eliminated by Rule 4.1 due to the referential integrity constraints between sale.store_id and store.store_id, line.sale_id and sale.sale_id, and line.item_id and item.item_id, respectively. Only ΔV_L, the expression calculating the effect of insertions to L, is not guaranteed to be empty. The maintenance expression ΔV_L is rewritten using the auxiliary views (shown in Table 3) as:
ΔV_L = A_store ⋈_{store_id} A_sale ⋈_{sale_id} ΔL ⋈_{item_id} A_item
Notice that the base relation L is never referenced
in the above maintenance expression, so an auxiliary
view for L is not needed. In addition, Sa is joined with
St in the maintenance expression, which is why it is
acceptable to store only the tuples in Sa that join with existing tuples in St; tuples in Sa that don't join with existing tuples in St won't contribute to the result. A
proof that the auxiliary views are sufficient in general
to evaluate the (reduced) maintenance expressions for
insertions appears in [12].
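As an illustration of evaluating the rewritten expression, the sketch below computes ΔV_L from the auxiliary views and the insertions ΔL using simple in-memory hash joins; the dict-per-tuple layout and attribute names are our own assumptions, and the final projection onto the view's attributes is omitted.

def hash_join(left, right, key):
    # Simple equi-join of two lists of dict tuples on a common attribute.
    index = {}
    for t in left:
        index.setdefault(t[key], []).append(t)
    return [dict(lt, **r) for r in right for lt in index.get(r[key], [])]

def delta_v_from_line_insertions(delta_line, a_store, a_sale, a_item):
    """Delta V_L = A_store join A_sale join delta_line join A_item,
    on store_id, sale_id and item_id respectively; no base relation
    is touched."""
    t = hash_join(a_store, a_sale, "store_id")
    t = hash_join(t, delta_line, "sale_id")
    return hash_join(t, a_item, "item_id")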
4.1.2 Effect of exposed updates. Suppose the view contains a join condition R_j.B = R_i.A, where A is a key of R_i, there is a referential integrity constraint from R_j.B to R_i.A, but R_i has exposed, rather than protected, updates. Dep(R_j, G) thus does not contain R_i. Recall that exposed updates can change the values
R i . Recall that exposed updates can change the values
of attributes involved in selection conditions (local
or join). We handle exposed updates as deletions of
tuples with the old attribute values followed by insertions
of tuples with the new attribute values, since
exposed updates may result in deletions or insertions
in the view. Thus, if R_i has exposed updates then ΔR_i may include tuples representing the new values
of exposed updates. Because these tuples can join with
existing tuples in R j (without violating the referential
integrity or key constraints), Property 4.1 does not
hold and Rule 4.1 cannot be used to eliminate the
maintenance expression propagating insertions to R i .
For example, suppose updates may occur to the
year attribute of Sa. Then an auxiliary view for L would be created as A_L = L ⋉_{item_id} A_I, as shown in Section 3.1. We cannot semijoin L with A_Sa in the
auxiliary view for L because new values of updated
tuples in Sa could join with existing tuples in L, where
the old values of the updated tuples didn't pass the
local selection conditions on Sa and hence weren't in
A Sa . That is, suppose the year of some sale tuple t was
changed from 1995 to 1996. Although the old value
of t doesn't pass the selection criteria year=1996 and
therefore wouldn't appear in A Sa , the new value of t
would, and since it could join with existing tuples in
L we cannot restrict AL to include only those tuples
that join with existing tuples in A Sa .
In this paper we assume that it is known in advance
whether each relation of a view V has exposed or protected
updates. If a relation has exposed updates, we
may need to store more information in the auxiliary
views in order to maintain V than if the relation had
protected updates. For example, we had to create an
auxiliary view for L when Sa had exposed updates,
where the auxiliary view for L wasn't needed when
Sa had protected updates.
An alternate way to consider updates, which
doesn't require advance knowledge of protected versus
exposed, is to assume that every base relation has
protected updates. Then, before propagating updates,
the updates to each base relation are divided into two
classes: updates that do not modify attributes involved
in selection conditions, and those that do. The
first class of updates can be propagated as protected
updates using the expressions of Section 4.3. Assuming
the second class of updates is relatively small, updates
in the second class could be propagated by issuing
queries back to the data sources.
4.1.3 Propagating insertions to multiple relations
at once. Maintenance expressions of the form
used in Table 4 propagate onto the view insertions to
one base relation at a time. To propagate insertions to
multiple base relations using the formulas in Table 4, when ΔV_{R_i} is calculated we assume that the insertions to base relations R_j (j < i) have already been applied to the base relations.
In [5, 8], maintenance expressions are given for
propagating changes to all base relations at once.
We consider the one relation at a time case because
the maintenance expressions are easier to explain;
the amount of data needed in the auxiliary views is
the same whether insertions (or deletions or updates)
are propagated one relation at a time or all at once
(see [12]).
4.2 Deletions
In this section we show how the effect on a view
of deletions to base relations can be calculated using
the auxiliary views. The view maintenance expression
for calculating the effects on an SPJ view V of
deletions to a base relation R is obtained similarly to
the expression for calculating the effects of insertions:
we substitute ∇R (deletions to R) for base relation R in the relational algebra expression for V . For example, the view maintenance expressions for calculating the effects on our cal toy sales view of deletions to store, sale, line, and item appear respectively as ∇V_St, ∇V_Sa, ∇V_L, and ∇V_I in Table 5. We use the notation ∇V_R to represent the deletions from view V due to deletions from base relation R.
Often we can simplify maintenance expressions for deletions to use the contents of the view itself if keys are preserved in the view. We do this using the following properties and rule for deletions in the presence of keys.
∇V_St = π_{store_id, manager} (σ_{state=CA} ∇St)
        ⋈_{store_id} π_{sale_id, store_id, month} (σ_{year=1996} Sa)
        ⋈_{sale_id} π_{line_id, sale_id, item_id, sales_price} L
        ⋈_{item_id} π_{item_id, item_name} (σ_{category=toy} I)
∇V_Sa = π_{store_id, manager} (σ_{state=CA} St)
        ⋈_{store_id} π_{sale_id, store_id, month} (σ_{year=1996} ∇Sa)
        ⋈_{sale_id} π_{line_id, sale_id, item_id, sales_price} L
        ⋈_{item_id} π_{item_id, item_name} (σ_{category=toy} I)
∇V_L  = π_{store_id, manager} (σ_{state=CA} St)
        ⋈_{store_id} π_{sale_id, store_id, month} (σ_{year=1996} Sa)
        ⋈_{sale_id} π_{line_id, sale_id, item_id, sales_price} ∇L
        ⋈_{item_id} π_{item_id, item_name} (σ_{category=toy} I)
∇V_I  = π_{store_id, manager} (σ_{state=CA} St)
        ⋈_{store_id} π_{sale_id, store_id, month} (σ_{year=1996} Sa)
        ⋈_{sale_id} π_{line_id, sale_id, item_id, sales_price} L
        ⋈_{item_id} π_{item_id, item_name} (σ_{category=toy} ∇I)
Table 5: Maintenance Expressions for Deletions
Property 4.2 (Deletion Property for Keys) Given a view V = π_proj(σ_cond(R_1 ⋈ R_2 ⋈ ... ⋈ R_n)), if the key of a relation R_i is preserved in V then the following equivalence holds:
π_proj(σ_cond(R_1 ⋈ ... ⋈ ∇R_i ⋈ ... ⋈ R_n)) = V ⋉_{key(R_i)} ∇R_i
Consider the join graph G(V ) of view V . Property
4.2 says that if V preserves the key of some relation R_i, then we can calculate the effect on V of deletions to R_i by joining V with ∇R_i on the key of R_i. The property holds because each tuple in V with the same value for the key of R_i as a tuple t in ∇R_i must have been derived from t. Conversely, all tuples in V that were derived from tuple t in ∇R_i must have the same value as t for the
key of R i . Therefore, the set of tuples in V that join
with t on the key of R i is exactly the set of tuples in
V that should be deleted when t is deleted from R i . A
similar property holds if the key of R i is not preserved
in V , but is equated by a selection condition in V to
an attribute C that is preserved in V . In this case the
effect of deletions from R i can be obtained by joining
V with ∇R_i using the join condition ∇R_i.A = V.C, where A is the key of R_i.
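A minimal sketch of Property 4.2 in operation, assuming dict-based tuples and a single-attribute key (our simplification, not the paper's code):

def view_deletions_for(view_tuples, deleted_tuples, key):
    """Tuples to remove from V due to deletions from R_i whose key `key`
    is preserved in V: exactly the view tuples whose key value matches
    some deleted tuple (Property 4.2)."""
    deleted_keys = {t[key] for t in deleted_tuples}
    return [v for v in view_tuples if v[key] in deleted_keys]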
Property 4.2 is used in [4] to determine when a view
is self-maintainable with respect to deletions from a
base relation. We extend their result with Property
4.3.
Property 4.3 (Deletion Property for Key Joins) Given a view V = π_proj(σ_cond(R_1 ⋈ ... ⋈ R_n)) satisfying the following conditions:
1. V contains join conditions R_i.A = R_{i+1}.A, R_{i+1}.A = R_{i+2}.A, ..., R_{i+k-1}.A = R_{i+k}.A,
2. attribute A is a key for R_{i+j} (0 ≤ j ≤ k), and
3. R_{i+k}.A is preserved in V ,
then the following equivalence holds (even without referential integrity constraints):
π_proj(σ_cond(R_1 ⋈ ... ⋈ ∇R_i ⋈ ... ⋈ R_n)) = V ⋉_{R_{i+k}.A} (∇R_i ⋈ πσR_{i+1} ⋈ ... ⋈ πσR_{i+k})
Let G(V ) be the join graph for view V . Property 4.3 generalizes Property 4.2 to say that if V preserves the key of some relation R_{i+k} and R_i joins to R_{i+k} along keys (that is, Need(R_i, G) = {R_{i+1}, ..., R_{i+k}} and does not include all the base relations of V ), then we can calculate the effect on V of deletions to R_i by joining ∇R_i with the sequence of relations up to R_{i+k} and then joining R_{i+k} with V . The property holds because tuples in V with the
same value for the key of R i+k as a tuple t in R i+k
must have been derived from t as explained in Property
4.2. Furthermore, since the joins between R_{i+k} and R_i are all along keys, each tuple in R_{i+k} can join with at most one tuple t′ in R_i, which means that tuples in V that are derived from tuple t in R_{i+k} must also be derived from tuple t′ in R_i. Conversely, if a tuple in V is derived from t′ in R_i, then it must have the same value for the key of R_{i+k} as some tuple t in R_{i+k} that t′ joins with. Therefore, the set of tuples in V that join on the key of R_{i+k} with some tuple t in R_{i+k} that joins along keys with a tuple t′ in R_i is exactly the set of tuples in V that should be deleted when t′ is deleted from R_i. As before, a similar property also holds if a key of R_{i+k} is not preserved in V but is equated by a selection condition in V to an attribute C that is preserved in V . In this case the effect of deletions from R_i can be obtained by joining V with R_{i+k} using the join condition R_{i+k}.A = C.
Rule 4.2 (Deletion Rule) Let V be a view with a tree-structured join graph G(V ). The maintenance expression calculating the effect on a view V of deletions to a base relation R_i may be simplified according to Property 4.3 to reference V unless Need(R_i, G) includes all the base relations of V except R_i. □
Rule 4.2 is used to simplify maintenance expressions
for deletions to use the contents of the view and fewer
base relations. This allows us to rewrite the maintenance
expressions for deletions to use the auxiliary
views instead of base relations.
4.2.1 Rewriting the maintenance expressions
to use auxiliary relations. After simplifying the
maintenance expressions according to Rule 4.2, the
simplified expressions are rewritten to use the auxiliary
views generated by Algorithm 3.1 by replacing
each πσR_i subexpression in the simplified maintenance
expression with the corresponding auxiliary
view AR i for R i .
The maintenance expressions of Table 5 are simplified using Rule 4.2 and then rewritten over the auxiliary views as described above. A proof that the auxiliary views are sufficient in general to evaluate the (simplified) maintenance expressions for deletions appears in [12].
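For instance, assuming as in the running example that store_id is the key of store and is preserved in the view, deletions from store reduce to matching the view against ∇St on store_id; the following self-contained snippet with made-up tuples illustrates this (our illustration only):

V = [{"store_id": 1, "manager": "Ann", "sale_id": 10, "item_name": "kite"},
     {"store_id": 2, "manager": "Bob", "sale_id": 11, "item_name": "yo-yo"}]
nabla_store = [{"store_id": 2, "manager": "Bob", "state": "CA"}]
# Rule 4.2 / Property 4.2: the view tuples to delete are exactly those whose
# preserved key store_id matches a deleted store tuple (the second row here).
deleted_keys = {t["store_id"] for t in nabla_store}
to_delete = [v for v in V if v["store_id"] in deleted_keys]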
4.3 Protected updates
In this section we show how the effect on a view
of protected updates to base relations can be calculated
using the auxiliary views. (Recall that exposed
updates are treated separately as deletions followed
by insertions.) We give two maintenance expressions
for calculating the effect on a view V of protected updates
to a base relation R: one returning the tuples to
delete from the view (denoted as ∇μV_R) and another returning the tuples to insert into the view (denoted as ΔμV_R). In practice, these pairs of maintenance expressions
usually can be combined into a single SQL
update statement.
The view maintenance expression for calculating the tuples to delete from an SPJ view V due to protected updates to a base relation R is obtained by substituting π^old(μR) (the old attribute values of the updated tuples in R) for base relation R in the relational algebra expression for V . (Recall that μR, π^old, and π^new were defined in Section 2.) The view maintenance expression for calculating the tuples to insert is obtained similarly by substituting π^new(μR) (the new attribute values of the updated tuples in R) for base relation R in the relational algebra expression for V . For example, the view maintenance expressions calculating the tuples to delete from our cal toy sales view due to protected updates to each of the base relations are given in Table 6. Expressions calculating the tuples to insert into the view cal toy sales are not shown, but can be obtained by substituting π^new for π^old in the expressions of Table 6. Note that the Table 6 expressions are similar to the deletion expressions of Table 5.
We simplify the maintenance expressions for protected updates similarly to the way we simplify the maintenance expressions for deletions, by using the contents of the view itself if keys are preserved in the view. We give the following properties and rule for updates in the presence of preserved keys.
∇μV_St = π^old_{store_id, manager} (σ_{state=CA} μSt)
         ⋈_{store_id} π_{sale_id, store_id, month} (σ_{year=1996} Sa)
         ⋈_{sale_id} π_{line_id, sale_id, item_id, sales_price} L
         ⋈_{item_id} π_{item_id, item_name} (σ_{category=toy} I)
∇μV_Sa = π_{store_id, manager} (σ_{state=CA} St)
         ⋈_{store_id} π^old_{sale_id, store_id, month} (σ_{year=1996} μSa)
         ⋈_{sale_id} π_{line_id, sale_id, item_id, sales_price} L
         ⋈_{item_id} π_{item_id, item_name} (σ_{category=toy} I)
∇μV_L  = π_{store_id, manager} (σ_{state=CA} St)
         ⋈_{store_id} π_{sale_id, store_id, month} (σ_{year=1996} Sa)
         ⋈_{sale_id} π^old_{line_id, sale_id, item_id, sales_price} μL
         ⋈_{item_id} π_{item_id, item_name} (σ_{category=toy} I)
∇μV_I  = π_{store_id, manager} (σ_{state=CA} St)
         ⋈_{store_id} π_{sale_id, store_id, month} (σ_{year=1996} Sa)
         ⋈_{sale_id} π_{line_id, sale_id, item_id, sales_price} L
         ⋈_{item_id} π^old_{item_id, item_name} (σ_{category=toy} μI)
Table 6: Maintenance Expressions for Removing Old Updates
In these expressions we use π^old and π^new to project the old and new attribute values, respectively, of the preserved attributes in μR_i and the (regular) attribute values for the preserved attributes of other relations in V . We use ⋈_{oldkey(R_i)} to denote joining on the attribute in which the key value before the update is held.
Property 4.4 (Protected Update Property for Keys) Given a view V = π_proj(σ_cond(R_1 ⋈ R_2 ⋈ ... ⋈ R_n)), if the key of a relation R_i is preserved in V , then the following equivalences hold:
∇μV_{R_i} = π^old(V ⋈_{oldkey(R_i)} μR_i)
ΔμV_{R_i} = π^new(V ⋈_{oldkey(R_i)} μR_i)
Property 4.5 (Protected Update Property for Key Joins) Given a view V = π_proj(σ_cond(R_1 ⋈ R_2 ⋈ ... ⋈ R_n)) satisfying the following conditions:
1. view V contains join conditions R_i.A = R_{i+1}.A, ..., R_{i+k-1}.A = R_{i+k}.A,
2. attribute A is a key for R_{i+j} (0 ≤ j ≤ k), and
3. R_{i+k}.A is preserved in V ,
then the following equivalences hold (even without referential integrity constraints):
∇μV_{R_i} = π^old(V ⋈_{oldkey(R_{i+k})} (μR_i ⋈ πσR_{i+1} ⋈ ... ⋈ πσR_{i+k}))
ΔμV_{R_i} = π^new(V ⋈_{oldkey(R_{i+k})} (μR_i ⋈ πσR_{i+1} ⋈ ... ⋈ πσR_{i+k}))
Properties 4.4 and 4.5 are similar to the corresponding
properties for deletions. Attributes of R i that are
involved in selection conditions are guaranteed not to
be updated, so it does not matter whether we test
the old or new value in selection conditions. Property
4.4 is used in [4] to determine when a view is
self-maintainable for base relation updates.
Consider the join graph G(V ) of view V . Property 4.5 generalizes Property 4.4; Property 4.5 says that if V preserves the key of some relation R_{i+k} and R_i joins to R_{i+k} along keys (that is, Need(R_i, G) = {R_{i+1}, ..., R_{i+k}} and does not include all the base relations of V ), then we can calculate the effect on V of protected updates to R_i by joining μR_i with the sequence of relations up to R_{i+k} and then joining R_{i+k} with V . As for deletions, a similar property also holds if a key of R_{i+k} is not preserved in V but is equated by a selection condition in V to an attribute C that is preserved in V . In this case the effect of updates to R_i can be obtained by joining V with R_{i+k} using the join condition R_{i+k}.A = C.
Rule 4.3 (Protected Update Rule) Let V be a view with a tree-structured join graph G(V ). The maintenance expressions calculating the effect on a view V of protected updates to a base relation R_i can be simplified according to Property 4.5 to reference V unless Need(R_i, G) includes all the base relations of V except R_i. □
Similar to the rule for deletions, Rule 4.3 is used
to simplify the maintenance expressions for ∇μV_R and ΔμV_R to use the contents of the view and fewer base
relations so that the maintenance expressions can be
rewritten in terms of the auxiliary views.
4.3.1 Rewriting the maintenance expressions
to use auxiliary relations. After simplifying the
maintenance expressions according to Rule 4.3, the
simplified expressions are rewritten to use the auxiliary
views generated by Algorithm 3.1 by replacing
each πσR_i subexpression in the maintenance expression
with the corresponding auxiliary view AR i for R i .
The rewriting is similar to the rewriting for insertions
and deletions. An example and proof that the auxiliary
views are sufficient in general to evaluate the
maintenance expressions are given in [12].
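A sketch of how the pair (∇μV_R, ΔμV_R) can be evaluated for a relation whose key is preserved in the view, with dict tuples, a single-attribute key, and a list of (old, new) pairs standing in for μR, is given below; these representational choices are our own simplifying assumptions.

def protected_update_effect(view_tuples, mu, key, updated_attrs):
    """mu: list of (old_tuple, new_tuple) pairs for the updated R_i tuples.
    Returns (to_delete, to_insert) for the view.  Joining on the old key
    value identifies the affected view tuples; pi^old keeps them as-is,
    pi^new overwrites the preserved attributes of R_i with the new values.
    Protected updates never change the key, so old and new keys agree."""
    old_index = {old[key]: new for old, new in mu}
    to_delete, to_insert = [], []
    for v in view_tuples:
        if v[key] in old_index:
            new = old_index[v[key]]
            to_delete.append(v)
            to_insert.append(dict(v, **{a: new[a] for a in updated_attrs}))
    return to_delete, to_insert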
5 Maintaining auxiliary views
Due to space constraints we give only an intuitive
argument, based upon join graphs, for why the set of
auxiliary views is itself self-maintainable. Details on
maintaining the auxiliary views efficiently and a proof
that the set of auxiliary views is self-maintainable appear
in the full version of the paper [12].
Recall that the auxiliary views derived by Algorithm 3.1 are of the form:
AR_i = π_P (σ_S (R_i)) ⋉_{C_1} AR_{k_1} ⋉ ... ⋉_{C_m} AR_{k_m},
where each C_l equijoins a foreign key of R_i with the corresponding key of R_{k_l}, for R_{k_l} in Dep(R_i, G). Because the joins are
along foreign key referential integrity constraints, each
semijoin could be replaced by a join. Thus, each auxiliary
view is an SPJ view, and its join graph can be
constructed as discussed in Section 3. Further, note
that the join graph for each auxiliary view is a sub-graph
of the graph for the original view, because each
join in an auxiliary view is also a join in the original
view. Thus, the information needed to maintain the
original view is also sufficient to maintain each of its
auxiliary views.
6 Related work
The problem of view self-maintainability was considered
initially in [1, 4]. For each modification type
(insertions, deletions, and updates), they identify classes of SPJ views that can be maintained using only
the view and the modification. [1] states necessary and
sufficient conditions on the view definition for the view
to be self-maintainable for updates specified using a
particular SQL modification statement (e.g., delete all
tuples where R.A > 3). [4] uses information about key
attributes to determine self-maintainability of a view
with respect to all modifications of a certain type.
In this paper we consider the problem of making a
view self-maintainable by materializing a set of auxiliary
views such that the original view and the auxiliary
views taken together are self-maintainable. Although
the set of base relations over which a view
is defined forms one such set of auxiliary views, our
approach is to derive auxiliary views that are much
smaller than storing the base relations in their en-
tirety. Identifying a set of small auxiliary views to
make another view self-maintainable is an important
problem in data warehousing, where the base relations
may not be readily available.
In [10], views are made self-maintainable by pushing
down selections and projections to the base relations
and storing the results at the warehouse. Thus,
using our terminology, they consider auxiliary views
based only on select and project operators. We improve
upon their approach by considering auxiliary
views based on select, project, and semijoin operators,
along with using knowledge about key and referential
constraints. We have shown in Section 1.1
that our approach can significantly reduce the sizes
of the auxiliary views. We show in [12] that auxiliary
views of the form our algorithm produces can be
(self-)maintained efficiently.
In [13], inclusion dependencies (similar to referential
integrity constraints) are used to determine when
it is possible to answer, from a view joining several relations, a query over a subset of the relations; e.g., given that V is a view joining R and other relations, when a query over R alone can be answered using V . We, on the other hand, use similar
referential integrity constraints to simplify view
maintenance expressions.
--R
Updating derived relations: Detecting irrelevant and autonomously computable updates.
Algorithms for deferred view maintenance.
IEEE Data Engineering Bulletin
Data integration using self-maintainable views
Incremental maintenance of views with duplicates.
Maintenance of Materialized Views: Problems
Materialized Views.
Maintaining views incrementally.
The Stanford Data Warehousing Project.
A framework for supporting data integration using the materialized and virtual approaches.
The Rejuvenation of Materialized Views.
The GMAP: A versatile tool for physical data independence.
View maintenance in a warehousing environment
Zhiyuan Chen , Chen Li , Jian Pei , Yufei Tao , Haixun Wang , Wei Wang , Jiong Yang , Jun Yang , Donghui Zhang, Recent progress on selected topics in database research: a report by nine young Chinese researchers working in the United States, Journal of Computer Science and Technology, v.18 n.5, p.538-552, September | select-project-join view;warehouse view;materialized views;data integrity;key integrity constraints;auxiliary data storage;materialized view maintenance;fast integrated data access;referential integrity constraints;data source querying;auxiliary views;self-maintainable views;data warehousing |
383209 | Scrambling query plans to cope with unexpected delays. | Accessing data from numerous widely-distributed sources poses significant new challenges for query optimization and execution. Congestion and failures in the network can introduce highly-variable response times for wide-area data access. This paper is an initial exploration of solutions to this variability. We introduce a class of dynamic, run-time query plan modification techniques that we call query plan scrambling. We present an algorithm that modifies execution plans on-the-fly in response to unexpected delays in obtaining initial requested tuples from remote sources. The algorithm both reschedules operators and introduces new operators into the query plan. We present simulation results that demonstrate how the technique effectively hides delays by performing other useful work while waiting for missing data to arrive. | Introduction
Ongoing improvements in networking technology
and infrastructure have resulted in a dramatic increase
in the demand for accessing and collating data
from disparate, remote data sources over wide-area
networks such as the Internet and intranets. Query
optimization and execution strategies have long been
studied in centralized, parallel, and tightly-coupled
distributed environments. Data access across widely-
distributed sources, however, imposes significant new
challenges for query optimization and execution for
two reasons: First, there are semantic and performance
problems that arise due to the heterogeneous
nature of the data sources in a loosely-coupled envi-
ronment. Second, data access over wide-area networks
involves a large number of remote data sources, intermediate
sites, and communications links, all of which
are vulnerable to congestion and failures. From the
To appear in the Proceedings of the Fourth International
Conference on Parallel and Distributed Information Systems
(PDIS'96), Miami Beach, Florida, December 1996.
† Laurent Amsaleg is supported by a post-doctoral fellowship
from INRIA Rocquencourt, France.
‡ Supported in part by NSF Grant IRI-94-09575, an IBM
SUR award, and a grant from Bellcore.
user's point of view, congestion or failure in any
of the components of the network is manifested as
highly-variable response time - that is, the time required
for obtaining data from remote sources can vary
greatly depending on the specific data sources accessed
and the current state of the network at the time that
such access is attempted.
The query processing problems resulting from heterogeneity
have been the subject of much attention in
recent years (e.g., [SAD ...]). In contrast, the impact of unpredictable response time on
wide-area query processing has received relatively little
attention. The work presented here is an initial
exploration into addressing problems of response-time
variability for wide-area data access.
1.1 Response Time Variability
High variability makes efficient query processing
difficult because query execution plans are typically
generated statically, based on a set of assumptions
about the costs of performing various operations and
the costs of obtaining data (i.e., disk and/or network
accesses). The causes of high-variability are typically
failures and congestion, which are inherently runtime
issues; they cannot be reliably predicted at query optimization
time or even at query start-up time. As a
result, the execution of a statically optimized query
plan is likely to be sub-optimal in the presence of unexpected
response time problems. In the worst case,
a query execution may be blocked for an arbitrarily
long time if needed data fail to arrive from remote
data sources.
The different types of response time problems that
can be experienced in a loosely-coupled, wide-area environment
can be categorized as follows:
• Initial Delay - There is an unexpected delay
in the arrival of the first tuple from a particular
remote source. This type of delay typically
appears when there is difficulty connecting to a
remote source, due to a failure or congestion at
that source or along the path between the source
and the destination.
• Slow Delivery - Data is arriving at a regular
rate, but this rate is much slower than the
expected rate. This problem can be the re-
sult, for example, of network congestion, resource
contention at the source, or because a different
(slower) communication path is being used (e.g.,
due to a failure).
• Bursty Arrival - Data is arriving at an unpredictable
rate, typically with bursts of data followed
by long periods of no arrivals. This problem
can arise from fluctuating resource demands
and the lack of a global scheduling mechanism in
the wide-area environment.
Because these problems can arise unpredictably at
runtime, they cannot be effectively addressed by static
query optimization techniques. As a result, we have
been investigating a class of dynamic, runtime query
plan modification techniques that we call query plan
scrambling. In this approach, a query is initially executed
according to the original plan and associated
schedule generated by the query optimizer. If how-
ever, a significant performance problem arises during
the execution, then query plan scrambling is invoked
to modify the execution on-the-fly, so that progress
can be made on other parts of the plan. In other
words, rather than simply stalling for slowly arriving
data, query plan scrambling attempts to hide unexpected
delays by performing other useful work.
There are three ways that query plan scrambling
can be used to help mask response time problems.
First, scrambling allows useful work to be done in the
hope that the cause of the problem is resolved in the
meantime. This approach is useful for all three classes
of problems described above. Second, if data are ar-
riving, but at a rate that hampers query processing
performance (e.g., in the Slow Delivery or Bursty Arrival
cases), then scrambling allows useful work to be
performed while the problematic data are obtained in
a background fashion. Finally, in cases where data are
simply not arriving, or are arriving far too slowly, then
scrambling can be used to produce partial results that
can then be returned to users and/or used in query
processing at a later time [TRV96].
1.2 Tolerating Initial Delays
In this work, we present an initial approach to query
plan scrambling that specifically addresses the problem
of Initial Delay (i.e., delay in receiving the initial
requested tuples from a remote data source). We describe
and analyze a query plan scrambling algorithm
that follows the first approach outlined above; namely,
other useful work is performed in the hope that the
problem will eventually be resolved, and the requested
data will arrive at or near the expected rate from then
on. The algorithm exploits, where possible, decisions
made by the static query optimizer and imposes no optimization
or execution performance overhead in the
absence of unexpected delays.
In order to allow us to clearly define the algorithm
and to study its performance, this work assumes an
execution environment with several properties:
• The algorithm addresses only response time delays
in receiving the initial requested tuples from
remote data sources. Once the initial delay is
over, tuples are assumed to arrive at or near the
originally expected rate. As stated previously,
this type of delay models problems in connecting
to remote data sources, as it is often experienced
in the Internet.
• We focus on query processing using a data-shipping
or hybrid-shipping approach [FJK96],
where data is collected from remote sources and
integrated at the query source. Only query processing
that is performed at the query source is
subject to scrambling. This approach is typical
of mediated database systems that integrate
data from distributed, heterogeneous sources,
e.g., [TRV96].
• Query execution is scheduled using an iterator
model [Gra93]. In this model every run-time operator
supports an open() call and a get-next()
call. Query execution starts by calling open() on
the topmost operator of the query execution plan
and proceeds by iteratively calling get-next() on
the topmost operator. These calls are propagated
down the tree; each time an operator needs to consume
data, it calls get-next() on its child (or chil-
dren) operator(s). This model imposes a schedule
on the operators in the query plan.
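A minimal Python rendering of this iterator interface is sketched below; the class and function names are our own, and the sketch only illustrates the calling discipline assumed in the paper, not the system's actual implementation.

class Scan:
    # Leaf iterator over a (possibly remote) base relation.
    def __init__(self, tuples):
        self.tuples = tuples
    def open(self):
        self.it = iter(self.tuples)
    def get_next(self):                 # None signals end-of-stream
        return next(self.it, None)

class Select:
    # Unary iterator: passes on the tuples that satisfy a predicate.
    def __init__(self, child, pred):
        self.child, self.pred = child, pred
    def open(self):
        self.child.open()
    def get_next(self):
        t = self.child.get_next()
        while t is not None and not self.pred(t):
            t = self.child.get_next()
        return t

def run(root):
    # The scheduler: open the topmost operator, then pull tuples until exhausted.
    root.open()
    out = []
    t = root.get_next()
    while t is not None:
        out.append(t)
        t = root.get_next()
    return out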
The remainder of the paper is organized as follows.
Section 2 describes the algorithm and gives an extended
example. Section 3 presents results from a
simulation study that demonstrate the properties of
the algorithm. Section 4 describes related work. Section
5 concludes with a summary of the results and a
discussion of future work.
2 Scrambling Query Plans
This section describes the algorithm for scrambling
queries to cope with initial delays in obtaining data
from remote data sources. The algorithm consists
of two phases: one that changes the execution order
of operations in order to avoid idling, and one that
synthesizes new operations to execute in the absence
of other work to perform. We first provide a brief
overview of the algorithm and then describe the two
phases in detail using a running example. The algorithm
is then summarized at the end of the section.
2.1 Algorithm Overview
Figure 1 shows an operator tree for a complex query
plan. Typically, such a complicated plan would be
generated by a static query optimizer according to its
cost model, statistics, and objective functions. At the
leaves of the tree are base relations stored at remote
sites. The nodes of the tree are binary operators (we
focus our study on hash-based joins) that are executed
at the query source site. 1
Unary operators, such as selections, sorting, and partitioning
are not shown in the figure.
Figure 1: Initial Query Tree
As discussed previously, we describe the scrambling
algorithm in the context of an iterator-based execution
model. This model imposes a schedule on the operators
of a query and drives the flow of data between
operators. The scheduling of operators is indicated in Figure 1 by the numbers associated with each operator. In the figure, the joins are numbered according
to the order in which they would be completed by an
iterator-based scheduler. The flow of data between
the operators follows the model discussed in [SD90],
i.e., the left input of a hash join is always materialized
while the right input is consumed in a pipelined
fashion.
The schedule implied by the tree in Figure 1 would
thus begin by materializing the left subtree of the root
node. Assuming that hash joins are used and that
there is sufficient memory to hold the hash tables for
relations A, C, and D (so no partitioning is necessary
for these relations), this materialization would consist
of the following steps (steps 1 and 2 are sketched in code after the list):
1. Scan relation A and build hash-table H_A using selected tuples;
2. In a pipelined fashion, probe H_A with (selected) tuples of B and build a hash-table containing the result of A ⋈ B (H_AB);
3. Scan C and build hash-table H_C;
4. Scan D and build hash-table H_D;
5. In a pipelined fashion, probe H_D, H_C and H_AB with tuples of E and build a hash-table containing the result of (A ⋈ B) ⋈ (C ⋈ D ⋈ E).
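Steps 1 and 2 above amount to a hash-table build over the selected tuples of A followed by a pipelined probe with the selected tuples of B; the sketch below (dict tuples, a single join attribute, and hypothetical predicates and names are our assumptions) shows just those two steps.

def build_hash_table(tuples, key, pred=lambda t: True):
    # Step 1: scan a relation, keep the selected tuples, hash them on the join key.
    table = {}
    for t in tuples:
        if pred(t):
            table.setdefault(t[key], []).append(t)
    return table

def probe(table, tuples, key, pred=lambda t: True):
    # Step 2: pipeline the (selected) tuples of the probing relation through
    # the hash table, emitting joined tuples one at a time.
    for t in tuples:
        if pred(t):
            for match in table.get(t[key], []):
                yield dict(match, **t)

# Hypothetical usage: H_A = build_hash_table(A, "a_key"); the stream
# probe(H_A, B, "a_key") could then be hashed again to form H_AB, and so on
# up the left-deep plan.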
The execution thus begins by requesting tuples
from the remote site where relation A is stored. If
there is a delay in accessing that site (say, because
this site is temporarily down), then the scan of A (i.e.,
step 1) is blocked until the site recovers. Under a traditional
iterator-based scheduling discipline, this delay
of A would result in the entire execution of the query
being blocked, pending the recovery of the remote site.
Given that unexpected delays are highly probable
in a wide-area environment, such sensitivity to delays
is likely to result in unacceptable performance. The
scrambling algorithm addresses this problem by attempting
to hide such delays by making progress on
other parts of the query until the problem is resolved.
The scrambling algorithm is invoked once a delayed
relation is detected (via a timeout mechanism). The
algorithm is iterative; during each iteration it selects
part of the plan to execute and materializes the corresponding
temporary results to be used later in the
execution.
The scrambling algorithm executes in one of two
phases. During Phase 1, each iteration modifies the
schedule in order to execute operators that are not
dependent on any data that is known to be delayed.
For example, in the query of Figure 1, Phase 1 might
result in materializing the join of relations C, D and E
while waiting for the arrival of A. During Phase 2, each
iteration synthesizes new operators (joins for example)
in order to make further progress. In the example, a
Phase 2 iteration might choose to join relation B with
the result of (C ⋈ D ⋈ E) computed previously.
At the end of each iteration the algorithm checks
to see if any delayed sources have begun to respond,
and if so, it stops iterating and returns to normal
scheduling of operators, possibly re-invoking scrambling
if additional delayed relations are later detected.
If, however, no delayed data has arrived during an
iteration, then the algorithm iterates again. The algorithm
moves from Phase 1 to Phase 2 when it fails to
find an existing operator that is not dependent on a
delayed relation. If, while in Phase 2, the algorithm is
unable to create any new operators, then scrambling
terminates and the query simply waits for the delayed
data to arrive. In the following sections we describe,
in detail, the two phases of scrambling and their interactions
2.2 Phase 1: Materializing Subtrees
2.2.1 Blocked and Runnable Operators
The operators of a query tree have producer-consumer
relationships. The immediate ancestor of a
given operator consumes the tuples produced by that
operator. Conversely, the immediate descendants of a
given operator produce the tuples that operator con-
sumes. The producer-consumer relationships create
execution dependencies between operators, as one operator
can not consume tuples before these tuples have
been produced. For example, a select operator can
not consume tuples of a base relation if that relation
is not available. In such a case the select operator is
blocked. If the select can not consume any tuples, it
can not produce any tuples. Consequently, the consumer
of the select is also blocked. By transitivity, all
the ancestors of the unavailable relation are blocked.
When the system discovers that a relation is un-
available, query plan scrambling is invoked. Scrambling
starts by splitting the operators of the query tree
into two disjoint queues: a queue of blocked operators
and a queue of runnable operators. These queues are
defined as follows:
Definition 2.1 Queue of Blocked Operators:
Given a query tree, the queue of blocked operators
contains all the ancestors of each unavailable relation.
Definition 2.2 Queue of Runnable Operators:
Given a query tree and a queue of blocked operators,
the queue of runnable operators contains all the operators
that are not in the queue of blocked operators.
Operators are inserted in the runnable and blocked
queues according to the order in which their execution
would be initiated by an iterator-based scheduler.
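The construction of the two queues can be sketched as follows, assuming the plan is represented as a tree whose internal nodes are operator identifiers and whose leaves are base-relation names; this representation is our own and not the paper's.

def split_queues(schedule, children, unavailable):
    """schedule: operator ids in iterator-scheduler initiation order.
    children: dict op -> list of child ops or base-relation names.
    unavailable: set of base relations currently known to be delayed.
    Returns (runnable, blocked) queues, both in schedule order."""
    def depends_on_unavailable(op):
        # An operator is blocked iff it is an ancestor of an unavailable relation.
        for c in children.get(op, []):
            if c in unavailable or depends_on_unavailable(c):
                return True
        return False
    blocked = [op for op in schedule if depends_on_unavailable(op)]
    runnable = [op for op in schedule if op not in blocked]
    return runnable, blocked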
2.2.2 Maximal Runnable Subtree
Each iteration during Phase 1 of query plan scrambling
analyzes the runnable queue in order to find a
maximal runnable subtree to materialize. A maximal
runnable subtree is defined as follows:
Definition 2.3 Maximal Runnable Subtree:
Given the query tree and the queues of blocked and
runnable operators, a runnable subtree is a subtree
in which all the operators are runnable. A runnable
subtree is maximal if its root is the first runnable descendant
of a blocked operator.
None of the operators belonging to a maximal
runnable subtree depend on data that is known to be
delayed. Each iteration of Phase 1 initiates the materialization
of the first maximal runnable subtree found.
The notion of maximal used in the definition is impor-
tant, as materializing the biggest subtrees during each
iteration tends to minimize the number of materializations
performed, hence reducing the amount of extra
I/O caused by scrambling. The materialization of a
runnable subtree completes only if no relations used
by this subtree are discovered to be unavailable during
the execution. 2 When the execution of a runnable
subtree is finished and its result materialized, the algorithm
removes all the operators belonging to that
subtree from the runnable queue. It then checks if
missing data have begun to arrive. If the missing data
from others, blocked relations are still unavailable, another
iteration is begun. The new iteration analyzes
(again) the runnable queue to find the next maximal
runnable subtree to materialize.
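With the same representation, each Phase 1 iteration can locate the subtree to materialize by scanning the runnable queue in schedule order for the first operator whose consumer is blocked; the following sketch is our illustration under those assumptions.

def first_maximal_runnable_subtree(runnable, blocked, parent):
    """parent: dict op -> consumer op (None for the plan root).
    A runnable operator roots a maximal runnable subtree when its consumer
    is blocked; since the runnable queue is kept in schedule order, the
    first such operator is the one whose subtree is materialized next."""
    blocked_set = set(blocked)
    for op in runnable:
        p = parent.get(op)
        if p is None or p in blocked_set:
            return op            # root of the maximal runnable subtree
    return None                  # runnable queue empty: switch to Phase 2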
2.2.3 Subtrees and Data Unavailability
It is possible that during the execution of a runnable
subtree, one (or more) of the participating base relations
is discovered to be unavailable. This is because
a maximal runnable subtree is defined with respect
to the current contents of the blocked and runnable
queues. The runnable queue is only a guess about
the real availability of relations. When the algorithm
inserts operators in the runnable queue, it does not
know whether their associated relations are actually
available or unavailable. This will be discovered only
when the corresponding relations are requested.
In the case where a relation is discovered to be
unavailable during the execution of a runnable sub-
tree, the current iteration stops and the algorithm updates
the runnable and blocked queues. All the ancestors
of the unavailable relation are extracted from
the runnable queue and inserted in the blocked queue.
Once the queues are updated, the scrambling of the
query plan initiates a new Phase 1 iteration in order
to materialize another maximal runnable subtree.
2 Note that in the remainder of this paper, we use "maxi-
mal runnable subtree" and "runnable subtree" interchangeably,
except where explicitly noted.
Figure 2: Blocked and Runnable Operators with Relation A Unavailable
2.2.4 Termination of Phase 1
At the end of each iteration, the algorithm checks
for data arrival. If it is discovered that an unavailable
relation has begun to arrive, the algorithm updates
the blocked and runnable queues. The ancestors of
the unblocked relation are extracted from the blocked
queue and inserted in the runnable queue. Note that
any ancestors of the unblocked relation that also depend
on other blocked relations are not extracted from
the queue. Phase 1 then terminates and the execution
of the query returns to normal iterator-based scheduling
of operators. If no further relations are blocked,
the execution of the query will proceed until the final
result is returned to the user. The scrambling algorithm
will be re-invoked, however, if the query execution
blocks again.
Phase 1 also terminates if the runnable queue is
empty. In this case, Phase 1 can not perform any other
iteration because all remaining operators are blocked.
When this happens, query plan scrambling switches
to Phase 2. The purpose of the second phase is to
process the available relations when all the operators
of the query tree are blocked. We present the second
phase of query plan scrambling in Section 2.3. First,
however, we present an example that illustrates all the
facets of Phase 1 described above.
2.2.5 A Running Example
This example reuses the complex query tree presented
at the beginning of Section 2. To discuss cases
where data do or do not need to be partitioned before being joined, we assume that tuples of relations A, B, C, D, and E do not need to be partitioned. In
contrast, we assume that the tuples of relations F, G,
H and I have to be partitioned. To illustrate the behavior
of Phase 1, we follow the scenario given below:
1. When the execution of the query starts, relation A
is discovered to be unavailable.
2. During the third iteration, relation G is discovered
to be unavailable.
3. The tuples of A begin to arrive at the query execution
site before the end of the fourth iteration.
4. At the time Phase 1 terminates, no tuples of G
have been received.
The execution of the example query begins by requesting
tuples from the remote site owning relation A.
Following the above scenario, we assume relation A is
Figure 3: Query Tree During Iterations 1 and 2
Figure 4: G Unavailable; X2 Materialized
unavailable (indicated by the thick solid line in Figure
2). The operators that are blocked by the delay of
A are depicted using a dashed line.
The unavailability of A invokes Phase 1 which updates
the blocked and runnable queues and initiates
its first iteration. This iteration analyses the runnable
queue and finds that the first maximal runnable sub-tree
consists of a unary operator that selects tuples
from relation B. 3 Once the operator is materialized
(i.e., selected tuples of B are on the local disk stored
in the relation B'), the algorithm checks for the arrival
of the tuples of A. Following the above scenario,
we assume that the tuples of A are still unavailable,
so another iteration is initiated. This second iteration
finds the next maximal runnable subtree to be the one
rooted at operator 3. Note the subtree rooted at operator
2 is not maximal since its consumer (operator
is not blocked.
Figure
3 shows the materialization of the runnable
subtrees found by the first two iterations of query
scrambling. Part (a) of this figure shows the effect of
materializing of the first runnable subtree: the local
relation B' contains the materialized and selected tuples
of the remote relation B. It also shows the second
runnable subtree (indicated by the shaded grey area).
Figure
3(b) shows the query tree after the materialization
of this second runnable subtree. The materialized
result is called X1.
Once X1 is materialized, another iteration starts
since, in this example, relation A is still unavailable.
The third iteration finds the next runnable subtree
rooted at operator 7 which joins F, G, H and I (as
stated above, these relations need to be partitioned
before being joined). The execution of this runnable
subtree starts by building the left input of operator 5
(partitioning F into F'). It then requests relation G
in order to partition it before probing the tuples of
F. In this scenario, however, G is discovered to be
unavailable, triggering the update of the blocked and
runnable queues. Figure 4(a) shows that operators 5
and 7 are newly blocked operators (operator 8 was already
blocked due to the unavailability of A). Once the
queues of operators are updated, another iteration of
scrambling is initiated to run the next runnable sub-
tree, i.e., the one rooted at operator 6 (indicated by
the shaded grey area in the figure). The result of this
execution is called X2.
3 As stated earlier, operators are inserted into the queues with
respect to their execution order.
Figure
5 illustrates the next step in the scenario,
i.e., it illustrates the case where after X2 is materialized
it is discovered that the tuples of relation A
have begun to arrive. In this case, the algorithm updates
the runnable and blocked queues. As shown
in
Figure
5(a), operators 1 and 4 that were previously
blocked are now unblocked (operator 8 remains
blocked however). Phase 1 then terminates and returns
to the normal iterator-based scheduling of operators
which materializes the left subtree of the root
node (see Figure 5(b)). The resulting relation is called
X3.
After X3 is materialized, the query is blocked on G
so Phase 1 is re-invoked. Phase 1 computes the new
contents of the runnable and blocked queue and discovers
that the runnable queue is empty since all remaining
operators are ancestors of G. Phase 1 then
terminates and the scrambling of the query plan enters
Phase 2. We describe Phase 2 of the algorithm in
the next section.
2.3 Phase 2: Creating New Joins
Scrambling moves into Phase 2 when the runnable
queue is empty but the blocked queue is not. The goal
of Phase 2 is to create new operators to be executed.
Specifically, the second phase creates joins between
relations that were not directly joined in the original
query tree, but whose consumers are blocked (i.e., in
the blocked queue) due to the unavailability of some
other data.
In contrast to Phase 1 iterations, which simply adjust
scheduling to allow runnable operators to exe-
cute, iterations during Phase 2 actually create new
joins. Because the operations that are created during
Phase 2 were not chosen by the optimizer when
the original query plan was generated, it is possible
that these operations may entail a significant amount
of additional work. If the joins created and executed
by Phase 2 are too expensive, query scrambling could
result in a net degradation in performance. Phase 2,
Figure 5: Relation A Available
Figure 6: Performing a New Join in Phase 2
therefore, has the potential to negate or even reverse
the benefits of scrambling if care is not taken. In this
paper we use the simple heuristic of avoiding Cartesian
products to prevent the creation of overly expensive
joins during Phase 2. In Section 3, we analyze the
performance impact of the cost of created joins relative
to the cost of the joins in the original query plan. One
way to ensure that Phase 2 does not generate overly
expensive joins is to involve the query optimizer in the
choice of new joins. Involving the optimizer in query
scrambling is one aspect of our ongoing work.
2.3.1 Creating New Joins
At the start of Phase 2, the scrambling algorithm
constructs a graph G of possible joins. Each node in
G corresponds to a relation, and each edge in G indicates
that the two connected nodes have common
join attributes, and thus can be joined without causing
a Cartesian product. Unavailable relations are not
placed into G.
Once G is constructed, Phase 2 starts to iteratively
create and execute new join operators. Each iteration
of Phase 2 performs the following steps:
1. In G, find the two leftmost joinable (i.e., con-
nected) relations i and j. The notion of leftmost
is with respect to the order in the query plan. If
there are no joinable relations in G, then terminate
scrambling.
2. Create a new join operator i ⋈ j.
3. Materialize i ⋈ j. Update G by replacing i and j with the materialized result of i ⋈ j. Update
runnable and blocked queues. Update query tree.
4. Test to see if any unavailable data has arrived. If
so, then terminate scrambling, else begin a new
iteration.
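The construction of G and the iteration just described can be sketched as follows. This is only an illustration: the leftmost-pair selection is simplified, and the materialization and arrival hooks are assumed interfaces rather than the system's actual API.

```python
import itertools

def joinable(attrs, a, b):
    # An edge of G: the two relations share at least one join attribute,
    # so joining them does not produce a Cartesian product.
    return bool(attrs[a] & attrs[b])

def phase2(plan_order, join_attrs, unavailable, arrived, materialize_join):
    """Sketch of Phase 2.

    plan_order lists the available relations in the left-to-right order of
    the original plan; join_attrs maps each relation to its set of join
    attributes; arrived and materialize_join are assumed engine hooks."""
    nodes = [r for r in plan_order if r not in unavailable]   # nodes of G
    attrs = dict(join_attrs)
    while True:
        # 1. Find the two leftmost joinable relations in G.
        pair = next(((i, j) for i, j in itertools.combinations(nodes, 2)
                     if joinable(attrs, i, j)), None)
        if pair is None:
            return "terminate"              # no joinable relations remain
        i, j = pair
        # 2-3. Create the join i ⋈ j, materialize it, and update G by
        # replacing i and j with the materialized result.
        result = materialize_join(i, j)
        attrs[result] = attrs[i] | attrs[j]
        nodes = [result if r == i else r for r in nodes if r != j]
        # 4. Stop scrambling as soon as any delayed relation starts to arrive.
        if any(arrived(r) for r in unavailable):
            return "resume"
```

On the running example, the available relations are X3, F' and X2; X3 shares join attributes only with the unavailable relation G, so the first joinable pair found is F' and X2, exactly as in Figure 6.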
Figure
6 demonstrates the behavior of Phase 2 by
continuing the example of the previous section. The
figure is divided into three parts. Part (a) shows the
query tree at the end of Phase 1. In this case, G would
contain F', X2, and X3. Assume that, in G, relations
F' and X2 are directly connected but relation X3 is
not connected to either (i.e., assume it shares join attributes
only with the unavailable relation G). In this
example, therefore, F' and X2 are the two leftmost
joinable relations; X3 is the leftmost relation, but it
is not joinable.
Figure
6(b) shows the creation of the new join of F'
and X2. The creation of this join requires the removal
of join number 7 from the blocked queue and its replacement
in the ordering of execution by join number
5. Finally, Figure 6(c) shows the materialization of the
created operator. The materialized join is called X4.
At this point, G is modified by removing F' and X2
and inserting X4, which is not joinable with X3, the
only other relation in G.
2.3.2 Termination of Phase 2
After each iteration of Phase 2, the number of relations
in G is reduced. Phase 2 terminates if G is
reduced to a single relation, or if there are multiple
relations but none that are joinable. As shown in the
preceding example, this latter situation can arise if
the attribute(s) required to join the remaining relations
are contained in an unavailable relation (in this
case, relation G).
Phase 2 can also terminate due to the arrival
of unavailable data. If such data arrive during a
Phase 2 iteration, then, at the end of that itera-
tion, the runnable and blocked queues are updated
accordingly and the control is returned to the normal
iterator-based scheduling of operators. As mentioned
for Phase 1, query scrambling may be re-invoked later
to cope with other delayed relations.
2.3.3 Physical Properties of Joins
The preceding discussion focused on restructuring
logical nodes of a query plan. The restructuring of
physical plans, however, raises additional considera-
tions. First, adding a new join may require the introduction
of additional unary operators to process the
inputs of this new join so that it can be correctly ex-
ecuted. For example, a merge join operator requires
that the tuples it consumes are sorted, and thus may
require that sort operators be applied to its inputs.
Second, deleting operators, as was done in the preceding
example, may also require the addition of unary
operators. For example, relations may need to be
repartitioned in order to be placed as children of an existing
hybrid hash node. Finally, changing the inputs
of an existing join operator may also require modifica-
tions. If the new inputs are sufficiently different than
the original inputs, the physical join operators may
have to be modified. For example, an indexed nested
loop join might have to be changed to a hash join if the
inner relation is replaced by one that is not indexed
on the join attribute.
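The following sketch makes these physical fix-ups concrete for three common join methods. The property names and the returned fix-up actions are illustrative assumptions; a real optimizer would consult its own physical-operator catalog.

```python
def fixups_for_new_join(join_method, left, right):
    """Return the unary operators to insert (or changes to make) when a newly
    created join is given inputs that do not satisfy its physical requirements.

    Each input is a dict of physical properties, e.g.
    {"name": "X2", "join_attr": "a", "sorted_on": None,
     "partitioned_on": None, "indexed_on": set()}.
    The join methods and property names here are illustrative assumptions."""
    fixes = []
    if join_method == "merge_join":
        # A merge join consumes inputs sorted on the join attribute.
        for side in (left, right):
            if side["sorted_on"] != side["join_attr"]:
                fixes.append(("sort", side["name"], side["join_attr"]))
    elif join_method == "hybrid_hash_join":
        # Inputs placed under an existing hash node may need repartitioning.
        for side in (left, right):
            if side["partitioned_on"] != side["join_attr"]:
                fixes.append(("repartition", side["name"], side["join_attr"]))
    elif join_method == "indexed_nested_loop":
        # If the inner relation is not indexed on the join attribute, the
        # physical join operator itself must change (e.g. to a hash join).
        if right["join_attr"] not in right["indexed_on"]:
            fixes.append(("replace_join", "hash_join"))
    return fixes
```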
2.4 Summary and Discussion
The query plan scrambling algorithm can be summarized
as follows:
ffl When a query becomes blocked (because relations
are unavailable), query plan scrambling is initi-
ated. It first computes a queue of blocked operators
and a queue of runnable operators.
Phase 1 then analyses the queue of runnable oper-
ators, picks a maximal runnable subtree and materializes
its result. This process is repeated, i.e.,
it iterates, until the queue of runnable operators
is empty. At this point, the system switches to
Phase 2.
Phase 2 tries to create a new operator that joins
two relations that are available and joinable. This
process iterates until no more joinable relations
can be found.
ffl After each iteration of the algorithm, it checks to
see if any unavailable data have arrived, and if
so, control is returned to normal iterator-based
scheduling of operators, otherwise another iteration
is performed.
There are two additional issues regarding the algorithm
that deserve mention, here. The first issue
concerns the knowledge of the actual availability of re-
lations. Instead of discovering, as the algorithm does
now, during the execution of the operations performed
by each iteration that some sources are unavailable, it
is possible to send some or all of the initial data requests
to the data sources as soon as the first relation
is discovered to be unavailable. Doing so would give
the algorithm immediate knowledge of the availability
status of all the sources. Fortunately, using the
iterator model, opening multiple data sources at once
does not force the query execution site to consume all
the tuples simultaneously - the iterator model will
suspend the flow of tuples until they are consumed by
their consumer operators.
The second issue concerns the potential additional
work of each phase. As described previously, Phase 1
materializes existing subtrees that have been optimized
prior to runtime by the query optimizer. The
relative overhead of each materialization may be more
or less significant depending on the I/O pattern of the
scrambled subtree compared to its unscrambled ver-
sion. For example, if a subtree consists of a single
select on a base relation, its materialization during
Phase 1 is pure overhead since the original query plan
was selecting tuples as they were received, without involving
any I/O. On the other hand, the overhead of
materializing an operator that partitions data is comparatively
less important. In this case, both the original
query plan and the scrambled plan have to perform
disk I/Os to write the partitions on disk for later pro-
cessing. The scrambled plan, however, writes to disk
one extra partition that would be kept in memory by
the original non-scrambled query plan.
Phase 2, however, can be more costly as it creates
new joins from scratch using the simple heuristic of
avoiding Cartesian products. The advantage of this
approach is its simplicity. The disadvantage, however,
is the potential overhead caused by the possibly sub-optimal
joins. We study the performance impact of
varying costs of the created joins in the following section.
The costs of materializations during Phase 1 and
of new joins during Phase 2 may, in certain cases,
negate the benefits of scrambling. Controlling these
costs raises the possibility of integrating scrambling
with an existing query optimizer. This would allow
us to estimate the costs of iterations in order to skip,
for example, costly materializations or expensive joins.
Such an integration is one aspect of our ongoing work.
Parameter Value Description
NumSites 8 number of sites
Mips
NumDisks 1 number of disks per site
DskPageSize 4096 size of a disk page (bytes)
NetBw 1 network bandwidth (Mbit/sec)
NetPageSize 8192 size of a network page (bytes)
Compare 4 instr. to apply a predicate
HashInst 25 instr. to hash a tuple
Move 2 instr. to copy 4 bytes
Table 1: Simulation Parameters and Main Settings
3 Performance
In this section, we examine the main performance
characteristics of the query scrambling algorithm. The
first set of experiments shows the typical performance
of any query that is scrambled. The second set of
experiments studies the sensitivity of Phase 2 to the
selectivity of the new joins it creates. We first describe
the simulation environment used to study the
algorithm.
3.1 Simulation Environment
To study the performance of the query scrambling
algorithm, we extended an existing simulator [FJK96]
that models a heterogeneous, peer-to-peer
database system such as SHORE [CDF 94]. The simulator
we used provides a detailed model of query processing
costs in such a system. Here, we briefly describe
the simulator, focusing on the aspects that are
pertinent to our experiments.
Table
1 shows the main parameters for configuring
the simulator, and the settings used for this study.
Every site has a CPU whose speed is specified by the
Mips parameter, NumDisks disks, and a main-memory
buffer pool. For the current study, the simulator was
configured to model a client-server system consisting
of a single client connected to seven servers. Each
site, except the query execution site, stores one base
relation.
In this study, the disk at the query execution site
(i.e., client) is used to store temporary results. The
disk model includes costs for random and sequential
physical accesses and also charges for software operations
implementing I/Os. The unit of disk I/O for
the database and the client's disk cache are pages of
size DskPageSize. The unit of transfer between sites
are pages of size NetPageSize. The network is modeled
simply as a FIFO queue with a specified band-width
(NetBw); the details of a particular technology
(Ethernet, ATM) are not modeled. The simulator also
charges CPU instructions for networking protocol op-
erations. The CPU is modeled as a FIFO queue and
the simulator charges for all the functions performed
by query operators like hashing, comparing, and moving
tuples in memory.
In this paper, the simulator is used primarily to
demonstrate the properties of the scrambling algo-
rithm, rather than for a detailed analysis of the algo-
Figure 7: Query Tree Used for the Experiments
rithm. As such, the specific settings used in the simulator
are less important than the way in which delay is
either hidden or not hidden by the algorithm. In the
experiments, the various delays were generated by simply
requesting tuples from an "unavailable" source at
the end of the various iterations of query plan scram-
bling. That is, rather than stochastically generating
delays, we explicitly imposed a series of delays in order
to study the behavior of the algorithm in a controlled
manner. For example, to simulate the arrival
of blocked tuples during, say, the third iteration of
Phase 1, we scrambled the query 3 times, and then
initiated the transfer of tuples from the "blocked" relation
so that the final result of the query could eventually
be computed.
3.2 A Query Tree for the Experiments
For all the experiments described in this section, we
use the query tree represented in Figure 7. We use this
query tree because it demonstrates all of the features
of scrambling and allows us to highlight the impact on
performance of the overheads caused by materializations
and created joins.
Each base relation has 10,000 tuples of 100 bytes
each. We assume that the join graph is fully con-
nected, that is, any relation can be (equi-)joined with
any other relation and that all joins use the same join
attribute. In the first set of experiments, we study
the performance of query plan scrambling in the case
where all the joins in the query tree produce the same
number of tuples, i.e., 1,000 tuples. In the second set
of experiments, however, we study the case where the
joins in the query tree have different selectivities and
thus produce results of various sizes.
For all the experiments, we study the performance
of our approach in the case where a single relation
is unavailable. This relation is the left-most relation
(i.e., relation A), which represents the case where query scrambling is the most beneficial. Examining the cases with other unavailable relations would not change the
basic lessons of this study.
For each experiment described below, we evaluate
the algorithm in the cases where it executes in the context
of a small or a large memory. In the case of large
memory, none of the relations used in the query tree
(either a base relation or an intermediate result) need
to be partitioned before being processed. In the case
of small memory, every relation (including intermediate
results) must be partitioned. Note, that since all
joins in the test query use the same join attribute, no
re-partitioning of relations is required when new joins
are created in this case.
3.3 Experiment 1: The Step Phenomenon
Figure
8 shows the response time for the scrambled
query plans that are generated as the delay for
relation A (the leftmost relation in the plan) is var-
ied. The delay for A is shown along the X-axis, and
is also represented as the lower grey line in the figure.
The higher grey line shows the performance of the unscrambled
query, that is, if the execution of the query
is simply delayed until the tuples of relation A begin
to arrive. The distance between these two lines therefore
is constant, and is equal to the response time for
the original (unscrambled) query plan, which is 80.03
seconds in this case. In this experiment, the memory
size of the query execution site is small. With this set-
ting, the hash-tables for inner relations for joins can
not entirely be built in memory so partitioning is required
The middle line in Figure 8 shows the response time
for the scrambled query plans that are executed for
various delays of A. In this case, there are six possible
scrambled plans that could be generated. As stated
in Sections 2.2 and 2.3, the scrambling algorithm is
iterative. At the end of each iteration it checks to see
if delayed data has begun to arrive, and if so, it stops
scrambling and normal query execution is resumed. If,
however, at the end of the iteration, the delayed data
has still not arrived, another iteration of the scrambling
algorithm is initiated. The result of this execution
model is the step shape that can be observed in
Figure
8.
The width of each step is equal to the duration of
the operations that are performed by the current iteration
of the scrambling algorithm, and the height of the
step is equal to the response time of the query if normal
processing is resumed at the end of that iteration.
For example, in this experiment, the first scrambling
iteration results in the retrieval and partitioning of relation
B. This operation requires 12.23 seconds. If at
the end of the iteration, tuples of relation A have begun
to arrive then no further scrambling is done and
normal query execution resumes. The resulting execution
in this case, has a response time of 80.10 seconds.
Thus, the first step shown in Figure 8 has a width of
12.23 seconds and a height of 80.10 seconds. Note that
in this case, scrambling is effective at hiding the delay
of A; the response time of the scrambled query is
nearly identical to that of original query with no delay
of A.
If no tuples of A have arrived at the end of the
first iteration, then another iteration is performed. In
this case, the second iteration retrieves, partitions, and
joins relations C and D. As shown in Figure 8, this iteration
requires an additional 26.38 seconds, and if
A begins to arrive during this iteration, then the resulting
query plan has a total response time of 80.90
seconds. Thus, in this experiment, scrambling is able
to hide delays of up to 38.61 seconds with a penalty of
no more than 0.80 seconds (i.e., 1%) of the response
Figure 8: Response Times of Scrambled Query Plans (Small Memory, Varying the Delay of A.)
time of the original query with no delay. This corresponds
to a response time improvement of up to 32%
compared to not scrambling.
If, at the end of the second iteration, tuples of A
have still failed to arrive, then the third iteration is
initiated. In this case however, there are no more
runnable subtrees, so scrambling switches to Phase 2,
which results in the creation of new joins (see Section
2.3). In this third iteration, the result of C ⋈ D is
partitioned and joined with relation B. This iteration
has a width of only 2.01 seconds, because both inputs
are already present, B is already partitioned, and the
result of C ⋈ D is fairly small. The response time of the
resulting plan is 82.22 seconds, which again represents
a response time improvement of up to 32% compared
to not scrambling.
The remaining query plans exhibit similar behavior
Table
2 shows the additional operations and the
overall performance for each of the possible scrambled
plans. In this experiment, the largest relative benefit
(approximately 44%) over not scrambling is obtained
when the delay of A is 69.79 seconds, which is the
time required to complete all six iterations. After this
point, there is no further work for query scrambling to
do, so the scrambled plan must also wait for A to ar-
rive. As can be seen in Figure 8, at the end of iteration
six the response time of the scrambled plan increases
linearly with the delay of A. The distance between the
delay of A and the response time of the scrambled plan
is the time that is required to complete the query once
A arrives.
Although it is not apparent in Figure 8, the first
scrambled query is slightly slower than the unscrambled
query plan when A is delayed for a very short
amount of time. For a delay below 0.07 seconds, the
response time of the scrambled query is 80.10 seconds
while it is 80.03 seconds for the non-scrambled query.
When joining A and B, as the unscrambled query does,
B is partitioned during the join, allowing one of the
partitions of B to stay in memory. Partitioning B before
joining it with A, as the first scrambled query plan
does, forces this partition to be written back to disk
and to be read later during the join with A. When
A is delayed by less than the time needed to perform
these additional I/Os, it is cheaper to stay idle waiting
for A.
3.4 Experiment 2: Sensitivity of Phase 2
In the previous experiment all the joins produced
the same number of tuples, and as a result, all of the
operations performed in Phase 2 were beneficial. In
this section, we examine the sensitivity of Phase 2
to changes in the selectivities of the joins it creates.
Varying selectivities changes the number of tuples produced
by these joins which affects the width and the
height of each step. Our goal is to show cases where
the benefits of scrambling vary greatly, from clear improvements
to cases where scrambling performs worse
than no-scrambling.
For the test query, the first join created in Phase 2
is the join of relation B with the result of C ⋈ D (which
was materialized during Phase 1). In this set of experi-
ments, we vary the selectivity of this new join to create
a result of a variable size. The selectivity of this join is
adjusted such that it produces from 1,000 tuples up to
several thousand tuples. The other joins that Phase 2
may create behave like functional joins and they simply
carry all the tuples created by (B ⋈ (C ⋈ D)) through the query tree. At the time these tuples are joined with A, the number of tuples carried along the query tree returns to normal and drops to 1,000.
Varying the selectivity of the first join produced by
Phase 2 is sufficient to generate a variable number of
tuples that are carried all along the tree by the other
joins that Phase 2 may create.
The next two sections present the results of this sensitivity
analysis for a small and a large memory case.
Scrambled Plan #   Performed by Iteration   Total Delay   Response Time   Savings
1                  Partition B              0-12.23       80.10           up to 13.18%
Table 2: Delay Ranges and Response Times of Scrambled Query Plans
Figure 9: Response Times of Scrambled Query Plans (Small Memory, Varying Selectivity and Delay.)
As stated previously, when the memory is small, relations
have to be partitioned before being joined (as
in the previous experiment). This partitioning adds
to the potential cost of scrambled plans because it
results in additional I/O that would not have been
present in the unscrambled plan. When the memory
is large, however, hash-tables can be built entirely in
memory so relations do not need to be partitioned.
Thus, with large memory the potential overhead of
scrambled plans is lessened.
3.4.1 Small Memory Case
In this experiment, we examine the effectiveness
of query scrambling when the selectivity of the first
join created by Phase 2 is varied. Figure 9 shows the
response time results for 3 different selectivities. As
in the previous experiment, the delay for A is shown
along the X-axis and is also represented as the lower
grey line in the figure. The higher grey line shows the
response time of the unscrambled query, which as be-
fore, increases linearly with the delay of A. These two
lines are exactly the same as the ones presented in the
previous experiment.
The solid line in the middle of the figure shows
the performance of a scrambled query plan that stops
scrambling right at the end of Phase 1 (in this case,
two iterations are performed during Phase 1) without
initiating any Phase 2 iterations. Note that this line
becomes diagonal after the end of Phase 1 since the
system simply waits until the tuples of A arrive before
computing the final result of the query.
Intuitively, it is not useful to perform a second
phase for scrambled queries when the resulting response
time would be located above this line. Costly
joins that would be created by Phase 2 would consume
a lot of resources for little improvement. On the
other hand, Phase 2 would be beneficial for scrambled
queries whose resulting response time would be below
this line since the additional overhead would be small
and the gain large.
The dashed and dotted lines in the figure illustrate
the tradeoffs. These lines show the response time for
the scrambled query plans that are executed for various
delays of A and for various selectivities. Note all
these scrambled query plans share the same response
times for the iterations performed during Phase 1.
These two first iterations correspond exactly to the
scrambled plans 1 and 2 described in the previous ex-
periment. At the end of the second iteration (38.61
seconds), however, if the tuples of A have still failed
to arrive, a third iteration is initiated and the query
scrambling enters Phase 2 which creates new joins.
The dotted line shows the performance when the
selectivity for the new join is such that it produces a
result of 1,000 tuples. This line is identical to the one
shown in the previous experiment since all the joins
were producing 1,000 tuples.
With the second selectivity, the first join created by
the second phase produces 10,000 tuples. If at the end
of this iteration, the tuples of A have still not arrived,
another iteration is initiated and this iteration has to
process and to produce 10,000 tuples. The corresponding
line in the figure is the lowest dashed line. In this
case, where 10 times more tuples have to be carried
along the scrambled query plans, each step is higher
(roughly 12 seconds) and wider since more tuples have
to be manipulated than in the case where only 1,000
tuples are created. Even with the additional overhead
of these 10,000 tuples, however, the response times of
the scrambled query plans are far below the response
times of the unscrambled query with equivalent delay.
When the new join produces 50,000 tuples (the
higher dashed line in the figure), the response time of the scrambled plans is almost equal to or even
worse than that of the original unscrambled query including
the delay for A. In this case, it is more costly
to carry the large number of tuples through the query
tree than to simply wait for blocked data to arrive.
Figure 10: Response Times of Scrambled Query Plans (Large Memory, Varying Selectivity and Delay.)
3.4.2 Large Memory Case
Figure 10 shows the same experiment in the case
where the memory is large enough to allow inner relations
for joins to be built entirely in main memory.
With large memory, no partitioning of relations needs
to be done.
For the large memory case, the lines showing the
increasing delay of A and the response time of the
unscrambled query when this delay increases are separated
by 65.03 seconds and Phase 2 starts when A
is delayed by more than 18.95 seconds. Four different
selectivities are represented in this figure.
In contrast to the previous experiment where 50
times more tuples negated the benefits of scrambling,
in this case up to 80 times more tuples can be carried
by the scrambled query plans before the benefits
become close to zero. With a large memory, results
computed by each iteration need only be materialized
and can be consumed as is. In contrast, when the
memory is small, materialized results have to be partitioned
before being consumed. With respect to a
small memory case, not partitioning the relation when
the memory is large reduces the number of I/Os and
allows the scrambled plans to manipulate more tuples
for the same overhead.
The experiments presented in this section have
shown that query scrambling can be an effective technique
that is able to improve the response time of
queries when data are delayed. These improvements
come from the fact that each iteration of a scrambled
query plan can hide the delay of data. The improve-
ment, however, depends on the overhead due to materializations
and created joins.
The improvement that scrambling can bring also
depends on the amount of work done in the original
query. The bigger (i.e., the longer and the more
costly) the original query is, the more improvement
our technique can bring since it will be able to hide
larger delays by computing costly operations. The improvement
also depends on the shape of the query
tree: bushy trees offer more options for scrambling
than deep trees.
With respect to the Figures 9 and 10 presented
above, when many iterations can be done during
Phase 1, the point where Phase 2 starts shifts to the
right. This increases the distance between the Phase 1-
only diagonal line and the response time of the unscrambled
query. In turn, the scrambling algorithm
can handle a wider range of bad selectivities for the
joins it creates during Phase 2.
4 Related Work
In this section we consider related work with respect
to (a) the point in time that optimization decisions are
made (i.e., compile time, query start-up time, or query
run-time); (b) the variables used for dynamic decisions
(i.e. if the response time of a remote source is con-
sidered); (c) the nature of the dynamic optimization
(i.e. if the entire query can be rewritten); and (d) the
basis of the optimization (i.e., cost-based or heuristic
based).
The Volcano optimizer [CG94, Gra93] does dynamic
optimization for distributed query processing.
During optimization, if a cost comparison returns in-
comparable, the choice for that part of the search space
is encoded in a choose-plan operator. At query start
up time, all the incomparable cost comparisons are re-
evaluated. According to the result of the reevaluation,
the choose-plan operator selects a particular query execution
plan. All final decisions regarding query execution
are thus made at query start-up time. Our
work is complementary to the Volcano optimizer since
Volcano does not adapt to changes once the evaluation
of the query has started.
Other work in dynamic query optimization either
does not consider the distributed case [DMP93,
OHMS92] or only optimizes access path selection and
cannot reorder joins [HS93]. Thus, direct considerations
of problems with response times from remote
sources are not accounted for. These articles are, how-
ever, a rich source of optimizations which can be carried
over into our work.
A novel approach to dynamic query optimization
used in Rdb/VMS is described in [Ant93]. In this ap-
proach, multiple different executions of the same logical
operator occur at the same time. They compete
for producing the best execution - when one execution
of an operator is determined to be (probably) better,
the other execution is terminated.
In [DSD95] the response time of queries is improved
by reordering left-deep join trees into bushy join trees.
Several reordering algorithms are presented. This
work assumes that reordering is done entirely at compile
time. This work cannot easily be extended to
handle run-time reordering, since the reorderings are
restricted to occur at certain locations in the join tree.
[ACPS96] tracks the costs of previous calls to remote
sources (in addition to caching the results) and
can use this tracking to estimate the cost of new calls.
As in Volcano, this system optimizes a query both at
query compile and query start-up time, but does not
change the query plan during query run-time.
The research prototype Mermaid [CBTY89] and its
commercial successor InterViso [THMB95] are heterogeneous
distributed databases that perform dynamic
query optimization. Mermaid constructs its query
plan entirely at run-time, thus each step in query optimization
is based on dynamic information such as intermediate
join result sizes and network performance.
Mermaid neither takes advantage of a statically generated
plan nor does it dynamically account for a source
which does not respond at run-time.
The Sage system [Kno95] is an AI planning system
for query optimization for heterogeneous distributed
sources. This system interleaves execution and optimization
and responds to unavailable data sources.
5 Conclusion and Future Work
Query plan scrambling is a novel technique that can
dynamically adjust to changes in the run-time environ-
ment. We presented an algorithm which specifically
deals with variability in performance of remote data
sources and accounts for initial delays in their response
times. The algorithm consists of two phases. Phase 1
changes the scheduling of existing operators produced
as a result of query optimization. Phase 1 is iteratively
applied until no more changes in the scheduling are
possible. At this point, the algorithm enters Phase 2
which creates new operators to further process available
data. New operators are iteratively created until
there is no further work for query plan scrambling to
do.
The performance experiments demonstrated how
the technique hides delays in receiving the initial requested
tuples from remote data sources. We then examined
the sensitivity of the performance of scrambled
plans to the selectivity of the joins created in Phase 2.
This work represents an initial exploration into
the development of flexible systems that dynamically
adapt to the changing properties of the environment.
Among our ongoing and future research plans, we are
developing algorithms that can scramble under different
failure models to handle environments where data
arrives at a bursty rate or at a steady rate that is
significantly slower than expected. We are also studying
the use of partial results which approximate the
final results. We also plan to study the potential
improvement of basing scrambling decisions on cost-based
knowledge.
Finally, query plan scrambling is a promising approach
to addressing many of the concerns addressed
by dynamic query optimization. Adapting the query
plan at run-time to account for the actual costs of
operations could compensate for the often inaccurate
and unreliable estimates used by the query optimizer.
Moreover, it could account for remote sources that do
not export any cost information, which is especially
important when these remote sources run complex
subqueries. Thus, we plan to investigate the use of
scrambling as a complementary approach to dynamic
query optimization.
Acknowledgments
We would like to thank Praveen
Seshadri, Björn Jónsson and Jean-Robert Gruser for
their helpful comments on this work. We would also
like to thank Alon Levy for pointing out related work.
--R
Query Caching and Optimization in Distributed Mediator Systems.
Dynamic Query Optimization in Rdb/VMS.
Distributed Query Processing in a Multiple Database System.
Shoring Up Persistent Applications.
Optimization of Dynamic Query Execution Plans.
Semantic Data Caching and Replacement.
Design and Implementation of the Glue-Nail Database System.
Reducing Multi-database Query Response Time by Tree Balancing.
Performance Tradeoffs for Client-Server Query Processing.
Query Evaluation Techniques for Large Databases.
Optimization of Parallel Query Execution Plans in XPRS.
Query Processing in the ObjectStore Database System.
Tradeoffs in Processing Complex Join Queries via Hashing in Multiprocessor Database Machines.
Modern Database Systems: The Object Model, Interoperability, and Beyond.
Dealing with the Complexity of Federated Database Access.
Scaling Heterogeneous Databases and the Design of DISCO.
--TR
--CTR
Navin Kabra , David J. DeWitt, Efficient mid-query re-optimization of sub-optimal query execution plans, ACM SIGMOD Record, v.27 n.2, p.106-117, June 1998
Henrique Paques , Ling Liu , Calton Pu, Ginga: a self-adaptive query processing system, Proceedings of the eleventh international conference on Information and knowledge management, November 04-09, 2002, McLean, Virginia, USA
Ihab F. Ilyas , Walid G. Aref , Ahmed K. Elmagarmid , Hicham G. Elmongui , Rahul Shah , Jeffrey Scott Vitter, Adaptive rank-aware query optimization in relational databases, ACM Transactions on Database Systems (TODS), v.31 n.4, p.1257-1304, December 2006
Amol Deshpande , Zachary Ives , Vijayshankar Raman, Adaptive query processing, Foundations and Trends in Databases, v.1 n.1, p.1-140, January 2007
Memory-adaptive scheduling for large query execution, Proceedings of the seventh international conference on Information and knowledge management, p.105-115, November 02-07, 1998, Bethesda, Maryland, United States
Avigdor Gal, Obsolescent materialized views in query processing of enterprise information systems, Proceedings of the eighth international conference on Information and knowledge management, p.367-374, November 02-06, 1999, Kansas City, Missouri, United States
Laurent Amsaleg , Michael J. Franklin , Anthony Tomasic, Dynamic Query Operator Scheduling for Wide-Area Remote Access, Distributed and Parallel Databases, v.6 n.3, p.217-246, July 1998
Qiang Zhu , Jaidev Haridas , Wen-Chi Hou, Query optimization via contention space partitioning and cost error controlling for dynamic multidatabase systems, Distributed and Parallel Databases, v.23 n.2, p.151-188, April 2008
Henrique Paques , Ling Liu , Calton Pu, Distributed query adaptation and its trade-offs, Proceedings of the ACM symposium on Applied computing, March 09-12, 2003, Melbourne, Florida
Yongluan Zhou , Beng Chin Ooi , Kian-Lee Tan , Wee Hyong Tok, An adaptable distributed query processing architecture, Data & Knowledge Engineering, v.53 n.3, p.283-309, June 2005
Kain-Lee Tan , Pin Kwang Eng , Beng Chin Ooi , Ming Zhang, Join and multi-join processing in data integration systems, Data & Knowledge Engineering, v.40 n.2, p.217-239, February 2002
Tolga Urhan , Michael J. Franklin , Laurent Amsaleg, Cost-based query scrambling for initial delays, ACM SIGMOD Record, v.27 n.2, p.130-141, June 1998
Jean-Robert Gruser , Louiqa Raschid , Vladimir Zadorozhny , Tao Zhan, Learning response time for WebSources using query feedback and application in query optimization, The VLDB Journal The International Journal on Very Large Data Bases, v.9 n.1, p.18-37, March 2000
Ron Avnur , Joseph M. Hellerstein, Eddies: continuously adaptive query processing, ACM SIGMOD Record, v.29 n.2, p.261-272, June 2000
Yong Yao , Johannes Gehrke, The cougar approach to in-network query processing in sensor networks, ACM SIGMOD Record, v.31 n.3, September 2002
Bertram Ludscher , Pratik Mukhopadhyay , Yannis Papakonstantinou, A transducer-based XML query processor, Proceedings of the 28th international conference on Very Large Data Bases, p.227-238, August 20-23, 2002, Hong Kong, China
Alon Halevy , Anand Rajaraman , Joann Ordille, Data integration: the teenage years, Proceedings of the 32nd international conference on Very large data bases, September 12-15, 2006, Seoul, Korea
Bret Hull , Vladimir Bychkovsky , Yang Zhang , Kevin Chen , Michel Goraczko , Allen Miu , Eugene Shih , Hari Balakrishnan , Samuel Madden, CarTel: a distributed mobile sensor computing system, Proceedings of the 4th international conference on Embedded networked sensor systems, October 31-November 03, 2006, Boulder, Colorado, USA
Anthony Tomasic , Rmy Amouroux , Philippe Bonnet , Olga Kapitskaia , Hubert Naacke , Louiqa Raschid, The distributed information search component (Disco) and the World Wide Web, ACM SIGMOD Record, v.26 n.2, p.546-548, June 1997 | run time query plan modification techniques;highly variable response times;query optimization;wide area data access;remote sources;missing data;query plan scrambling;unexpected delays;data access;initial requested tuples;widely distributed sources;query processing |
383210 | Building regression cost models for multidatabase systems. | A major challenge for performing global query optimization in a multidatabase system (MDBS) is the lack of cost models for local database systems at the global level. In this paper we present a statistical procedure based on multiple regression analysis for building cost models for local database systems in an MDBS. Explanatory variables that can be included in a regression model are identified and a mixed forward and backward method for selecting significant explanatory variables is presented. Measures for developing useful regression cost models, such as removing outliers, eliminating multicollinearity, validating regression model assumptions, and checking significance of regression models, are discussed. Experimental results demonstrate that the presented statistical procedure can develop useful local cost models in an MDBS. | Introduction
A multidatabase system (MDBS) integrates information
from pre-existing local databases managed by
heterogeneous database systems (DBS) such as OR-
ACLE, DB2 and EMPRESS. A key feature of an
MDBS is the local autonomy that each local database
retains to manage its data and serve its existing ap-
plications. An MDBS can only interact with a local
DBS at its external user interface.
A user can issue a global query on an MDBS to
retrieve data from several local databases. The user
does not need to know where the data is stored and
how the result is obtained. How to efficiently process
Research supported by IBM Toronto Laboratory and Natural
Sciences and Engineering Research Council (NSERC) of
Canada
† Current address: Microsoft Corporation, One Microsoft
Way, Redmond, WA 98052-6399, palarson@microsoft.com
such a global query is the task of global query optimization.
There are a number of new challenges for query
optimization in an MDBS, caused primarily by local
autonomy. Among these challenges, a crucial one is
that local information needed for global query optim-
ization, such as local cost formulas (models), typically
are not available at the global level. To perform
global query optimization, methods to derive approximate
cost models for an autonomous local DBS are
required.
This issue has attracted a number of researchers re-
cently. In [3], Du et al. proposed a calibration method
to deduce necessary local cost parameters. The idea is
to construct a special local synthetic calibrating data-base
and then run a set of special queries against this
database. Cost metrics for the queries are used to deduce
the coefficients in the cost formulas for the access
methods supported by the underlying local database
system. In [14], Zhu and Larson presented a query
sampling method to tackle this issue. The idea of
this method will be reviewed below. In [15, 16], Zhu
and Larson proposed a fuzzy optimization method to
solve the problem. The idea is to build a fuzzy cost
model based on experts' knowledge and experience about local DBSs and perform query optimization based on the fuzzy cost model. In [6, 13], Lu et al. discussed issues for employing dynamic (adaptive)
query optimization techniques based on information
available at run time in an MDBS.
The idea of the query sampling method that we proposed
in [14] is as follows. The first step is to group
all possible queries for a local database 1 into more
homogeneous classes so that the costs of queries in
each class can be estimated by the same formula. This
can be done by classifying queries according to their
potential access methods. For example, unary queries
whose qualifications have at least one conjunctive term of the form R.a = C, where R.a
We assume that each local DBS has an MDBS agent that
provides a uniform relational interface to the MDBS global
server. Hence all local DBSs can be viewed as relational ones.
is an indexed column in
table R, can be put in one class because they are usually
executed by using an index scan in a local DBS
and, therefore, follow the same performance pattern.
Several such unary and join query 3 classes can be ob-
tained. The second step of the query sampling method
is to draw a sample of queries from each query class.
A mixture of judgment sampling and simple random
sampling is adopted in this step. The sample queries
are then performed against the relevant local database
and their costs are recorded. The costs are used to
derive a cost formula for the queries in the query class
by multiple regression. The coefficients of the cost formulas
for the local database system are kept in the
multidatabase catalog and retrieved during query op-
timization. To estimate the cost of a query, the query
class to which the query belongs needs to be identified
first, and the corresponding cost formula is then used
to give an estimate for the cost of the query.
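As a small illustration of this last step, the fitted coefficients could be kept in the multidatabase catalog and applied as sketched below. The class name, coefficient values, and catalog layout are hypothetical; the explanatory variables follow the notation introduced in Section 4.

```python
# Hypothetical multidatabase catalog: one fitted cost formula per query class
# of a local database.  The coefficient values below are made up.
catalog = {
    ("local_db_1", "unary_index_scan"): {"B0": 0.05, "B1": 0.0002,
                                         "B2": 0.001, "B3": 0.0004},
}

def estimate_cost(db, query_class, N_U, S_U1, S_U):
    """Estimate the cost of a unary query from its class's fitted formula."""
    c = catalog[(db, query_class)]
    TN_U = S_U1 * N_U            # cardinality of the intermediate result
    RN_U = S_U * N_U             # cardinality of the result table
    return c["B0"] + c["B1"] * N_U + c["B2"] * TN_U + c["B3"] * RN_U

# Example: a query on a 100,000-tuple table with selectivities 0.02 and 0.01.
print(estimate_cost("local_db_1", "unary_index_scan", 100_000, 0.02, 0.01))
```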
Although a number of sampling techniques
have been applied to query optimization in the
literature [5; 8; 11] , all of them perform data sampling
(i.e., sampling data from databases) instead of query
sampling (i.e., sampling queries from a query class).
The query sampling method overcomes several short-comings
of Du et al.'s calibration method [14] .
However, the statistical procedure for deriving cost
estimation formulas in [14] was oversimplified. In this
paper, an improved statistical procedure is presented.
The formulas are automatically determined based on
observed sampling costs. More explanatory variables
in a formula are considered. A series of measures for
ensuring useful formulas are adopted.
The rest of this paper is organized as follows. Section
2 reviews the general linear regression model and
the related terminology. Section 3 identifies potential
explanatory variables for a regression cost model. Section
4 discusses how to determine a cost model for a
query class. Section 5 discusses the measures used to
ensure that the developed cost models are useful. Section
6 presents some experimental results. Section 7
summarizes the conclusions.
We assume that the qualification has been converted to conjunctive
normal form.
3 A select that may or may not be followed by a project is
called a unary query. A (2-way) join that may or may not be
followed by a project is called a join query. Only unary and join
queries are considered in this paper since most common queries
can be expressed by a sequence of such queries.
2 Multiple Linear Regression Model
Multiple regression allows us to establish a statistical
relationship between the costs of queries and the
relevant contributing (explanatory) variables. Such a
statistical relationship can be used as a cost estimation
formula for queries in a query class.
Let X_1, X_2, ..., X_{p-1} be p-1 explanatory variables.
They do not have to represent different independent variables. It is allowed, for example, that one explanatory variable is a function of others, such as X_3 = X_1 · X_2. The response (dependent) variable Y tends to vary in a systematic way with the explanatory variables X's. If the systematic way is a statistical linear relationship between Y and X's, which we assume is true in our application, a multiple linear regression model is defined as

    Y_i = β_0 + β_1 X_{i1} + β_2 X_{i2} + ... + β_{p-1} X_{i,p-1} + ε_i ,   i = 1, 2, ..., n

where X_{ij} denotes the value of the j-th explanatory variable X_j in the i-th trial; Y_i is the i-th dependent random variable corresponding to X_{i1}, ..., X_{i,p-1}; ε_i denotes the random error; β_0, β_1, ..., β_{p-1} are regression coefficients. The following assumptions are usually made in regression analysis: β_0, ..., β_{p-1} are unknown constants, and the X_{ij}'s are known values.
1. Any two ε_i and ε_j (i ≠ j) are uncorrelated.
2. The expected value of every ε_i is 0, i.e., E(ε_i) = 0, and the variance of ε_i is a constant σ^2, for all i.
3. Every ε_i is normally distributed.
For n sample observations, we can get the values of Y_i and X_{i1}, ..., X_{i,p-1} (i = 1, 2, ..., n). Applying the method of least squares, we can find the values b_0, b_1, ..., b_{p-1} of β_0, β_1, ..., β_{p-1} that minimize

    Σ_{i=1}^{n} ( Y_i − β_0 − β_1 X_{i1} − ... − β_{p-1} X_{i,p-1} )^2 .

The equation

    Ŷ = b_0 + b_1 X_1 + b_2 X_2 + ... + b_{p-1} X_{p-1}            (1)

is called a fitted regression equation. For a given set of values of X's, (1) gives a fitted value Ŷ for the response variable Y. If we use a fitted regression equation as an estimation formula for Y, a fitted value is an estimated value for Y corresponding to the given X's.
To evaluate the goodness of estimates obtained by using the developed regression model, the variance σ^2 of the error terms is usually estimated. A point estimate s^2 of σ^2 is given by the following formula:

    s^2 = Σ_{i=1}^{n} e_i^2 / (n − p)            (2)

where Y_i is an observed value; Ŷ_i is the corresponding fitted value; and e_i = Y_i − Ŷ_i. The square root of s^2, i.e., s, is called the standard error of estimation. It is an indication of the accuracy of estimation. The smaller s is, the better the estimation formula.
Using s, the i-th standardized residual is defined as follows:

    e_i^* = e_i / s .
A plot of (standardized) residuals against the fitted
values or the values of an explanatory variable is called
a residual plot.
In addition to s, another descriptive measure used to judge the goodness of a developed model is the coefficient of multiple determination R^2, which is defined as:

    R^2 = 1 − [ Σ_{i=1}^{n} ( Y_i − Ŷ_i )^2 ] / [ Σ_{i=1}^{n} ( Y_i − Ȳ )^2 ]

where Ȳ is the mean of the observed Y_i's. R^2 is the proportion of variability in the response variable Y explained by the explanatory variables X's. The larger R^2 is, the better the estimation formula.
The standard error of estimation measures the absolute
accuracy of estimation, while the coefficient of
multiple determination measures the relative strength
of the linear relationship between the response variable
Y and the explanatory variables X's. A low standard
error of estimation s and a high coefficient of multiple
determination R 2 are evidence of a good regression
model.
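The quantities defined in this section can be computed with ordinary least squares, for example as in the following NumPy sketch (an illustration only, not the statistical package actually used).

```python
import numpy as np

def fit_cost_model(X, y):
    """Fit Y = b0 + b1*X1 + ... + b_{p-1}*X_{p-1} by least squares and return
    the coefficients, the standard error of estimation s, and R^2.

    X is an (n, p-1) array of explanatory-variable values and y an (n,) array
    of observed query costs."""
    n, k = X.shape
    A = np.column_stack([np.ones(n), X])          # add the intercept column
    b, *_ = np.linalg.lstsq(A, y, rcond=None)     # b = (b0, ..., b_{p-1})
    y_hat = A @ b                                  # fitted values
    e = y - y_hat                                  # residuals e_i
    p = k + 1                                      # number of coefficients
    s2 = np.sum(e**2) / (n - p)                    # point estimate of sigma^2
    r2 = 1.0 - np.sum(e**2) / np.sum((y - y.mean())**2)
    return b, np.sqrt(s2), r2

# Standardized residuals, used for residual plots and outlier checks:
# e_star = e / s
```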
3 Explanatory Variables
In our application, the response variable Y represents
query cost, while the explanatory variables X's
represent the factors that affect query cost. It is not
difficult to see that the following types of factors usually
affect the cost of a query:
1. The cardinality of an operand table. The higher
the cardinality of an operand table is, the higher
the query (execution) cost. This is because the
number of I/O's required to scan the operand
table or its index(es) usually increases with the
cardinality of the table.
2. The cardinality of the result table. A large result
table implies that many tuples need to be
processed, buffered, stored and transferred during
query processing. Hence, the larger the result
table is, the higher the corresponding query
cost. Note that the cardinality of the result table
is determined by the selectivity of the query. This
factor can hence be considered as the same as the
selectivity of a query.
3. The size of an intermediate result. For a join
query, if its qualification contains one or more conjunctive
terms that refer to only one of its operand
tables, called separable conjunctive terms, they
can be used to reduce the relevant operand table
before further processing is performed. The smaller
the size of such an intermediate table is, the
more efficient the query processing would be. For
a unary query, if it can be executed by an index
scan method, the query processing can be viewed
as having two stages: the first stage is to retrieve
the tuples via an index(es), the second stage is
to check the retrieved tuples against the remaining
conditions in the qualification. The number of
tuples that are retrieved in the first stage can be
considered as the size of the intermediate result
for such a unary query.
4. The tuple length of an operand table. This factor
affects data buffering and transferring cost during
query processing. However, this factor is usually
not as important as the above factors. It becomes
important when the tuple lengths of tables in a
database vary widely; for example, when multi-media
data is stored in the tables.
5. The tuple length of the result table. Similar to
the above factor, this factor affects data buffering
and transferring cost, but it is not as important
as the first three types of factors. It may become
important when it varies significantly from one
query to another, compared with other factors.
6. The physical sizes (i.e., the numbers of used disk
blocks) of operand tables and result tables. Although
factors of this type are obviously controlled
by factors of types 1, 2, 4 and 5, they may
reflect additional information, such as the percentage
assigned to an operand table (or
a result table) and a combined effect of the previous
factors.
7. Contention in the system environment. Factors
of this type include contention for CPU, I/O,
buffers, data items, and servers, etc. Obviously,
these factors affect the performance of a query.
However, they are difficult to measure. The number
of concurrent processes, the memory resident
set sizes (RSS) of processes, and some other information
about processes that we could obtain
can only reflect part of all contention factors. This
is why contention factors are usually omitted from
existing cost models.
8. The characteristics of an index, such as index
clustering ratio, the height and number of leaves
of an index tree, the number of distinct values of
an indexed column, and so on. If all tuples with
the same index key value are physically stored to-
gether, the index is called a clustered index,
which has the highest index clustering ratio. For
a referenced index, how the tuples with the same
index key value are scattered in the physical storage
has an obvious effect on the performance of a
query. Other properties of an index, such as the
height of the index tree and the number of distinct
values, also affect the performance of a query.
The variables representing the above factors are the
possible explanatory variables to be included in a cost
formula.
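For each sample query, the measurable factors above (types 1 through 6) can be recorded in a structure like the following; contention and index-implementation factors (types 7 and 8) are omitted because they are not visible at the global level. The field names are illustrative only.

```python
from dataclasses import dataclass

@dataclass
class UnaryQueryObservation:
    """One sample query run against a local database (unary-query case)."""
    operand_cardinality: int        # type 1: N_U
    result_cardinality: int         # type 2: determined by the selectivity S_U
    intermediate_cardinality: int   # type 3: tuples retrieved in the first stage
    operand_tuple_length: int       # type 4: L_U (bytes)
    result_tuple_length: int        # type 5: RL_U (bytes)
    operand_table_length: int       # type 6: estimated physical size of the operand table
    result_table_length: int        # type 6: estimated physical size of the result table
    observed_cost: float            # measured cost of the sample query
```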
4 Regression Cost Models
4.1 Variables Inclusion Principle
In general, not all explanatory variables in the last
section are necessary in a cost model. Some variables
may not be significant for a particular model, while
some other variables may not be available at the global
level in an MDBS. Our general principle for including
variables in a cost model is to include important variables
and omit insignificant or unavailable variables.
Among the factors discussed in Section 3, the first
three types of factors are often more important. The
variables representing them are usually included in a
cost model. Factors of types 4 and 5 are less important
since their variances are relatively small. Their representing
variables are included in a cost model only if
they are significant. Variables representing factors of
type 6 are included in a cost model if they are not
dominated by other included variables. Variables representing
the last two types of factors will be omitted
from our cost models because they are usually not
available at the global level in an MDBS. In fact, we
assume that contention factors in a considered environment
are approximately stable. Under this assump-
tion, the contention factors are not very important in a
cost model. The variables representing the characteristics
of referenced indexes 4 can possibly be included
in a cost model if they are available and significant.
How to apply this variable inclusion principle to
develop a cost model for a query class will be discussed
in more details in the following subsection. Let us first
give some notations for the variables.
Let R_U be the operand table for a unary query; R_J1 and R_J2 be the two operand tables for a join query; N_U, N_J1 and N_J2 be the cardinalities of R_U, R_J1 and R_J2, respectively; L_U, L_J1 and L_J2 be the tuple lengths of R_U, R_J1 and R_J2, respectively; and RL_U and RL_J be the tuple lengths of the result tables for the unary query and the join query, respectively. Let S_U and S_J be the selectivities of the unary query and the join query, respectively; S_J1 and S_J2 be the selectivities of the conjunctions of all separable conjunctive terms for R_J1 and R_J2, respectively; and S_U1 be the selectivity of a conjunctive term that is used to scan the operand table via an index, if applicable, of the unary query.
4.2 Regression Models for Unary Query
Based on the inclusion principle, we divide a regression
model for a unary query class into two parts: a basic model and a secondary part.
The basic model is the essential part of the regression
model, while the secondary part is used to improve the
model.
The set V_UB of potential explanatory variables to be included in the basic model contains the variables representing factors of types 1 - 3. By the definition of a selectivity, TN_U = N_U · S_U1 and RN_U = N_U · S_U are the cardinalities of the intermediate table and the result table for a unary query, respectively. Therefore, V_UB = { N_U, TN_U, RN_U }.
If all potential explanatory variables in V_UB are chosen, the full basic model is
Y = B_0 + B_1 · N_U + B_2 · TN_U + B_3 · RN_U .    (3)
As will be discussed later, some potential variables may be insignificant for a given query class and are therefore not included in the basic model.
4 Only local catalog information, such as the presence of an
index for a column, is assumed to be available at the global level.
Local implementation information, such as index tree structures
and index clustering ratio, is not available.
The basic model captures the major performance
behavior of queries in a query class. In fact, the basic
model is based on some existing cost models [4; 10] for
a DBMS. The parameters B_0, B_1, B_2 and B_3 in (3) can be interpreted as the initialization cost, the cost of retrieving a tuple from the operand table, the cost of an index look-up and the cost of processing a result tuple, respectively. In a traditional cost model, a parameter may be split up into several parts (e.g., B_1 may consist of an I/O cost and a CPU cost) and can be determined by
analyzing the implementation details of the employed
access method. However, in an MDBS, the implementation
details of access methods are usually not known
to the global query optimizer. The parameters are,
therefore, estimated by multiple regression based on
sample queries instead of an analytical method.
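As a concrete illustration (not part of the original paper), the following Python sketch fits the reconstructed basic unary model Y = B_0 + B_1 · N_U + B_2 · TN_U + B_3 · RN_U by multiple regression; the numeric sample data and variable names are hypothetical stand-ins for the observed costs of sample queries.

import numpy as np

# Observed data for sample queries: operand cardinality N_U, intermediate
# cardinality TN_U = N_U * S_U1, result cardinality RN_U = N_U * S_U,
# and the observed elapsed cost Y (seconds). All values are hypothetical.
N_U  = np.array([1000, 5000, 10000, 20000, 50000], dtype=float)
TN_U = np.array([ 100,  600,  1500,  2500,  8000], dtype=float)
RN_U = np.array([  50,  300,   900,  1200,  4000], dtype=float)
Y    = np.array([0.21, 0.95,  2.10,  3.90, 10.20])

# Design matrix with a column of ones for the intercept B_0.
X = np.column_stack([np.ones_like(N_U), N_U, TN_U, RN_U])

# Ordinary least squares estimates of B_0..B_3.
B, _, _, _ = np.linalg.lstsq(X, Y, rcond=None)

# Standard error of estimation s (residual standard deviation), with
# n observations and p estimated coefficients.
resid = Y - X @ B
n, p = X.shape
s = np.sqrt(np.sum(resid**2) / (n - p))
print("estimated coefficients B0..B3:", B)
print("standard error of estimation s:", s)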
To further improve the basic model, some secondary
explanatory variables may be included into the model.
The set VUS of potential explanatory variables for the
secondary part of a model contains the variables representing
factors of types 4 - 6. The real physical sizes of the operand table and result table of a unary query may not be known exactly in an MDBS. However, they can be estimated by Z_U = N_U · L_U and RZ_U = RN_U · RL_U, respectively 5. We call Z_U and RZ_U the operand table length and result table length, respectively. Therefore, V_US = { Z_U, RZ_U }. Any other variables, if available, could also be included in V_US.
If all potential variables in V_US are added to (3), the full regression model is
Y = B_0 + B_1 · N_U + B_2 · TN_U + B_3 · RN_U + B_4 · Z_U + B_5 · RZ_U .
Note that, for some query class, a variable might
appear in its regression model in another form. For ex-
ample, if the access method for a query class sorts the
operand table of a query based on a column(s) before
further processing, some terms like N U log NU and/or
log NU could be included in its regression model. Let
a new variable represent such a term. This new variable
may replace an existing variable in V_UB or be an additional secondary variable in V_US. A regression
model can be adjusted according to available
information about the relevant access method.
5 The physical size of an operand table can be more accurately estimated by (N_U · L_U) · d_1 + d_2, where the constants d_1 and d_2 reflect some overhead such as page overhead and free space. Since the constants d_1 and d_2 are applied to all sample data, they can be omitted. Estimating the physical size of a result table is similar.
4.3 Regression Models for Join Query
Similarly, the regression model for a join query class
consists of a basic model plus a possible secondary
part.
The set V JB of potential explanatory variables for
the basic model contains the variables representing
factors of types 1 - 3. By definition, RN_J = N_J1 · N_J2 · S_J is the cardinality of the result table for a join query; TN_Ji = N_Ji · S_Ji (i = 1, 2) is the size of the intermediate table obtained by performing the conjunction of all separable conjunctive terms on R_Ji; and TN_J1 · TN_J2 is the size of the Cartesian product of the intermediate tables. Therefore, V_JB = { N_J1, N_J2, TN_J1, TN_J2, TN_J1 · TN_J2, RN_J }.
If all potential explanatory variables in V_JB are selected, the full basic model is
Y = B_0 + B_1 · N_J1 + B_2 · N_J2 + B_3 · TN_J1 + B_4 · TN_J2 + B_5 · TN_J1 · TN_J2 + B_6 · RN_J .    (4)
Similar to a unary query class, the basic model is
based on some existing cost models for a DBMS. The
parameters B_0, B_1, ..., B_6 in (4) can be interpreted as the initialization cost, the cost of pre-processing
a tuple in the first operand table, the cost
of pre-processing a tuple in the second operand table,
the cost of retrieving a tuple from the first intermediate
table, the cost of retrieving a tuple from the second
intermediate table, the cost of processing a tuple in the
Cartesian product of the two intermediate tables and
the cost of processing a result tuple, respectively.
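To illustrate how the explanatory variables of model (4) could be assembled in practice, here is a short Python sketch (our own example, with hypothetical values) that builds one design-matrix row per sample join query, including the interaction term TN_J1 · TN_J2 for the Cartesian product of the intermediate tables.

import numpy as np

# Hypothetical cardinalities and selectivities for two sample join queries.
N_J1, N_J2 = np.array([2000., 8000.]), np.array([1000., 4000.])
S_J1, S_J2 = np.array([0.1, 0.05]), np.array([0.2, 0.1])
S_J = np.array([1e-4, 2e-5])

TN_J1 = N_J1 * S_J1                 # intermediate table of R_J1
TN_J2 = N_J2 * S_J2                 # intermediate table of R_J2
RN_J  = N_J1 * N_J2 * S_J           # result table cardinality

# One row per sample join query: [1, N_J1, N_J2, TN_J1, TN_J2, TN_J1*TN_J2, RN_J]
X_join = np.column_stack([np.ones_like(N_J1), N_J1, N_J2,
                          TN_J1, TN_J2, TN_J1 * TN_J2, RN_J])
print(X_join)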
The basic model may be further improved by including
some additional beneficial variables. The set
V JS of potential explanatory variables for the secondary
part of a model contains the variables representing
factors of types 4 - 6. Similar to unary queries,
the physical size of a table is estimated by the table length. In other words, the physical sizes of the first operand table, the second operand table and the result table are estimated by the variables Z_J1 = N_J1 · L_J1, Z_J2 = N_J2 · L_J2 and RZ_J = RN_J · RL_J, respectively. Therefore, V_JS = { Z_J1, Z_J2, RZ_J }. Any other useful variables, if available, could also be included in V_JS.
If all potential explanatory variables in V_JS are added to (4), the full regression model is
Y = B_0 + B_1 · N_J1 + B_2 · N_J2 + B_3 · TN_J1 + B_4 · TN_J2 + B_5 · TN_J1 · TN_J2 + B_6 · RN_J + B_7 · Z_J1 + B_8 · Z_J2 + B_9 · RZ_J .
Similar to a unary query class, all variables in V JB
and V JS may not be necessary for a join query class.
A procedure to choose significant variables in a model
will be described in the following subsection. In addi-
tion, some additional variables may be included, and
some variables could be included in another form.
4.4 Selection of Variables for Regression
Models
To determine the variables for inclusion in a regression
model, one approach is to evaluate all possible
subset models and choose the best one(s) among them
according to some criterion. However, evaluating all
possible models may not be practically feasible when
the number of variables is large.
To reduce the amount of computation, two types of
selection procedures have been proposed [2] : the forward
selection procedure and the backward elimination
procedure. The forward selection procedure starts
with a model containing no variables, i.e., only a constant
term, and introduces explanatory variables into
the regression model one at a time. The backward
elimination procedure starts with the full model and
successively drops one explanatory variable at a time.
Both procedures need a criterion for selecting the next
explanatory variable to be included in or removed from
the model and a condition for stopping the procedure.
With k variables, these procedures will involve evaluation
of at most (k + 1) models, as contrasted with the evaluation of 2^k models necessary for examining
all possible models.
To select a suitable regression model for a query
class, we use a mixed forward and backward procedure
described below (see Figure 1).
Figure 1: Selection of Variables for Regression Model (the full basic model is the start point; backward elimination prunes it, and forward selection then adds variables from the secondary part)
We start with the full basic model (3) or (4) for the query class and apply the
backward elimination procedure to drop some insignificant
terms (explanatory variables) from the model.
We then apply the forward selection procedure to find
additional significant explanatory variables from the
set (VUS or V JS ) of secondary explanatory variables
for the query class.
The next explanatory variable X to be removed
from the basic model during the first backward stage
is the one that (1) has the smallest simple correlation
coefficient 6 with the response variable Y and (2) makes
the reduced model (i.e., the model after X is removed)
have a smaller standard error of estimation than the
original model or the two standard errors of estimation
very close to each other, for instance, within 1%
relative error. If the next explanatory variable satisfying
(1) does not satisfy (2), or there are no more explanatory
variables, the backward elimination procedure
stops. Condition (1) chooses the variable which
usually contributes the least among other variables in
predicting Y . Condition (2) guarantees that removing
the chosen variable results in an improved model or
affects the model only very little. Removing the variables
that affect the model very little can reduce the
complexity and maintenance overhead of the model.
The next explanatory variable X to be added into
the current model during the second forward stage is
the one that (a) is in the set of secondary explanatory
variables; (b) has the largest simple correlation coefficient
with the response variable Y that has been adjusted
for the effect of the current model (i.e., the largest
simple correlation coefficient with the residuals of the
current model); and (c) makes the augmented model
(i.e., the model that includes X) have a smaller standard
error of estimation than the current model and
the two standard errors of estimation not very close
to each other, for instance, greater than 1% relative
error. If the next explanatory variable satisfying (a)
and (b) does not satisfy (c), or no more explanatory
variable exists, the forward selection procedure stops.
The reasons for using conditions (a) - (c) are similar
to the situation for removing a variable. In particular,
a variable is not added into the model unless it improves
the standard error of estimation significantly in
order to reduce the complexity of the model.
A description of the whole mixed forward and backward
procedure is given below.
Algorithm 4.1 : Select Explanatory Variables for
a Regression Model
Input: the set VB of basic explanatory variables;
       the set VS of secondary explanatory variables;
       observed data of sample queries for a given query class.
Output: a regression model with selected explanatory variables.
1.  begin
2.    Use observed data to fit the full basic model for the query class;
3.    Calculate the standard error of estimation s;
4.    for each variable X in VB do
5.      Calculate the simple correlation coefficient between X and the response variable Y
6.    end;
7.    backward := 'true';
8.    while backward = 'true' and VB ≠ ∅ do
9.      Let X0 be the explanatory variable in VB with the smallest simple correlation coefficient;
10.     VB := VB - {X0};
11.     Use the observed data to fit the reduced model with X0 removed;
12.     Calculate the standard error of estimation s' for the reduced model;
13.     if s' < s or |s' - s|/s is very small then
14.     begin
15.       Set the reduced model as the current model;
16.       s := s'
17.     end
18.     else backward := 'false'
19.   end;
20.   forward := 'true';
21.   while forward = 'true' and VS ≠ ∅ do
22.     for each X in VS do
23.       Calculate the simple correlation coefficient between X and the residuals of the current model
24.     end;
25.     Let X0 be the variable in VS with the largest simple correlation coefficient;
26.     Use the observed data to fit the augmented model with X0 added;
27.     Calculate the standard error of estimation s' for the augmented model;
28.     if s' < s and |s - s'|/s is not very small then
29.     begin
30.       Set the augmented model as the current model;
31.       s := s';
32.       VS := VS - {X0}
33.     end
34.     else forward := 'false'
35.   end;
36.   Return the current model as the regression model
37. end.
6 The simple correlation coefficient of two variables indicates the degree of the linear relationship between the two variables.
Since we start with the basic model, which has a
high possibility to be the appropriate model for the
given query class, the backward elimination and forward
selection will most likely stop soon after they
are initiated. Therefore, our procedure is likely more
efficient than a pure forward or backward procedure.
However, in the worst case, the above procedure will
still check all potential explanatory
variables, which is the same as a pure forward or backward
procedure.
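The following Python sketch is a compact, assumption-laden illustration of Algorithm 4.1: it uses ordinary least squares fits, absolute simple correlation coefficients, and an illustrative 1% closeness threshold (eps); the helper names fit, corr and select_model are invented for this sketch and are not part of the paper.

import numpy as np

def fit(cols, Y):
    # Ordinary least squares fit of Y on an intercept plus the given columns;
    # returns coefficients, residuals and the standard error of estimation.
    X = np.column_stack([np.ones(len(Y))] + list(cols))
    B, *_ = np.linalg.lstsq(X, Y, rcond=None)
    resid = Y - X @ B
    s = np.sqrt(np.sum(resid ** 2) / max(len(Y) - X.shape[1], 1))
    return B, resid, s

def corr(a, b):
    # Absolute simple correlation coefficient of two variables.
    return abs(np.corrcoef(a, b)[0, 1])

def select_model(basic, secondary, Y, eps=0.01):
    # basic, secondary: dicts mapping variable names to observation columns.
    allvars = {**basic, **secondary}
    names = list(basic)                       # start from the full basic model
    _, _, s = fit([allvars[v] for v in names], Y)
    # Backward elimination: drop the variable least correlated with Y while the
    # standard error of estimation improves or changes only very little.
    while names:
        x0 = min(names, key=lambda v: corr(allvars[v], Y))
        trial = [v for v in names if v != x0]
        _, _, s2 = fit([allvars[v] for v in trial], Y)
        if s2 < s or abs(s2 - s) / s <= eps:
            names, s = trial, s2
        else:
            break
    # Forward selection: add the secondary variable most correlated with the
    # current residuals while it improves s by more than eps.
    pool = set(secondary)
    _, resid, s = fit([allvars[v] for v in names], Y)
    while pool:
        x0 = max(pool, key=lambda v: corr(allvars[v], resid))
        _, resid2, s2 = fit([allvars[v] for v in names + [x0]], Y)
        if s2 < s and (s - s2) / s > eps:
            names.append(x0); pool.remove(x0); s, resid = s2, resid2
        else:
            break
    return names, s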
5 Measures Ensuring Useful Models
To develop a useful regression model, measures
need to be taken during the analysis. Furthermore,
a developed regression model should be verified before
it is used. Improvements may be needed if the
model proves not acceptable. In this section, based on
the characteristics of the cost models for query optim-
ization, we identify the appropriate statistical methods
and apply them to ensure the significance of our developed
cost models.
5.1 Outliers
Outliers are extreme observations. In a residual
plot, outliers are the points that lie far beyond the
scatter of the majority of points. Under the method of
least squares, a fitted equation may be pulled disproportionately
towards an outlying observation because
the sum of the squared deviations is minimized.
There are two possibilities for the existence of out-
liers. Frequently, an outlier results from a mistake or
other extraneous causes. In our application, it may be
caused by an abnormal situation in the system during
the execution of a sample query. In this case, the
outlier should be discarded. Sometimes, however, an
outlier may convey significant information. For ex-
ample, in our application, an outlier may indicate that
the underlying DBMS uses a special strategy to process
the relevant sample query, which is different from
the one used for other queries. Since outliers represent
a few extreme cases and our objective is to derive
a cost estimation formula that is good for the majority
of queries in a query class, we simply discard the
outliers and use the remaining observations to derive
a cost formula.
In a (standardized) residual plot, an outlier is usually
four or more standard deviations from zero [7] .
Therefore, an observation whose residual exceeds a
certain number of standard deviations D, such as D = 4, can be considered as an outlier and be re-
moved. The residuals of query observations used here
are calculated based on the full basic model since such
a model usually captures the major behavior of the
final model.
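A possible implementation of this screening step, assuming residuals of the full basic model and the threshold D = 4 mentioned above, is sketched below in Python; the function name remove_outliers is our own.

import numpy as np

def remove_outliers(X, Y, D=4.0):
    # Fit the full basic model by least squares, standardize the residuals, and
    # drop observations lying more than D standard deviations from zero.
    B, *_ = np.linalg.lstsq(X, Y, rcond=None)
    resid = Y - X @ B
    std = resid.std(ddof=X.shape[1])          # residual standard deviation
    keep = np.abs(resid / std) < D
    return X[keep], Y[keep], keep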
5.2 Multicollinearity
When the explanatory variables are highly correlated
among themselves, multicollinearity among them
is said to exist. The presence of multicollinearity does
not, in general, inhibit our ability to obtain a good fit
nor does it tend to affect predictions of new observa-
tions, provided these predictions are made within the
region of observations. However, the estimated regression
coefficients tend to have large sampling variabil-
ity. To make reasonable predictions beyond the region
of observations and obtain more precise information
about the true regression coefficients, it is better to
avoid multicollinearity among explanatory variables.
A method to detect the presence of multicollinearity
that is widely used is by means of variance inflation
factors. These factors measure how much the variances
of the estimated regression coefficients are inflated as
compared to when the independent variables are not
linearly related. If R_j^2 is the coefficient of total determination that results when the explanatory variable X_j is regressed against all the other explanatory variables, the variance inflation factor for X_j is defined as
VIF(X_j) = 1 / (1 - R_j^2).
It is clear that if X_j has a strong linear relationship with the other explanatory variables, R_j^2 is close to 1 and VIF(X_j) is large.
To avoid multicollinearity, we use the reciprocal of
a variance inflation factor to detect instances where
an explanatory variable should not be allowed into the
fitted regression model because of excessively high interdependence
between this variable and other explanatory
variables in the model.
More specifically, the set VB of basic explanatory
variables used by Algorithm 4.1 is formed as follows.
At the beginning, VB only contains the basic explanatory
variable which has the highest simple correlation
coefficient with the response variable Y . Then the variable
which has the next highest simple correlation
coefficient with Y is entered into VB if 1/VIF(X_j) is
not too small. This procedure continues until all possible
basic explanatory variables are considered. Sim-
ilarly, when Algorithm 4.1 selects additional beneficial
variables from V_S for the model, any variable X_j whose 1/VIF(X_j) is too small is skipped.
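The Python sketch below shows one way such a reciprocal-VIF screen could be coded; the threshold min_recip_vif and the function names are illustrative assumptions rather than values prescribed here.

import numpy as np

def vif(candidate, others):
    # VIF(X_j) = 1 / (1 - R_j^2), where R_j^2 comes from regressing the
    # candidate column on the already-admitted columns.
    if not others:
        return 1.0
    X = np.column_stack([np.ones(len(candidate))] + list(others))
    B, *_ = np.linalg.lstsq(X, candidate, rcond=None)
    resid = candidate - X @ B
    ss_tot = np.sum((candidate - candidate.mean()) ** 2)
    r2 = 1.0 - np.sum(resid ** 2) / ss_tot
    return 1.0 / max(1.0 - r2, 1e-12)

def admit_variables(ranked_vars, min_recip_vif=0.1):
    # ranked_vars: list of (name, column) ordered by |correlation with Y|.
    chosen = []
    for name, col in ranked_vars:
        if 1.0 / vif(col, [c for _, c in chosen]) >= min_recip_vif:
            chosen.append((name, col))
    return chosen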
5.3 Validation of Model Assumptions
Usually, three assumptions of a regression model (1)
need to be checked: 1. uncorrelation of error terms; 2.
equal variance of error terms; and 3. normal distribution
of error terms.
Note that the dependent random variables Y i 's
should satisfy the same assumptions as their error
terms since the X i;j 's in (1) are known values. In gen-
eral, regression analysis is not seriously affected by
slight to moderate departures from the assumptions.
The assumptions can be ranked in terms of the seriousness of the failure of the assumption to hold, from the most serious to the least serious, as follows: assumption 1, assumption 2, and assumption 3.
For our application, the observed costs of repeated
executions of a sample query have no inherent relationship
with the observed costs of repeated executions
of another sample query under the assumption
that the contention factors in the system are approximately
stable. Hence the first assumption should be
satisfied. This is a good property because the violation
of assumption 1 is the most serious to a regression
model.
However, the variance of the observed costs of repeated
executions of a sample query may increase with
the level (magnitude) of query cost. This is because
the execution of a sample query with longer time (lar-
ger cost) may suffer more disturbances in the system
than the execution of a sample query with shorter time.
Thus assumption 2 may be violated in our regression
models. Furthermore, the observed costs of repeated
executions of a sample query may not follow
the normal distribution; i.e., assumption 3 may not
hold either. The observed costs are usually skewed to
the right because the observed costs stay at a stable
level for most time and become larger from time to
time when disturbances occur in the system.
Since the uncorrelation assumption is rarely violated
in our application, it is not checked by our regression
analysis program. For the normality assumption,
many studies have shown that regression analysis is
robust to it [7; 9] ; that is, the technique will give usable
results even if this assumption is not satisfied. In fact,
the normality assumption is not required to obtain the
point estimates of the B_j's, the fitted values of Y, and s. This assumption
is required only when constructing confidence intervals
and hypothesis-testing decision rules. In our ap-
plication, we will not construct confidence intervals,
and the only hypothesis-test that needs the normality
assumption is the F -test which will be discussed
later. Like many other statistical applications, if only
the normality assumption is violated, we choose to ignore
this violation. Thus, the normality assumption is
not checked by our regression analysis program either.
When the assumption of equal variances is violated,
a correction measure is usually taken to eliminate or
reduce the violation. Before a correction measure is
taken, let us first discuss how to test for the violation
of equal variances.
Assuming that a regression model is proper to fit
sample observations, the sampled residuals should reflect
the assumptions on the error terms. We can,
therefore, use the sampled residuals to check the as-
sumptions. There are two ways in which the sampled
residuals can be used to check the assumptions:
residual plots and statistical tests. The former is sub-
jective, while the latter is objective. Since we try to
develop a program to test assumption 2 automatically,
we employ the latter.
As mentioned before, if the assumption of equal
variances is violated in our application, variances typically
increase with the level of the response variable.
In this case, the absolute values of the residuals usually
have a significant correlation with the fitted values
of the response variable. A simple test for the correlation
between two random variables u and w when the
bivariate distribution is unknown is to use Spearman's
rank correlation coefficient [9; 12], which is defined as
r_s = 1 - 6 · Σ_i d_i^2 / (n · (n^2 - 1)),   where d_i = R(u_i) - R(w_i),
R(u_i) and R(w_i) are the ranks of the values u_i and w_i of u and w, respectively, and n is the number of observations. The null and alternate hypotheses are as follows:
H_0: The values of u and w are uncorrelated.
H_A: Either there is a tendency for larger values of u to be paired with the larger values of w, or there is a tendency for smaller values of u to be paired with larger values of w.
The decision rule at the significance level α is:
If r_s < ρ_{1-α/2} or r_s > ρ_{α/2}, conclude H_A.
The critical values ρ can be found in [9].
If HA is concluded for the absolute residuals and fitted
values, the assumption of equal variances is violated.
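For illustration, the following Python sketch computes Spearman's rank correlation between the absolute residuals and the fitted values with scipy.stats.spearmanr; relying on the returned p-value instead of tabulated critical values, and the synthetic data, are assumptions of this sketch.

import numpy as np
from scipy import stats

def heteroscedasticity_check(fitted, resid):
    # Spearman's rank correlation between |residuals| and fitted values; a large
    # correlation suggests the error variance grows with the response level.
    r_s, p_value = stats.spearmanr(np.abs(resid), fitted)
    return r_s, p_value

# Example with synthetic data whose spread grows with the fitted value.
rng = np.random.default_rng(0)
fitted = np.linspace(1.0, 10.0, 50)
resid = rng.normal(0.0, 0.2 * fitted)        # variance increases with level
print(heteroscedasticity_check(fitted, resid))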
If the assumption of equal variances is violated, the
estimates given by the corresponding regression model
will not have the maximum precision [2] . Since the estimation
precision requirement is not high for query
optimization, the violation of this assumption can be
tolerated to a certain degree. However, if the assumption
of equal variances is severely violated, account
should be taken of this in fitting the model.
A useful tool to remedy the violation of the equal
variances assumption is the method of weighted least
squares. The idea is to provide differing weights for the observations in (1); that is, to find the values of the B_j's that minimize the weighted sum of squares
LS_w = Σ_i w_i · (Y_i - Ŷ_i)^2 ,
where Ŷ_i is the value predicted by model (1) for the i-th observation and w_i is the weight for the i-th Y observation. Least squares theory states that the weights w_i should be inversely proportional to the variances σ_i^2 of the error terms. Thus an observation Y_i that has a large
variance receives less weight than another observation
that has a smaller variance. The (weighted) variances
of error terms tend to be equalized.
Unfortunately, one rarely has knowledge of the variances
σ_i^2's. To estimate the weights, we do the fol-
lowing. The sample data is used to obtain the fitted
regression function and residuals by ordinary least
squares first. The cases are then placed into a small
number of groups according to level of the fitted value.
The variance of the residuals is calculated for each
group. Every Y observation in a group receives a
weight which is the reciprocal of the estimated variance
for that group.
Moreover, we use the results of weighted least
squares to re-estimate the weights and obtain a new
weighted least squares fit. This procedure is continued
until no substantial changes in the fitted regression
function take place or too many iterations occur.
In the latter case, the fitted regression function with
the smallest Spearman's rank correlation coefficient is
chosen. This procedure is called an iterative weighted
least squares procedure.
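A minimal Python sketch of such an iterative weighted least squares loop is given below; the number of groups, the iteration limit, the convergence tolerance, and the quantile-based grouping of fitted values are illustrative assumptions.

import numpy as np

def weighted_fit(X, Y, w):
    # Weighted least squares via scaling rows by the square roots of the weights.
    sw = np.sqrt(w)
    B, *_ = np.linalg.lstsq(X * sw[:, None], Y * sw, rcond=None)
    return B

def iterative_wls(X, Y, n_groups=4, max_iter=10, tol=1e-4):
    w = np.ones(len(Y))
    B = weighted_fit(X, Y, w)
    for _ in range(max_iter):
        fitted = X @ B
        resid = Y - fitted
        # Place cases into a small number of groups by the level of the fitted value.
        edges = np.quantile(fitted, np.linspace(0.0, 1.0, n_groups + 1))
        group = np.clip(np.searchsorted(edges, fitted, side="right") - 1,
                        0, n_groups - 1)
        var = np.ones(n_groups)
        for g in range(n_groups):
            r = resid[group == g]
            if r.size > 1 and r.var() > 0:
                var[g] = r.var()
        w = 1.0 / var[group]          # weight = reciprocal of the group's variance
        B_new = weighted_fit(X, Y, w)
        if np.max(np.abs(B_new - B)) < tol:
            return B_new, w
        B = B_new
    return B, w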
5.4 Testing Significance of Regression
Model
As mentioned previously, to evaluate the goodness
of the developed regression model, two descriptive
measures are used: the standard error of estimation
and the coefficient of multiple determination. A good
regression model is evidenced by a small standard error
of estimation and a high coefficient of multiple determination.
The significance of the developed model can be further
tested by using the F -test [7; 9] . The F -test was
derived under the normality assumption. However,
there is some evidence that non-normality usually
does not distort the conclusions too seriously [12] .
In general, the F -test under the normality assumption
is asymptotically (i.e., with sufficiently large
samples) valid when the error terms are not normally
distributed [1]. Therefore, the F-test is adopted in our application
to test the significance of a regression model
although the error terms may not follow the normality
assumption.
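For concreteness, the Python sketch below computes the overall F-statistic and its p-value from the regression and residual sums of squares; it assumes a design matrix whose first column is the intercept and uses scipy.stats for the F distribution.

import numpy as np
from scipy import stats

def overall_f_test(X, Y):
    # Overall significance test of a fitted regression model.
    n, p = X.shape                           # p includes the intercept column
    B, *_ = np.linalg.lstsq(X, Y, rcond=None)
    fitted = X @ B
    ssr = np.sum((fitted - Y.mean()) ** 2)   # regression sum of squares
    sse = np.sum((Y - fitted) ** 2)          # residual sum of squares
    df1, df2 = p - 1, n - p
    F = (ssr / df1) / (sse / df2)
    p_value = stats.f.sf(F, df1, df2)
    return F, p_value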
Class | Characteristics of Queries in the Class | Likely Access Method
Gu1 | unary queries whose qualifications have at least one conjunct R_i.an = C where R_i.an is indexed | index scan method with a key value
Gu2 | unary queries that are not in Gu1 and whose qualifications have at least one conjunct R_i.an θ C where R_i.an is indexed and θ ∈ {<, ≤, >, ≥} | index scan method with a range
Gu3 | unary queries that are not in Gu1 or Gu2 | sequential scan method
Gj1 | join queries whose qualifications have at least one conjunct R_i.an = R_j.am where either R_i.an or R_j.am (or both) is indexed | index-based join method
Gj2 | join queries that are not in Gj1 and whose qualifications have at least one index-usable conjunct for one or both operand tables | nested-loop join method with index reduction first
Gj3 | join queries that are not in Gj1 or Gj2 | sort-merge join method
Table 1: Considered Query Classes
6 Experiments
To verify the feasibility of the presented statistical
procedure, experiments were conducted within
a multidatabase system prototype, called CORDS-
MDBS. Three commercial DBMSs, i.e., ORACLE 7.0,
EMPRESS 4.6 and DB2/6000 1.1.0, were used as local
DBMSs in the experiments. All the local DBMSs were
run on IBM RS/6000 model 220 machines. Due to the
limitation of the paper length, only the experimental
results on ORACLE 7.0 are reported in this paper.
The experiments on the other systems demonstrated
similar results.
The experiments were conducted in a system environment
where the contention factors were approximately
stable. For example, they were performed during
midnights and weekends when there was no or little
interference from other users in the systems. However,
occasional interference from other users still existed
since the systems were shared resources.
Queries for each local database system were classified
according to the query sampling method. The
considered query classes 7 are given in Table 1. Sample queries were then drawn from each query class and performed
on the three local database systems. Their observed
costs are used to derive cost models for the
relevant query classes by the statistical procedure introduced
in the previous sections.
Tables
2 and 3 show the derived cost models and
the relevant statistical measures. It can be seen that:
• Most cost models capture over 90% of the variability in query cost, as observed from the coefficients of total determination. The only exception is for G_u1, whose queries can be executed very fast (i.e., they are small-cost queries) due to their efficient access methods and small result tables.
• The standard errors of estimation for the cost models are acceptable, compared with the magnitudes of the relevant average observed costs of the sample queries.
• The statistical F-tests at the significance level 0.01 show that all derived cost models are useful for estimating the costs of queries in the relevant query classes.
• The statistical hypothesis tests for the Spearman's rank correlation coefficients at the chosen significance level show that there is no strong evidence indicating a violation of the equal variances assumption for any derived cost model after using the method of weighted least squares where needed.
• Derivations of most 8 cost models require the method of weighted least squares, which implies that the error terms of the original regression models (using the regular least squares) violate the assumption of equal variances in most cases.
7 Only equijoin queries were considered.
In summary, the statistical procedure derived useful
cost models. Figure 2 shows a typical comparison
between the observed costs and our estimated costs
for some test queries.
As mentioned, the experimental results show that
small-cost queries often have worse estimated costs
than large-cost queries. This observation coincides
with Du et al.'s observation for their calibration
method. The reason for this phenomenon is that (1)
a cost model is usually dominated by large costs used
to derive it, while the small costs may not follow the
same model because different buffering and processing
strategies may be used for the small-cost queries; (2) a
small cost can be greatly affected by some contention
factors, such as available buffer space and the number
of current processes; (3) initialization costs, distribution
of data over a disk space and some other factors,
which may not be important for large-cost queries,
8 Some unreported cost models for other local database systems
in the experiments did not require the method of weighted
least squares.
Table 2: Derived Cost Formulas for Query Classes on ORACLE 7.0 (one derived cost estimation formula per query class)
Table 3: Statistical Measures for Cost Formulas on ORACLE 7.0 (per query class: coefficient of multiple determination, standard error of estimation, average observed cost in seconds, F-statistic with its critical value, Spearman's rank correlation with its critical value, and whether weighted least squares was used)
could have major impact on the costs of small-cost queries.
Figure 2: Observed and Estimated Costs for Test Queries in G_j3 on ORACLE (cost as elapsed time versus table cardinality; solid line: estimated cost, dotted line: observed cost)
Since the causes of this problem are usually uncontrollable
and related to implementation details of the
underlying local database system, it is hard to completely
solve this problem at the global level in an
MDBS. However, this problem could be mitigated by
(a) refining the query classification according to the
sizes of result tables; and/or (b) performing a sample
query multiple times and using the average of observed
costs to derive a cost model; and/or (c) including in
the cost model more explanatory variables if available,
such as buffer sizes, and distributions of data in a disk
space.
Fortunately, estimating the costs of small-cost queries
is not as important as estimating the costs of large-
cost queries in query optimization because it is more
important to identify large-cost queries so that "bad"
execution plans could be avoided.
7 Conclusion
Today's organizations have increasing requirements
for tools that support global access to information
stored in distributed, heterogeneous, autonomous data
repositories. A multidatabase system is such a tool
that integrates information from multiple pre-existing
local databases. To process a global query efficiently
in an MDBS, global query optimization is required.
A major challenge for performing global query optimization
in an MDBS is that some desired local cost
information may not be available at the global level.
Without knowing how efficiently local queries can be
executed, it is difficult for the global query optimizer
to choose a good decomposition for the given global
query.
To tackle this challenge, a feasible statistical procedure
for deriving local cost models for a local data-base
system is presented in this paper. Local queries
are grouped into homogeneous classes. A cost model
is developed for each query class. The development of
cost models is based on multiple regression analysis.
Each cost model is divided into two parts: a basic
model and a secondary part. The basic model is
based on some existing cost models in DBMSs and
used to capture the major performance behavior of
queries. The secondary part is used to improve the
basic model. Potential explanatory variables that can
be included in each part of a cost model are identified.
A backward procedure is used to eliminate insignificant
variables from the basic model for a cost model.
A forward procedure is used to add significant variables
to the secondary part of a cost model. Such
a mixed forward and backward procedure can select
proper variables for a cost model efficiently.
During the regression analysis, outliers are removed
from the sample data. Multicollinearity is discovered
by using the variance inflation factor and prevented
by excluding variables with larger variance inflation
factors. Violation of the equal variance assumption is
detected by using Spearman's rank correlation coefficient
and remedied by using an iterative weighted least
squares procedure. The significance of a cost model is
checked by the standard error of estimation, the coefficient
of multiple determination, and F-test. These
measures ensure that a developed cost model is useful.
The experimental results demonstrated that the
presented statistical procedure can build useful cost
models for local database systems in an MDBS.
The presented procedure introduces a promising
method to estimate local cost parameters in an MDBS
or a distributed information system. We plan to
investigate the feasibility of this method for non-relational
local database systems in an MDBS in the
future.
--R
The Theory of Linear Models and Multivariate Analysis.
Regression Analysis by Example
Query optimization in heterogeneous DBMS.
Query optimization in data-base systems
Practical selectivity estimation through adaptive sampling.
On global multidatabase query optimization.
Applied Linear Statistical Models
Simple random sampling from relational databases.
Statistical Methods for Business and Economics.
Access path selection in relational database management systems.
Accurate estimation of the number of tuples satisfying a condition.
Statistical Methods
Query optimization in multidatabase systems.
--TR
--CTR
Wei Ru Liu , Zhi Ning Liao , Jun Hong, Query cost estimation through remote system contention states analysis over the Internet, Web Intelligence and Agent System, v.2 n.4, p.279-291, December 2004
Qiang Zhu , Satyanarayana Motheramgari , Yu Sun, Cost estimation for queries experiencing multiple contention states in dynamic multidatabase environments, Knowledge and Information Systems, v.5 n.1, p.26-49, March
Ying Chen , Qiang Zhu , Nengbin Wang, Query processing with quality control in the World Wide Web, World Wide Web, v.1 n.4, p.241-255, 1998
Amira Rahal , Qiang Zhu , Per-ke Larson, Evolutionary techniques for updating query cost models in a dynamic multidatabase environment, The VLDB Journal The International Journal on Very Large Data Bases, v.13 n.2, p.162-176, May 2004
Ning Zhang , Peter J. Haas , Vanja Josifovski , Guy M. Lohman , Chun Zhang, Statistical learning techniques for costing XML queries, Proceedings of the 31st international conference on Very large data bases, August 30-September 02, 2005, Trondheim, Norway
Zaiqing Nie , Subbarao Kambhampati , Ullas Nambiar, Effectively Mining and Using Coverage and Overlap Statistics for Data Integration, IEEE Transactions on Knowledge and Data Engineering, v.17 n.5, p.638-651, May 2005
Piyush Shivam , Shivnath Babu , Jeff Chase, Active and accelerated learning of cost models for optimizing scientific applications, Proceedings of the 32nd international conference on Very large data bases, September 12-15, 2006, Seoul, Korea
Qiang Zhu , Jaidev Haridas , Wen-Chi Hou, Query optimization via contention space partitioning and cost error controlling for dynamic multidatabase systems, Distributed and Parallel Databases, v.23 n.2, p.151-188, April 2008
Subbarao Kambhampati , Eric Lambrecht , Ullas Nambiar , Zaiqing Nie , Gnanaprakasam Senthil, Optimizing Recursive Information Gathering Plans in EMERAC, Journal of Intelligent Information Systems, v.22 n.2, p.119-153, March 2004 | cost estimation;cost model;global query optimization;multidatabase system;multiple regression |
383258 | Integrated test of interacting controllers and datapaths. | In systems consisting of interacting datapaths and controllers and utilizing built-in self test (BIST), the datapaths and controllers are traditionally tested separately by isolating each component from the environment of the system during test. This work facilitates the testing of datapath/controller pairs in an integrated fashion. The key to the approach is the addition of logic to the system that interacts with the existing controller to push the effects of controller faults into the data flow, so that they can be observed at the datapath registers rather than directly at the controller outputs. The result is to reduce the BIST overhead over what is needed if the datapath and controller are tested independently, and to allow a more complete test of the interface between datapath and controller, including the faults that do not manifest themselves in isolation. Fault coverage and overhead results are given for four example circuits. | INTRODUCTION
Systems consisting of interacting datapaths and controllers are typically designed by
synthesizing and testing the datapath and controller independently, even though
the two operate as an inseparable pair. This separation can cause difficulties in testing:
even if the datapath and controller are designed such that they are 100%
Current addresses are: M. Nourani, Dept. of Electrical Engineering, The University of Texas at
Dallas, P.O. Box: 830688, EC 33, Richardson, Dept. of Electrical
Engineering, The University of Akron, Akron, OH, 44325-3904; C. Papachristou, Dept. of EECS,
Case Western Reserve University, 10900 Euclid Avenue, Cleveland, OH 44106-7071.
Permission to make digital or hard copies of part or all of this work for personal or classroom
use is granted without fee provided that copies are not made or distributed for profit or direct
commercial advantage and that copies show this notice on the first page or initial screen of a
display along with the full citation. Copyrights for components of this work owned by others than
ACM must be honored. Abstracting with credit is permitted. To copy otherwise, to republish,
to post on servers, to redistribute to lists, or to use any component of this work in other works,
requires prior specific permission and/or a fee.
c
2000 by the Association for Computing Machinery, Inc.
testable taken separately, when the two are taken in combination the testability
may be severely degraded [Dey et al. 1995]. In addition, separating tests for the
datapath and the controller may result in neglecting the control/status signals used
to communicate between the two. Moreover, some faults can be seen only when
modules interact with each other, such as faults due to phenomena like crosstalk
and reflection [Bakoglu 1990], faults that create signal skew among cores receiving
the same signal [Sparmann et al. 1995], and faults that cause excessive power
consumption in the circuit [Nourani et al. 1997].
Few, if any, synthesis tools address the issue of how to test the datapath and
controller of an interacting pair in an integrated way. The main goal of our work is
to test a controller-datapath pair realistically and quickly, without neglecting the
signals used for communicating between the two. The key to a successful integrated
system test of a controller-datapath pair is to provide a method to propagate and
observe the effect of certain controller faults through the datapath, so that they
can be observed at the datapath registers rather than at the controller outputs.
In this way, we can avoid the test hardware overhead associated with observing
the controller outputs directly. This approach will test the interconnects between
datapath and controller more effectively than would separate tests of datapath and
controller. Moreover, our approach can detect certain type of redundant controller
faults that, although they may not affect the overall system functionality of the
controller-datapath pair, do have deleterious effects on the system, such as increased
power consumption. At the same time, our approach will not substantially increase
the overall system cost.
The basis of our technique is the addition of a small finite state machine (FSM)
that interacts with the main controller FSM, for the purpose of making controller
faults observable at the datapath registers. This state machine is designed to work
independent of implementation details, the design-for-testability technique, and the
design tools used to synthesize the controller and datapath.
1.1 Related Work
There are many well-known problems in controller optimization. The works presented in
[Devadas and Newton 1989], [Ashar and Devadas 1991] and [Lagnese and Thomas
1989] apply finite state machine (FSM) decomposition techniques to improve controller
area or performance. The importance of state assignment is discussed in
[DeMicheli et al. 1984] and [Devadas et al. 1988], among others. Recently, the effect
of controller design on power consumption has been explored in [Landman and
Rabaey 1995]. The work of [Benini and DeMicheli 1994] uses special state assignments
to reduce power, while [Benini et al. 1994] adds some combinational logic to
the original controller to avoid inactive state transitions.
For self-testable designs based on BIST (Built-In-Self-Test), research involving
controllers focuses on test plan and test scheduling [Abadir and Breuer 1985] [Kime
and Saluja 1982] [Jone et al. 1989]. [Hellebrand and Wunderlich 1994] uses additional
test registers to implement the system function supporting self-testable
pipeline-like controller. The MMC control scheme in [Breuer and Lien 1988] is able
to test a chip in a module via a boundary scan bus. A local dedicated test controller
is discussed in [Joersz and Kime 1987] to reduce the overall test overhead. Some
other heuristics and examples are [Eschermann and Wunderlich 1990], which uses
special state assignment and feedback polynomial, [Mukherejee et al. 1991], which
uses one-hot encoding, and [Breuer et al. 1988], which employs microprogrammed
and hard-wired implementations of the controller. The method proposed in [Kuo
et al. 1995] adds some additional edges to FSM to make the corresponding architecture
testable. The authors of [Hsu and Patel 1995] note that some FSMs are not
easily controllable because they require a long synchronizing sequence, and propose
a method to improve FSM testability.
None of these approaches use a unified model to test the controller-datapath pair.
Instead, datapath and controller are tested separately in different test sessions. For
these approaches, the basic test scheme is similar to what [Bhatia and Jha 1994]
proposed; the controller output signals are multiplexed with some or all of the
datapath primary outputs, thus making them directly observable. Observing the
controller and datapath faults separately, in general, implies more test time (due
to separate test sessions) and more overhead (due to direct observation of each).
[Dey et al. 1995] observed that even when the controller and datapath are 100%
testable separately, the combination of them has usually much lower coverage. This
degradation, in their opinion, occurs due to the correlation and dependency between
the control signals. Then, to improve testability the authors propose to redesign
the controller by breaking the correlation between the control signals.
1.2 Organization of Paper
This paper is organized as follows. Section 2 presents a system model for testing a
datapath/controller pair. Issues central to the testing of controllers are presented
in Section 3. These issues include a classification of the types of faults in the
controller, and the impact that the controller faults have on the nonfunctional
aspects of the system, such as manufacturing and power. Section 4 details our solution for
integrated datapath/controller testing. Experimental results are shown in Section
5, and concluding remarks are in Section 6.
2. MODEL
In our system model, introduced in [Carletta and Papachristou 1995], the datapath
is represented behaviorally by a data flow graph (DFG), in which nodes represent
operations such as addition and multiplication, and edges represent the transfer of
data. Structurally, the datapath consists of arithmetic logic units (ALUs), mul-
tiplexers, registers, and busses, and is responsible for all data computations. We
assume that the datapath is composed of functional blocks like that shown in Figure
1. Behaviorally, the controller is viewed as a state diagram that specifies the
time steps in which the various operations in the data flow graph are done. For
this work, controllers are implemented structurally as finite state machines using
random logic.
The traditional approach to testing a controller-datapath pair is shown in Figure
2. The pair is completely split, and the two parts are tested independently. If the
controller and datapath can be tested at different times, multiplexers may be used
to share the test resources; for example, one TPGR and one multiplexer can be
used instead of the two TPGRs shown on the figure. For traditional designs, for
which design-for-testability decisions are made for one component without thinking
about how the component will be used in the context of the pair, this approach
Fig. 1. One functional block defining our datapath style (a multiplexer/bus with select signal MS feeding an ALU whose result is stored in register R under load signal RL; the select and load signals come from the controller).
Fig. 2. Separate testing: (a) for the controller; and (b) for the datapath (each block is driven by its own TPGR and observed by its own MISR, with the start, done, control, status, data in, and data out lines handled in isolation).
Fig. 3. Completely integrated testing of datapath and controller (a single TPGR drives the data inputs and a single MISR observes the data outputs; the start, done, control, and status lines remain internal to the pair).
has the advantage that it tests the components as the designers intended. There-
fore, fault coverage for individual components will be as high as the design allows.
However, this approach is undesirable because it does not test the interface between
controller and datapath, and because it requires a large amount of insertion.
These disadvantages are addressed by a completely integrated approach, as shown
in
Figure
3. In an integrated approach, the controller-datapath pair is treated as
an inseparable unit, and the two parts are tested simultaneously.
One motivation for treating a controller-datapath pair as an integrated system is
shown in Figure 4. The figure shows a controller and datapath, with a single control
line extending from one to the other. In an independent test of the controller,
this line would be tapped so that the output of the controller could be observed
directly by an MISR.
Fig. 4. Illustration of the disadvantage of separate testing (a single control line runs from the controller deep into the datapath; tapping it at the controller output for the MISR leaves the rest of the line unobserved).
Fig. 5. Fault coverage curves for the datapath and the controller of a differential equation solver, when tested separately and when tested together as a pair (fault coverage versus time in clocks; (a) for the datapath, (b) for the controller).
Even though such an arrangement allows good observation
of the controller, there is still a segment of the control line, shown on the figure
as a dotted line, that can not be observed. This control line extends far into the
datapath, and may control multiple registers and / or multiplexers. Even if part
of the segment is tested during the test of the datapath, it is difficult to ensure
coverage of the complete line. In particular, testing the line as a series of segments
may miss problems due to phenomena like crosstalk, reflection and signal skew that
show up only during operation of the overall system. Note that if the controller and
datapath are laid out in separate blocks, the control lines may be of significantly
longer length than other wires, and may therefore be more susceptible to faults. By
doing an integrated test, for which we observe the controller through the datapath,
we ensure that we test the entire control line.
Figure
5 uses a differential equation solver as an example to compare separate
and integrated testing of a controller-datapath pair. Separate fault coverage curves
are shown for the datapath and controller. As can be seen from the curves, fault
coverage is degraded when the datapath and controller are tested together; this is
because controllability and observability of the control and status lines are reduced;
in the integrated test, these lines are no longer directly accessible. The goal of this
research is to achieve a high quality integrated test by overcoming these difficulties.
This work concentrates exclusively on the test of the controller in an integrated
environment. The focus is on enhancing the observability of the control lines
through the datapath through the addition of some extra logic to the system. The
work complements previous work, reported in [Carletta and Papachristou 1997],
that develops a scheme under which the datapath can be tested in an integrated
way. In that work, the datapath is exercised according to its normal behavior even
during test; guidelines based on high-level testability metrics are given for modifying the datapath to ensure that the quality of such a test is sufficient.
Fig. 6. Classification of controller faults, with the number of faults belonging to each category in example controllers (controller faults divide into controller-functionally redundant (CFR) faults and controller-functionally irredundant (CFI) faults; the CFI faults further divide into system-functionally irredundant (SFI) faults and system-functionally redundant (SFR) faults).
Design | Number of faults that are: CFR | CFI | SFI | SFR
poly | 0 | 164 | 136 | 28
These two
pieces of work can be used together to ensure a full integrated system test.
3. CONTROLLER FAULT ANALYSIS
We classify stuck-at faults internal to the controller into several groups as shown in
Figure
6 [Carletta et al. 1999]. The first division is based on whether a fault affects
the functionality of the controller. By functionality, we mean the input-output
behavior of the synthesized controller as it operates in normal mode. Faults that
never affect the output of the synthesized controller in normal mode are controller-
functionally redundant or CFR. CFR faults can not be detected even by direct
observation of the controller outputs during normal mode operation. Detection
of these faults may require, for example, the application of transitions that the
designer left unspecified, perhaps because some of the states of the controller are
unused, or because for some states some input combinations will never occur. The
work in [Fummi et al. 1995] shows that controller resynthesis can be used to remove
these faults if they are a concern.
Note that a stuck-at fault inside the controller may affect the controller outputs
in a sequential way, causing the controller outputs to change only in one or more
control steps in the controller state diagram. The other kind of fault, which we call
controller-functionally irredundant or CFI, affects the output of the controller in at
least one time step when the controller is running in normal mode. Faults of this
kind will be caught in an independent test of the controller, for which we operate
the controller in normal mode and observe the controller outputs directly.
We further divide the controller-functionally irredundant faults into two sub-
groups, based on whether a fault affects the functionality of the datapath/controller
pair as a system. System-functionally irredundant or SFI faults, are those faults
that change the input-output behavior of the system as a whole. Some faults in
the controller clearly affect the function performed by the datapath; for example,
a fault whose effect changes a "care" specification of a multiplexer select line will
cause an operation to be done on incorrect data, thereby affecting a change in the
results of the computation. System-functionally redundant or SFR faults are those
faults that do not affect the input-output behavior of the system, even though they
did affect the input-output behavior of the controller. One example of a system-
7functionally redundant fault is a fault that affects bits of the controller output only
in time steps when those bits are "don't care'' specifications. For example, a fault
may affect a multiplexer select line only in those time steps when the multiplexer
is idle. In time steps when no register driven by a multiplexer is loaded, the select
line for the multiplexer is a "don't care'', and the multiplexer does not take part in
any register-to-register transfer. Depending on how the controller was synthesized,
the select line will be either a 0 or 1. Although the actual value of the select line
will make a difference in terms of what signals are propagating locally in the area
of the multiplexer, these signals are never written to any register, are never used in
computation, and therefore do not affect the function performed by the datapath.
If some fault in the controller causes the select line value to change, there is no way
to observe the change through the datapath, and the function of the datapath is
not affected.
The difference between our fault classification model and that of [Fummi et al.
1995] is subtle, but important. In [Fummi et al. 1995], faults are distinguished in
terms of the controller functionality as specified by the designer, whereas we consider
the functionality of the synthesized controller. In the designer's specification,
some outputs may be unspecified, whereas in the synthesized version specific values
have been chosen for all the outputs as a byproduct of the synthesis. In [Fummi
et al. 1995], any fault that affects a "care" specification at the controller output
is viewed as irredundant. However, some system-functionally redundant faults affect
even the "care" specifications for the outputs of the controller. In some sense,
these faults are due to redundant logic within the system. Even if the controller
and datapath have no redundancy when considered separately, the combined system
may have redundancy. For example, suppose that some fault in the controller
causes a register to be loaded in a time step when it should not be loaded. If
the extra load overwrites some important part of the computation, it will be de-
tectable. However, the extra load may write into a register that is not currently
holding a computation value, or that is holding a computation value that will not
be used again. In this case, the extra load will not affect the functionality of the
datapath. It is possible to determine whether a fault that causes a change on a
register load line during a time step is system-functionally redundant by analyzing
the lifespans of the variables bound to the register. If no variable is alive during
that time step or if the extra load serves only to re-do a previous computation, the
fault is system-functionally redundant.
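The following Python sketch (our own illustration, not part of the paper's flow) expresses one possible form of this lifespan check: a hypothetical function extra_load_is_sfr declares an extra load system-functionally redundant if no variable bound to the register is alive in the faulty time step, or if the load merely re-does a previous computation.

def extra_load_is_sfr(lifespans, fault_step, rewrites_same_value=False):
    # lifespans: dict mapping variable name -> (birth_step, last_use_step) for
    # the variables bound to the affected register.
    if rewrites_same_value:
        return True                   # the load only re-does a previous computation
    for birth, last_use in lifespans.values():
        # A variable is alive between its defining write and its last use; an
        # unexpected load inside that window destroys a value that is still needed.
        if birth <= fault_step <= last_use:
            return False
    return True

# Example: register R holds v1 during steps 2..5 and v2 during steps 7..9.
print(extra_load_is_sfr({"v1": (2, 5), "v2": (7, 9)}, fault_step=6))  # True
print(extra_load_is_sfr({"v1": (2, 5), "v2": (7, 9)}, fault_step=4))  # False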
Table
6 shows how the faults in three example controllers, presented more completely
in the results section, break down into categories. For these controllers,
about 15% of faults in the controller are system-functionally redundant. Synthesizing
the controller so that SFR faults do not exist is not trivial; in [Carletta et al.
1999], we show that the key to removing SFR faults is a careful consideration of the
the meaning of "don't care'' specifications in the context of the controller-datapath
pair, and requires an analysis of the lifespans of the variables bound to registers in
the datapath. Controllers specified in other ways are likely to contain SFR faults.
In particular, controllers for systems utilizing gated clocks, which designate that
registers be loaded only when necessary to save the results of a computation, are
very likely to contain a significant number of SFR faults.
(a)
Clock   | Data Lines      | Load Line       | Latch [μW] | Register [μW]
Stopped | Random or Fixed | Random or Fixed | 0.07       | 0.07
Running | Random          | Random          | 130        | 151
        |                 | Stuck-at-0      | 26         | 25
Running | Fixed           | Random          | 68         | 96
(b)
Fault Presence   | Power [μW] | % increase
multiple effects | 2413       | 75.4%
Table I. Power consumption in μWatts for (a) four-bit storage components; and (b) a four-bit implementation of a differential equation solver in the presence of SFR faults.
3.1 Power and other non-functional effects of SFR faults
The synthesis method used for the controller impacts how many and what kind of
controller-functionally redundant and system-functionally redundant faults exist in
the controller [Carletta et al. 1999]. The presence of these faults does not affect
the functionality of the system. However, these faults may cause undesirable non-functional
effects. Whether detection of these faults is important depends on the
concerns of the designer, and groups responsible for manufacturing, reliability, and
quality assurance.
One non-functional effect of system-functionally redundant faults is increased
power consumption. Excessive power consumption may be undesirable in its own
right, and can also cause degradation in system performance as the chip heats up.
An SFR fault that causes harmless but unnecessary loading of "garbage" values
into a register will result in unnecessary power consumption in the register and any
combinational logic driven by the register. In essence, such a fault undermines the
gated clock scheme used for low power design.
To show the extent of effects on power, we measured the dynamic power consumption
of 4-bit storage elements. For implementation, we used components in the
0.8-Micron VCC4DP3 datapath library [VLSI Technology 1993] in the COMPASS
Design Automation tools [Compass Design Automation 1993]. We then ran the
COMPASS toolset with the "power enable" switch on to report the average power
consumption for a large number of random patterns. Table I (a) shows an experiment
to measure the power consumption. "Fixed" means that we have fixed that
signal to a randomly selected value and kept it unchanged for the entire simulation
process. "Random" means that the signal is driven with random patterns. Note
that even when the data input to the storage element is fixed, there is a considerable
amount of power consumption in the component due to the "clock" signal. A
system-functionally redundant stuck-at-1 fault on the load line will cause a dramatic
increase in power consumption. For random data inputs, power consumption
in a latch rises from 56 to 176 µW, and for fixed inputs power consumption
rises from 26 to 97 µW. This is a 200-300% increase in the power consumption of a
single latch.
The above simulation was for a non-embedded storage element. To verify that
the same phenomenon occurs when storage elements are embedded in a circuit,
we have repeated the power simulation for a complete design that implements a
differential equation solver. Here, we are careful to inject only faults that are
system-functionally redundant; throughout the experiment, the functionality of the
datapath remains the same as in the fault-free case. Table I(b) summarizes the
result. Faults 1 and 2 correspond to two different system-functionally redundant
single stuck-at-1 faults on two specific register load lines. The presence of fault 1
causes a 2% increase in overall power consumption, while the presence of fault 2
causes a 9% increase. The column labelled "multiple effects" reflects a worst case
scenario for this particular example, in which registers load as often as possible
without disrupting datapath functionality. In this scenario, multiple registers load
multiple times, and the increase in power consumption is a dramatic 75%.
In another example of an undesirable non-functional fault effect, the presence of
the fault may be an indication of some manufacturing problem. Taking an example
from [Aitken 1995], one manufacturing problem seen in real integrated circuits
is cracks in insulation layers. Over time, metal migrates into the cracks, forming
shorts. A system-functionally redundant fault caused by this manufacturing
problem may indicate more serious problems to come, as more shorts form, and
therefore be worth detecting.
4. A DESIGN SOLUTION
This section describes a solution to the controller testing problem that adds a small
finite state machine (FSM) to the system. This FSM "piggybacks" onto the original
controller, interacting with the controller in such a way that the effects of all
controller-functionally irredundant (CFI) faults, both system-functionally irredundant
and redundant (SFI and SFR), within the controller are pushed into the data
flow, where they can be observed at the outputs of the datapath registers. The
goals of our scheme are as follows:
- The scheme should work with any existing (ad-hoc or systematically synthesized)
  controller/datapath pair without architectural change.
- The overhead for the scheme should be less than the overhead required for the
  separate test scheme.
- The scheme should complement any test schemes for the controller and datapath
  indicated by the designer, making it possible to detect SFR and interface faults.
- All faults observable directly at the controller outputs, whether system-functionally
  redundant or irredundant, should be made observable at the datapath registers
  under the scheme.
The key to the method is to push the effects of controller faults from the controller-
datapath interface into the datapath registers. The conditions under which this can
be done successfully are explored in Section 4.1. Section 4.2 shows the implementation
details of our scheme, and explains how the finite state machine added to the
system ensures that the necessary conditions are present to observe faults through
the datapath registers. In Section 4.3, we show how observation costs can be reduced
by observing a single bit of each pertinent datapath register, rather than the
whole register.
Fig. 7. Propagation of a fault effect on a control line into the data flow: (a) for multiplexer select
lines; (b) for register load lines; and (c) a closer view of the register load line case.
4.1 Propagation of Controller Faults
In this section, we discuss how to propagate the effect of controller-functionally
irredundant (CFI) faults within the controller through the datapath. Any CFI
fault will cause at least one output of the controller to change in at least one time
step of the control schedule. Barring a detailed gate level analysis of the controller,
if we want to be sure to catch all CFI faults within the controller, we must be sure
that we can detect any change in a control line during any one time step. The key
to our approach is to ensure that changes in the control lines cause changes in the
data flowing through the datapath. In what follows, we consider multiplexer select
line fault effects, register load line stuck-at-1 effects and register load line stuck-at-0
effects separately.
Figure 7(a) shows a fault effect on the select line of a multiplexer (at point a).
The fault causes the wrong path through the multiplexer to be selected in some
time step of the control schedule. In that time step, for example, the multiplexer
may pass the incorrect value y instead of the correct value x. This will be noticeable
at point b as long as x ≠ y. In turn, the ALU performs the operation y + z instead
of the correct x + z, and the effect of the fault propagates further into the datapath,
to point c. To preserve the fault effect and propagate it to point d, the register must
be loaded in the same time step; otherwise, the result of the erroneous operation
y + z is never written, and is therefore lost.
Figure 7(b) illustrates a fault effect on a register load line (at point e). Suppose
first that the fault causes the load line to be stuck-at-0 in some time step; in this
case, the register is not loaded when it should be. Thus, it keeps its old value
c(t − 1) rather than obtaining a new value c(t). This will be noticeable at the
output of the register (at location d) only if c(t − 1) ≠ c(t), i.e., only if the missed
load would have written a new value into the register. Assuming that the system
is designed so that redundant computations are not done, this should be the case
for at least some of the test patterns.
If a register load line is stuck-at-1 in some time step, the register is loaded when
it should not be. This is noticeable at the register output (at location d) only if the
new value inadvertently loaded is different from the old value. Referring to Figure
7(b), we see that there are a number of ways for this to happen. First of all, the
multiplexer select line could have changed value since the last time the register was
loaded, so that the operand of the ALU comes from a different source. This is
noticeable as long as the new source supplies a different data value from the old
source (i.e., x ≠ y). Alternatively, the value of x itself may be changed; since x is
coming from another register in the datapath, it is possible that a new value has
been written to x. In this case, the inadvertent load will cause x(t) + z to overwrite
the correct value, x(t − 1) + z.
For all of the faults discussed, if the register is not a primary output register,
then multiplexer selects and register loads in subsequent time steps must serve to
propagate the erroneous value at the register output to an observable point. For
example, in the case of a register load line being stuck-at-1 in some time step, the
inadvertent load caused by the fault will be noticed only if the value of the register
is used at least once after the inadvertent load takes place.
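The propagation conditions above can be checked with a small behavioral model. The Python sketch below is a hypothetical stand-in for the slice shown in Figure 7, with an adder as the ALU; it shows that a select-line fault changes the stored value only when x ≠ y and the register loads, and that a load-line stuck-at-0 is masked when the missed load would have rewritten the same value.

    # Minimal model of the Figure 7 slice (mux -> adder ALU -> register). The
    # adder and the concrete operands are illustrative assumptions.
    def time_step(x, y, z, reg_old, ms, rl, ms_fault=None, rl_fault=None):
        """Simulate one control step. ms selects x (0) or y (1); rl loads the register."""
        if ms_fault is not None:
            ms = ms_fault            # stuck-at value on the mux select line
        if rl_fault is not None:
            rl = rl_fault            # stuck-at value on the register load line
        operand = y if ms else x
        alu_out = operand + z        # erroneous y + z instead of the correct x + z, etc.
        return alu_out if rl else reg_old

    # Select-line fault is visible only if x != y and the register actually loads:
    good = time_step(x=3, y=5, z=2, reg_old=0, ms=0, rl=1)
    bad  = time_step(x=3, y=5, z=2, reg_old=0, ms=0, rl=1, ms_fault=1)
    print(good, bad, good != bad)   # 5 7 True

    # Load-line stuck-at-0 is masked when c(t-1) equals the value that would
    # have been written: here the register already holds 5 = x + z.
    print(time_step(3, 5, 2, reg_old=5, ms=0, rl=1, rl_fault=0))  # 5: fault hidden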
Note that Figure 7 is solely an example; our method is not restricted to this specific
architectural style. Having multi-level multiplexers or fanouts at the outputs
of arithmetic logic units, multiplexers, or registers does not invalidate the above
argument. We elaborate on this shortly, after we explain how the controller faults
are observed.
4.2 Implementation
The purpose of the FSM that we add to the system is to allow us to detect the
changes in the controller output value by looking at the outputs of the datapath
registers, rather than directly at the controller outputs themselves. We justify the
method by looking at a single functional block of the datapath (as shown in Figure
1) in a single time step i. We would like to be able to detect any change in the
multiplexer select lines or register load line during this time step, with the following
requirements:
- The justification should not depend on the content of the register loaded at a time
  step before i.
- The scheme should work regardless of the values of MS and RL.
The method works by freezing the original controller to expand the time step
into two time steps. In the first of the two steps, a known value that is different
from what it is supposed to be under normal operation is loaded into the register.
This is accomplished by complementing the multiplexer select lines and loading the
register. In the second of the two steps, control signals for normal operation are
produced, and the original controller is unfrozen so that it will make the transition
to the next normal mode state. This is illustrated in Figure 8. As the figure
shows, when testing the controller, a time step that normally would produce the
control signals (MS_i, RL_i) is expanded into two time steps, one which produces
(~MS_i, 1), where ~MS_i denotes the bitwise complement of MS_i, and one which
produces (MS_i, RL_i). Note that this has the side effect of slowing down the
execution of the control schedule by a factor of two. The logic implementation for
the FSM needed to effect these changes is quite inexpensive; Figure 9 shows one
possible implementation.
Fig. 8. State diagrams illustrating how the added FSM interacts with the original controller:
(a) normal mode operation; (b) operation during testing of the controller.
Fig. 9. One possible logic implementation of the FSM added for our scheme.
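Behaviorally, the test-mode expansion can be sketched in a few lines of Python (this models the control-word sequence only, not the gate-level circuit of Figure 9; the function and signal encodings are our own illustration).

    def piggyback_expand(schedule, ms_width, rl_width):
        """schedule: list of (MS, RL) integers for one pass of the control schedule."""
        ms_mask = (1 << ms_width) - 1
        rl_all = (1 << rl_width) - 1
        expanded = []
        for ms, rl in schedule:
            expanded.append((ms ^ ms_mask, rl_all))  # ~MS, load every register
            expanded.append((ms, rl))                # normal control word
        return expanded

    # Example: a 2-step schedule with 3 select lines and 2 load lines.
    normal = [(0b101, 0b01), (0b010, 0b10)]
    for ms, rl in piggyback_expand(normal, ms_width=3, rl_width=2):
        print(f"MS={ms:03b} RL={rl:02b}")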
We now elaborate on the role of the added FSM in allowing the detection of
CFI faults within the controller. This is best described by Figure 10, which details
different cases. Note that this table shows typical active components (see Figure
1) at time step i. However, for simplicity the subscript i is not shown in the
figure. [R] and [R]~ denote the content of register R when the multiplexer select lines
are MS and ~MS, respectively. Also, MS_f denotes a multiplexer select for which at
least one bit is faulty due to a controller fault, and ~MS_f denotes its complement;
[R]_f and [R]~_f denote the content of register R in these two situations.
Fig. 10. The effect of the interaction between the controller and the piggyback FSM on a typical
register R at time step i: (a) the case in which R is supposed to load; (b) the case in which R is
not supposed to load.
Figure 10 shows how all controller faults that cause changes at the controller
outputs (MS and RL lines) in the given time step can be observed by checking the
content of register R. The figure is split into two cases:
Case 1. In the fault-free system, the register loads a new value at time step i
(i.e., RL is '1'). Part (a) of the figure shows the contents of register R in the fault-free
case, and in the presence of three different kinds of fault effects: stuck-at-0 on RL,
stuck-at-1 on RL, and stuck-at-0 or 1 on MS. The arrows show when a difference
in the contents of the register indicates that a fault will be detected. From the
figure, it is easily seen that any fault which causes RL to be stuck-at-0 or MS to
be stuck-at-0 or 1 in this time step will be detected. Note that if a fault causes
RL to be stuck-at-1 only in time steps during which (like this one) RL is supposed
to be a '1', the fault is controller-functionally redundant, and not targeted by this
technique.
Fig. 11. An active path with multi-level multiplexers and fanouts.
Case 2. In the fault-free system, the register does not load a new value
at time step i. Results for this case are shown in part (b) of the figure. The
arrows show that any fault which causes RL to be stuck-at-1 or MS to be stuck-
at-0 or 1 in this time step will be detected. Note that if a fault causes RL to be
stuck-at-0 only in time steps during which RL is supposed to be a '0', the fault is
controller-functionally redundant, and not targeted by this technique.
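The two cases can also be exercised with a small simulation. The Python sketch below is our own illustration with a hypothetical adder datapath; it does not reproduce Figure 10 itself, but runs the two expanded steps for one register and reports which single faults change the observed register contents. As discussed under Case 1, an RL stuck-at-1 in a step where RL is already '1' is controller-functionally redundant, and the sketch correctly reports it as missed.

    def run_two_steps(x, y, z, reg, ms, rl, ms_stuck=None, rl_stuck=None):
        observed = []
        for step_ms, step_rl in [(1 - ms, 1), (ms, rl)]:   # (~MS, 1) then (MS, RL)
            eff_ms = ms_stuck if ms_stuck is not None else step_ms
            eff_rl = rl_stuck if rl_stuck is not None else step_rl
            operand = y if eff_ms else x
            if eff_rl:
                reg = operand + z
            observed.append(reg)
        return observed

    ref = run_two_steps(x=3, y=5, z=2, reg=9, ms=0, rl=1)   # Case 1: RL is '1'
    faults = [("RL s-a-0", dict(rl_stuck=0)), ("RL s-a-1", dict(rl_stuck=1)),
              ("MS s-a-0", dict(ms_stuck=0)), ("MS s-a-1", dict(ms_stuck=1))]
    for name, kw in faults:
        faulty = run_two_steps(3, 5, 2, 9, ms=0, rl=1, **kw)
        print(name, "detected" if faulty != ref else "missed", faulty, ref)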
As mentioned earlier, our method of pushing controller faults into the data flow
is not restricted to any specific architectural style. In fact, in Figure 10, MS_i
refers to all select lines of the multiplexers that forward data in time step i. For
example, in Figure 11, where we show the active path through the data flow in step
i, MS_i = {MS1, MS2, MS3}. ~MS_i indicates that all select lines of all
three multiplexers (including the two multi-level multiplexers) are complemented;
this will have the desired effect, that is, it will forward incorrect data to the ALU.
Similarly, RL_i refers to all registers loading data from the ALU in that time step,
whether directly or indirectly (through muxes), as illustrated in Figure 11.
Having more fanouts on the multiplexer/ALU outputs or having more registers
driven by an ALU could even be beneficial for testing, since the effect of a fault
that is traveling from the controller to the datapath at time step i has the potential
to influence more components, and more erroneous values are loaded for checking.
This feature stems from the fact that all multiplexer select lines are complemented
and all storage elements are loaded in the additional control states.
4.3 Observation of Controller Faults
An important point is that when the effect of the fault moves from a control line to
the data flow, it moves from a single bit line to a multi-bit bus. The fault effect may
be seen on one or more lines of the bus. Figure 7(c) shows the transfer of the fault
effect from the register load line into an n-bit wide data flow, with individual bits
of the data flow shown. In moving the register load line fault effect from location e
to location d, which data bits change on the bus at location d depends on the
specific data values. The fault effect is noticeable in some bit of the data
if x ≠ y; more specifically, it is noticeable in bit i of the data bus at location
d if bit i of x is not equal to bit i of y. From a practical standpoint, if we test the
datapath using a reasonably large number of random patterns, it will be possible
to observe control line faults without observing all the bits of the datapath bus; a
single bit will suffice, because, for any bit i that we choose, there are likely to be
at least some patterns that cause a change in bit i.
We now explore this argument more quantitatively. Suppose that the patterns
being written to a register are random with a uniform distribution and uncorrelated
in time. Let c_i(t) denote bit i of the t-th pattern written to the register. When we
observe bit i of the register output, we will detect an error that affects the
register load line whenever c_i(t) ≠ c_i(t − 1). In other words, the fault will escape
detection only if c_i(t) = c_i(t − 1) for all N patterns written to the register.
Assuming that c_i(t) is a random signal, this will happen with probability 1/2^N:
c_i must take on the same value (0 or 1) for all N patterns, and the probability
that it takes a particular value in a given pattern is 1/2. Therefore, under these
assumptions, the probability that the fault escapes detection drops exponentially
with the test session length, and is quite small even for short test sessions. Note
that if all bits of the register output were observed, the probability of the fault
escaping detection would be even smaller; for an n-bit register, the probability
would be 1/2^(nN). We acknowledge that in practical circuits, the ALUs in particular
influence the randomness of signals, and the assumption that c_i(t) remains random
is often not valid for some or all bits of a signal. For example, if an ALU multiplies
its input by 4, the two least significant bits of the output remain zero all the time,
and their probability will be far from the ideal value of 1/2.
Empirically, we have observed that for the majority of arithmetic and logic ALUs,
randomness reduction does not invalidate our argument, and almost any bit can be
used for observation. This can be seen for an example system in the fault coverage
curves of Figure 12, which show the effects of observing a single bit (either the
most significant bit or the least significant bit of each register) versus observing all
datapath register bits. The curves corresponding to single bit observation rise a
bit more slowly than the full observation curve, but do reach the same final fault
coverage. One can easily perform a behavioral simulation of the register transfer
level datapath to find out which bit(s) can be successfully used for observation. In
previous work [Harmanani et al. 1994], we presented a randomness analysis of a
dataflow graph based on entropy^1. The simulation tool analyzes the behavior and
computes the randomness of each bit (and overall signals) generated by the ALUs.
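Under the idealized randomness assumption, the escape probability can also be checked numerically. The short Python sketch below is an illustration, not part of the tool described above; it estimates the escape probability by simulation and also shows the multiply-by-4 caveat.

    import random

    def escape_prob_single_bit(N, trials=100_000, seed=0):
        rng = random.Random(seed)
        misses = 0
        for _ in range(trials):
            initial = rng.randint(0, 1)                     # value already held in the register
            patterns = [rng.randint(0, 1) for _ in range(N)]
            misses += all(p == initial for p in patterns)   # observed bit never changes
        return misses / trials

    for N in (2, 4, 8):
        print(N, escape_prob_single_bit(N), 1 / 2**N)       # estimate vs. 1/2**N

    # A "multiply by 4" ALU forces the two least significant output bits to zero,
    # so observing bit 0 or bit 1 of a register it feeds can never reveal the fault.
    outputs = [(x * 4) & 0xFF for x in range(256)]
    print(all(o & 0b11 == 0 for o in outputs))              # True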
^1 Entropy of a binary signal X is defined as I(X) = − Σ_i p_{X,i} log2 p_{X,i}, where the sum runs
over the states i = 0, ..., 2^{|X|} − 1 of the signal, |X| denotes the bit width of X, and p_{X,i}
denotes the probability that X is in state i.
Fig. 12. Fault coverage curves for the proposed test scheme for the controller in a differential
equation solver when all datapath register bits are observed (full observation), versus observing
only the most significant bit (msb) or least significant bit (lsb) of each register; the plot shows
fault coverage against time in clock phases.
5. EXPERIMENTAL RESULTS
In this section, we demonstrate our approach using several example circuits. The
circuits have been synthesized from high level descriptions using the SYNTEST
synthesis system [Harmanani et al. 1992]. The output of SYNTEST is a register
transfer level datapath and a state diagram controller. Logic level synthesis is
done using the ASIC Synthesizer from the COMPASS Design Automation suite of
tools [Compass Design Automation 1993], using a finite state machine implementation
for the controller and based on a 0.8-micron CMOS library [VLSI Technology
1993]. The test pattern generation registers (TPGRs) necessary for built-in self-test
(BIST) are synthesized using COMPASS's Test Compiler. Fault coverage curves
are found for the resulting logic level circuits using AT&T's GENTEST fault simulator
[AT&T 1993]. GENTEST uses a single stuck-at fault model. The probability
of aliasing within the MISRs is neglected, as are faults within the TPGRs and other
test circuitry. Although the datapath and controller are tested together, we have
separated out the fault coverage curves for the controller to clarify the results.
We work with four example circuits, all with eight bit wide datapaths. The first
evaluates the third degree polynomial ax^3 + bx^2 + cx + d. Our second example
implements a differential equation solver and is a standard high level synthesis
benchmark [Gajski et al. 1992]. Our third example is another high level benchmark
known as the FACET example [Gajski et al. 1992]. Finally, the fourth example is
the well known fifth order elliptical filter from [Kung et al. 1985]. For the basis of
comparison, we show fault coverage and transistor count results for three different
test schemes:
Together Test. Corresponds to a completely integrated test of datapath and con-
troller, for which no additional hardware is added internal to the system. For this
case, we drive the inputs of the system from the TPGR, and observe the system
outputs, but make no changes at the datapath/controller interface.
Piggyback Test. Corresponds to our new test scheme for facilitating integrated
controller/datapath test. We add the piggyback finite state machine at the interface
between datapath and controller, then we drive the inputs of the system from the
TPGR, and observe one bit of all datapath registers.
Separate Test. Corresponds to an independent test of the controller separate from
the system. In this case, we drive the inputs of the controller from the TPGR, and
observe the controller outputs directly.
Transistor counts given are for an entire system under a given test scheme, and
include the datapath, controller, and any other circuitry necessary for the test. For
the "piggyback" test scheme, this includes the transistor count for the added finite
state machine.
Fig. 13. Fault coverage curves (fault coverage versus time in clocks) for the controllers of four
example circuits under the "separate", "piggyback" and "together" test schemes: (a) a polynomial
evaluator; (b) a differential equation solver; (c) the FACET benchmark; (d) the WAVE benchmark.
Fault coverage results for the four examples are shown in parts (a), (b), (c) and
(d) of Figure 13, respectively. Fault coverage is for the controller only. On the
fault coverage graphs, the vertical axes show fault coverage as the percentage of
controller faults detected, and the horizontal axes show time as a function of clock
cycles. During each of the tests, the controller is run in normal mode; in the case of
the polynomial, the schedule has five control steps, and so for the "separate" and
"together" schemes, after, for example, 100 clocks, the controller has run through
the schedule times. For the "piggyback" scheme, the action of the added
finite state machine serves to slow the speed down to twice as slow, and so for the
same 100 clocks the controller has run through the schedule 10 times. We see from
this that one penalty of our approach is that it takes approximately twice as long
for the fault coverage curves to saturate using the "piggyback" scheme. This effect
is seen for all four of our example circuits. However, this is not a serious limitation,
as the controller test is still quite short, especially when compared with the test
for the datapath, which may easily be an order of magnitude longer [Carletta and
Papachristou 1997].
As expected, the "together" test scheme results in very low fault coverage for
the controller in all four examples. This is due to the fact that it is very difficult
to observe the controller outputs through the datapath. For this scheme, all
system-functionally redundant faults go undetected.
Table II. Test circuitry required for the three test schemes.
(a) Parameters for system size:
    d_in    the bitwidth of the input data
    d_out   the bitwidth of the output data
    r       the number of registers
    m       the number of multiplexer select lines
    s       the number of status lines
(b) Test circuitry required in terms of these parameters for the "together", "piggyback" and
"separate" test schemes; the rows of the table give the width of the TPGR, the number of its
associated muxes, the width of the MISR, the number of its associated muxes, and other gates.
On the other hand, the "separate" test scheme does a good job of testing the controller. Because this scheme
observes the controller outputs directly, this test scheme is capable of catching all
controller-functionally irredundant faults, including the system-functionally redundant
ones. We see from the curves that under the "piggyback" scheme, final fault
coverages are very nearly as high as for the "separate" case. This indicates that
the piggyback is successful at pushing the system-functionally redundant faults out
into the datapath. Table II summarizes the test circuitry needed for each of the
three test schemes in terms of key system parameters. It includes the TPGR, the
MISR, and any muxes associated with them. For example, muxes are needed to
control whether the data inputs to the datapath are coming from the TPGR (as
in test mode) or from the system inputs (as in normal mode). If a single MISR is
used to test both the datapath and the controller, muxes are needed to determine
which component is driving the MISR at a given time.
Table III shows the relative sizes of systems implementing the three test schemes
for our four example circuits. Overhead is given relative to the overhead for the
"together" test scheme, since the "together" scheme represents the minimal amount
of test circuitry that can be used for any BIST scheme. The "together" test scheme
has the lowest area for test circuitry, since no attempt is made to observe extra
controller or datapath lines. The drawback, however, is low fault coverage. At
the other extreme, the "separate" test scheme has the highest fault coverage and
highest observability, but also the highest area overhead, ranging from 10.8% to
23.9% for the examples shown. In the middle is the "piggyback" test scheme; its
fault coverage is almost as good as the separate test scheme, but its area overhead
is much lower, from 3.8% to 7.6%.
The overhead advantage of the "piggyback" scheme over the "separate" scheme
arises from two sources. The first is reduced area requirements for the TPGR
compared to the "separate" scheme, due to the fact that under the "separate"
scheme, the TPGR must be at least wide enough to generate test bits for all inputs
of the datapath, both data and control, at once. In contrast, for the "piggyback"
scheme, the control lines are driven from the controller, so the TPGR must be only
as wide as the input data, with one extra bit for the start input to the controller.
There is a similar savings in the number of associated multiplexers. This often
results in a significant area savings.
Table III. Relative sizes of the systems for the three test schemes, in number of transistors, with
overhead figures relative to the size of the system for the "together" test scheme.
    Design    Together       Piggyback      Relative    Separate       Relative
              Test Scheme    Test Scheme    Overhead    Test Scheme    Overhead
    Poly      8186           8684           6.1%        9839           20.2%
    FACET     12966          13460          3.8%        14371          10.8%
Table IV. Comparing TPGR and MISR overhead for the three test schemes, in number of transistors
for the "together" test scheme and relative to the "together" method for the other two test schemes.
    Design    Together          Piggyback         Separate
              TPGR     MISR     TPGR     MISR     TPGR     MISR
    Poly      729      819      +0%      +0%      +43%     +0%
    FACET     729      440      +0%      +50%     +76%     +100%
The second source is reduced area requirements for the MISR due to the indirect
observation of the control lines through the datapath registers. The "separate"
scheme requires an MISR wide enough to watch all outputs of the datapath or all
outputs of the controller, whichever is wider, at once. Consider a system in which
the controller outputs outnumber the datapath outputs. Our MISR must be at
least as big as r + m + 1, where r is the number of registers and m is the number
of multiplexer select lines. In the "piggyback" scheme, in contrast, we require an
MISR wide enough to watch r bits. Thus, we see that the larger m is in
comparison to r, the more we will save on area for the MISR. There is a similar
savings associated with multiplexers necessary for sharing the same MISR when
testing both the datapath and the controller. Thus, for circuits in which the number of
controller outputs outnumbers the number of datapath outputs and m is reasonably
large compared to r, we will see a significant reduction in the amount of observation
circuitry needed. For other circuits, there may not be a significant reduction, and
the amount of observation circuitry may even grow slightly. However, the other
stated benefits, like detecting faults that would cause excessive power consumption,
will still exist.
As can be seen from the results, our test scheme requires slightly more observation
circuitry for two of the circuits: the differential equation solver, for which the
number of datapath outputs outnumbers the number of controller outputs, so that
a savings in the MISR width is not possible; and the FACET example, which has
eleven register load lines and only two multiplexer select lines. Here, m is too small
relative to r to offset the addition of the logic needed for the finite state machine
required for the "piggyback" approach. In general, for many designs the number
of multiplexer select lines will outnumber the register load lines, especially when
distributed multiplexers, buses with tri-state buffers, and one-hot encoding for the
multiplexer control are used. For the other two circuits, use of the "piggyback"
scheme does result in a reduction in the amount of observation circuitry.
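As a rough illustration of the widths involved (a sketch based only on the relations stated above, not on the exact entries of Table II), the following Python snippet compares the generation and observation circuitry implied for the "separate" and "piggyback" schemes.

    def test_widths(d_in, d_out, r, m):
        return {
            "separate":  {"TPGR": d_in + m + r,            # must drive data and control inputs
                          "MISR": max(d_out, r + m + 1)},  # watches controller or datapath outputs
            "piggyback": {"TPGR": d_in + 1,                # data inputs plus the controller start bit
                          "MISR": r},                      # one observed bit per datapath register
        }

    print(test_widths(d_in=8, d_out=8, r=11, m=2))   # FACET-like: little MISR saving
    print(test_widths(d_in=8, d_out=8, r=5, m=12))   # many select lines: large MISR saving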
6. CONCLUSION
This paper proposes a scheme for facilitating testing of datapath / controller pairs.
It advocates testing the pair in an integrated way, rather than testing the datapath
and controller completely independently by separating each from the system environment
during test. The scheme adds a small finite state machine to the system
that serves to enhance observability of the controller outputs, so that controller
faults can be observed at the outputs of the registers of the datapath. Experimental
results show that use of the scheme results in about one-third the test overhead
of that required for a scheme in which datapath and controller are tested sepa-
rately, with fault coverage that is as good, or nearly as good. In addition, for the
integrated scheme, the control lines used to communicate between controller and
datapath are more thoroughly tested.
REFERENCES
Constructing optimal test schedules for VLSI circuits having built-in test hardware
Finding defects with fault models.
Optimum and heuristic algorithms for an approach to finite state machine decomposition.
State assignment for low power dissipation. IEEE Custom Integrated Circuits Conf.
Automatic synthesis of gated clocks for power reduction in sequential circuits.
Behavioral synthesis for hierarchical testability of controller/datapath circuits with conditional branches. Conf. on Computer Design.
Concurrent control of multiple BIT structures.
A test and maintenance controller for a module containing testable chips.
Synthesis of controllers for full testability of integrated datapath-controller pairs
Testability analysis and insertion for RTL circuits based on pseudorandom BIST.
Behavioral testability insertion for datapath/controller circuits
Compass Design Automation.
MUSTANG: State assignment of finite state machines targeting multilevel logic implementations.
Decomposition and factorization of sequential finite state machines.
A controller-based design-for-testability technique for controller-datapath circuits
Optimized synthesis of self-testable finite state machines
Synthesis for testability of large complexity controllers.
A method for testability insertion at the RTL - behavioral and structural
SYNTEST: An environment for system-level design for test
An efficient procedure for the synthesis of fast self-testable controller structures
A distance reduction approach to design for testability.
A distributed hardware approach to built-in self test
A scheme for overlaying concurrent testing of VLSI circuits.
Test scheduling in testable VLSI circuits.
VLSI and Modern Signal Processing.
An optimized testable architecture for finite state machine.
Architectural partitioning for system level design.
Synthesis of optimal 1-hot coded on-chip controllers for BIST hardware
A scheme for integrated controller-datapath fault testing
Fast identification of robust dependent path delay faults.
High-level synthesis
Activity-sensitive architectural power analysis for the control path
An optimized testable architecture for finite state machines | built-in self-test;register transfer level;synthesis-for-testability |
383271 | A signal-processing framework for inverse rendering. | Realism in computer-generated images requires accurate input models for lighting, textures and BRDFs. One of the best ways of obtaining high-quality data is through measurements of scene attributes from real photographs by inverse rendering. However, inverse rendering methods have been largely limited to settings with highly controlled lighting. One of the reasons for this is the lack of a coherent mathematical framework for inverse rendering under general illumination conditions. Our main contribution is the introduction of a signal-processing framework which describes the reflected light field as a convolution of the lighting and BRDF, and expresses it mathematically as a product of spherical harmonic coefficients of the BRDF and the lighting. Inverse rendering can then be viewed as deconvolution. We apply this theory to a variety of problems in inverse rendering, explaining a number of previous empirical results. We will show why certain problems are ill-posed or numerically ill-conditioned, and why other problems are more amenable to solution. The theory developed here also leads to new practical representations and algorithms. For instance, we present a method to factor the lighting and BRDF from a small number of views, i.e. to estimate both simultaneously when neither is known. | Introduction
To create a realistic computer-generated image, we need both
an accurate, physically-based rendering algorithm and a detailed
model of the scene including light sources and objects specified
by their geometry and material properties-texture and reflectance
(BRDF). There has been substantial progress in the development
of rendering algorithms, and nowadays, realism is often limited by
the quality of input models. As a result, image-based rendering is
becoming widespread. In its simplest form, image-based rendering
uses view interpolation to construct new images from acquired
images without constructing a conventional scene model.
The quality of view interpolation may be significantly improved
if it is coupled with inverse rendering. Inverse rendering measures
rendering attributes-lighting, textures, and BRDF-from
photographs. Whether traditional or image-based rendering algorithms
are used, rendered images use measurements from real ob-
jects, and therefore appear very similar to real scenes. Measuring
scene attributes also introduces structure into the raw imagery,
making it easier to manipulate the scene. For example, an artist can
change independently the material properties or the lighting.
Inverse rendering methods such as those of Debevec et al. [6],
Marschner et al. [21], and Sato et al. [32], have produced high
Real Photograph Rendered Image
Figure
1: Left: Real Photograph Right: Rendered image. The BRDF used for
the rendered image was estimated under complex unknown illumination from 3 photographs
of a cat sculpture with known geometry. Our algorithm also recovered the
lighting distribution, which consisted of two directional sources and an area source.
The images above show a new view not used in BRDF recovery; the lighting is also
new, being composed of a single directional source (with known direction) not used in
BRDF estimation. These images show that the recovered BRDF accurately predicts
appearance even under novel viewing and lighting conditions.
quality measurements. However, most previous work has been conducted
in highly controlled lighting conditions, usually by careful
active positioning of a single point source. Even methods that work
in outdoor conditions, such as those of Yu and Malik [39], Sato and
Ikeuchi [31] and Love [17], are designed specifically for natural
illumination, and assume a simple parametric model for skylight.
Previous methods have also usually tried to recover only one of the
unknowns-texture, BRDF or lighting. The usefulness of inverse
rendering would be greatly enhanced if it could be applied under
general uncontrolled lighting, and if we could simultaneously estimate
more than one unknown. For instance, if we could recover
both the lighting and BRDF, we could determine BRDFs under
unknown illumination. One reason there has been relatively little
work in these areas is the lack of a common theoretical framework
for determining under what conditions inverse problems can and
cannot be solved, and for making principled approximations.
Our main contribution is a theoretical framework for analyzing
the reflected light field from a curved convex homogeneous surface
under distant illumination. We believe this framework provides a
solid mathematical foundation for many areas of graphics. With
respect to inverse rendering, we obtain the following results:
Reflection as Convolution: It has been observed qualitatively by
Miller and Hoffman [23], Cabral et al. [3], Bastos et al. [2] and
others that the reflection operator behaves like a convolution in the
angular domain. We formalize these notions mathematically. The
reflected light field can therefore be thought of in a precise quantitative
way as obtained by convolving the lighting and BRDF, i.e.
by filtering the illumination using the BRDF. We believe this is a
useful way of analyzing many computer graphics problems. In par-
ticular, inverse rendering can be viewed as deconvolution.
Well-posedness and Conditioning of Inverse Problems: Inverse
problems can be ill-posed-there may be no solutions or several
solutions. They are also often numerically ill-conditioned, i.e.
extremely sensitive to noisy input data. From our theory, we are
able to analyze the well-posedness and conditioning of a number
of inverse problems, explaining many previous empirical observa-
tions. This analysis can serve as a guideline for future research.
New Practical Representations and Algorithms: Insights from
the theory lead to the derivation of a simple practical representation,
which can be used to estimate BRDFs under complex lighting. The
theory also leads to novel frequency space and hybrid angular and
frequency space methods for inverse problems, including two new
algorithms for estimating the lighting, and an algorithm for simultaneously
determining the lighting and BRDF. Therefore, we can
recover BRDFs under general, unknown lighting conditions.
Previous Work
To describe previous work, we will introduce a taxonomy based on
how many of the three quantities-lighting, BRDF and texture-
are unknown. To motivate the taxonomy, we first write a simplified
version of the reflection equation, omitting visibility.
Z#
Here, B is the reflected light field, expressed as a function of the
surface position x and outgoing direction # . The normal vector is
#n. For simplicity, we assume that a single texture T modulates the
BRDF. In practice, we would use separate textures for the diffuse
and specular components of the BRDF.
The integrand is a product of terms-the texture T (x), the
and the lighting L(x, # i ). Inverse rendering, assuming
known geometry, involves inverting the integral to recover
one or more of #, L, or T . If two or more quantities are unknown,
inverse rendering involves factoring the reflected light field.
One Unknown
1. Unknown Texture: Previous methods have recovered the
diffuse texture of a surface using a single point light source by
dividing by the irradiance in order to estimate the albedo at each
point. Details are given by Marschner [34] and Levoy et al. [16].
2. Unknown BRDF: The BRDF [24] is a fundamental intrinsic
surface property. Active measurement methods, known as goniore-
flectometry, involving a single point source and a single observation
at a time, have been developed. Improvements are suggested
by Ward [37] and Karner et al. [12]. More recently, image-based
BRDF measurement methods have been proposed by Lu et al. [18]
and Marschner et al. [21]. If the entire BRDF is measured, it may be
represented by tabulating its values. An alternative representation
is by low-parameter models such as those of Ward [37] or Torrance
and Sparrow [36]. The parametric BRDF will generally not be as
accurate as a full measured BRDF. However, parametric models are
often preferred in practice since they are compact, and are simpler
to estimate. Love [17] estimates parametric BRDFs under natural
assuming a low-parameter model for skylight and
sunlight. Dror et al. [7] classify the surface reflectance as one of
a small number of predetermined BRDFs, making use of assumed
statistical characteristics of natural lighting. However, the inverse
BRDF problem has not been solved for general illumination.
3. Unknown Lighting: A common solution is to use a mirrored
ball, as done by Miller and Hoffman [23]. Marschner and Greenberg
[20] find the lighting from a Lambertian surface. D'Zmura [8]
proposes, but does not demonstrate, estimating spherical harmonic
coefficients. For Lambertian objects, we [29] have shown how to
recover the first 9 spherical harmonics. Previous work has not estimated
the lighting from curved surfaces with general BRDFs.
Two Unknowns
4. Factorization-Unknown Lighting and BRDF: BRDF
estimation methods have been proposed by Ikeuchi and Sato [10]
and Tominaga and Tanaka [35] for the special case when the lighting
consists of a single source of unknown direction. However,
these methods cannot simultaneously recover a complex lighting
distribution and the object BRDF. One of the main practical contributions
of this paper is a solution to this problem for curved sur-
allowing us to estimate BRDFs under general unknown illu-
mination, while also determining the lighting. The closest previous
work is that of Sato et al. [30] who use shadows to estimate the
illumination distribution and the surface reflectance properties. We
extend this work by not requiring shadow information, and presenting
improved methods for estimating the illumination.
5. Factorization-Unknown Texture and BRDF: This corresponds
to recovering textured, or spatially-varying BRDFs. Sato
et al. [32] rotate an object on a turntable, using a single point
source, to recover BRDF parameters and texture. Yu et al. [38] recover
a texture only for the diffuse BRDF component, but account
for interreflections. Using a large number of images obtained by
moving a point source around a sphere surrounding the subject,
Debevec et al. [6] acquire the reflectance field of a human face,
and recover parameters of a microfacet BRDF model for each surface
location. Sato and Ikeuchi [31] and Yu and Malik [39] recover
BRDFs and diffuse textures under natural illumination, assuming
a simple parametric model for skylight, and using a sequence of
images acquired under different illumination conditions. Most of
these methods recover only diffuse textures; constant values, or
relatively low-resolution textures, are used for the specular param-
eters. A notable exception is the work of Dana et al. [5] who generalize
BRDFs to a 6D bi-directional texture function (BTF).
6. Factorization-Unknown Lighting and Texture: We
have shown [29] that a distant illumination field can cause only low
frequency variation in the radiosity of a convex Lambertian sur-
This implies that, for a diffuse object, high-frequency texture
can be recovered independently of lighting. These observations are
in agreement with the perception literature, such as Land's retinex
theory [15], wherein high-frequency variation is usually attributed
to texture, and low-frequency variation associated with
tion. However, note that there is a fundamental ambiguity between
low-frequency texture and lighting effects. Therefore, lighting and
texture cannot be factored without using active methods or making
further assumptions regarding their expected characteristics.
General Case: Three Unknowns
7. Factorization-Unknown Lighting, Texture, BRDF:
Ultimately, we wish to recover textured BRDFs under unknown
lighting. We cannot solve this problem without further assump-
tions, because we must first resolve the lighting-texture ambiguity.
Our approach differs from previous work in that it is derived
from a mathematical theory of inverse rendering. As such, it has
similarities to inverse methods used in areas of radiative transfer
and transport theory such as hydrologic optics [26] and neutron
scattering. See McCormick [22] for a review.
In previous theoretical work, D'Zmura [8] has analyzed reflection
as a linear operator in terms of spherical harmonics, and discussed
some resulting perceptual ambiguities between reflectance
and illumination. In computer graphics, Cabral et al. [3] first
demonstrated the use of spherical harmonics to represent BRDFs.
We extend these methods by explicitly deriving the frequency-space
reflection equation (i.e. convolution formula), and by providing
quantitative results for various special cases. We have earlier reported
on theoretical results for planar or flatland light fields [27],
and for determining the lighting from a Lambertian surface [29].
For the Lambertian case, similar results have been derived independently
by Basri and Jacobs [1] in simultaneous work on face
recognition. This paper extends these previous results to the general
3D case with arbitrary isotropic BRDFs, and applies the theory
to developing new practical inverse-rendering algorithms.
Assumptions
The input to our algorithms consists of object geometry and photographs
from a number of different locations, with known extrinsic
and intrinsic camera parameters. We assume static scenes, i.e. that
the object remains stationary and the lighting remains the same between
views. Our method is a passive-vision approach; we do not
actively disturb the environment. Our assumptions are:
Reflected radiance
Coefficients of basis-function expansion of B
Incoming radiance
lm Coefficients of spherical-harmonic expansion of L
Surface BRDF
# BRDF multiplied by cosine of incident angle
# lpq Coefficients of spherical-harmonic expansion of -
Incident elevation angle in local, global coordinates
Incident azimuthal angle in local, global coordinates
Outgoing elevation angle in local, global coordinates
Outgoing azimuthal angle in local, global coordinates
Hemisphere of integration in local,global coordinates
x Surface position
Surface normal parameterization-elevation angle
Surface normal parameterization-azimuthal angle
R # Rotation operator for surface normal (#)
D l
Matrix related to Rotation Group SO(3)
Y lm Spherical Harmonic basis function
lm
Complex Conjugate of Spherical Harmonic
# l Normalization constant,
I # -1
Figure
2: Notation
Known Geometry: We use a laser range scanner and a volumetric
merging algorithm [4] to obtain object geometry. By assuming
known geometry, we can focus on lighting and material properties.
Curved Objects: Our theoretical analysis requires curved sur-
faces, and assumes knowledge of the entire 4D reflected light field,
corresponding to the hemisphere of outgoing directions for all surface
orientations. However, our practical algorithms will require
only a small number of photographs.
Distant Illumination: The illumination field will be assumed to
be homogeneous, i.e. generated by distant sources, allowing us to
use the same lighting function regardless of surface location. We
treat the lighting as a general function of the incident angle.
Isotropic BRDFs: We will consider only surfaces having
isotropic BRDFs. The BRDF will therefore be a function of only 3
variables, instead of 4, i.e. 3D instead of 4D.
interreflection will be
ignored. Also, shadowing is not considered in our theoretical anal-
ysis, which is limited to convex surfaces. However, we will account
for shadowing in our practical algorithms, where necessary.
4 Theory of Reflection as Convolution
This section presents a signal-processing framework wherein reflection
can be viewed as convolution, and inverse rendering as de-
convolution. First, we introduce some preliminaries, defining the
notation and deriving a version of the reflection equation. We then
expand the lighting, BRDF and reflected light field in spherical harmonics
to derive a simple equation in terms of spherical harmonic
coefficients. The next section explores implications of this result.
Incoming Light (L) Outgoing Light (B)
a a
Figure
3: Schematic of reflection. On top, we show the situation with respect to
the local surface. The BRDF maps the incoming light distribution L to an outgoing
light distribution B. The bottom figure shows how the rotation # affects the situation.
Different orientations of the surface correspond to rotations of the upper hemisphere
and BRDF, with global directions corresponding to local directions (# i
4.1 Preliminaries
For the purposes of theoretical analysis, we assume curved convex
isotropic surfaces. We also assume homogeneous objects, i.e.
untextured surfaces, with the same BRDF everywhere. We parameterize
the surface by the spherical coordinates of the normal vector
(#), using the standard convention that corresponds
to the north pole or +Z axis. Notation used in this section
is listed in figure 2, and a diagram is in figure 3. We will use two
types of coordinates. Unprimed global coordinates denote angles
with respect to a global reference frame. On the other hand, primed
local coordinates denote angles with respect to the local reference
frame, defined by the local surface normal. These two coordinate
systems are related simply by a rotation, to be defined shortly.
Reflection Equation: We modify equation 1 based on our as-
sumptions, dropping the texturing term, and using the surface normal
(#) instead of the position x to parameterize B. Since
assumed to be independent of x, we write it as
Finally, ( #
can be written simply as cos # i , the
cosine of the incident angle in local coordinates.
Z#
We have mixed local (primed) and global (unprimed) coordinates.
The lighting is a global function, and is naturally expressed in a
global coordinate frame as a function of global angles. On the
other hand, the BRDF is naturally expressed as a function of the
local incident and reflected angles. When expressed in the local
coordinate frame, the BRDF is the same everywhere for a homogeneous
Similarly, when expressed in the global coordinate
frame, the lighting is the same everywhere, under the assumption
of distant illumination. The reflected radiance B can be expressed
conveniently in either local or global coordinates; we have used local
coordinates to match the BRDF. Similarly, integration can be
conveniently done over either local or global coordinates, but the
upper hemisphere is easier to express in local coordinates.
We now define a transfer 1 function -
in order to absorb
the cosine term. With this modification, equation 2 becomes
Z#
Rotations-Converting Local and Global coordinates:
Local and global coordinates are related by a rotation corresponding
to the surface normal (#). The north pole in local coor-
dinates, (0 # , is the surface normal. The corresponding global
coordinates are clearly (#). We define R# as a rotation operator
2 on column vectors that rotates
and is given by Rz is a rotation about
the Z axis and Ry a rotation about the Y axis.
We can now write the dependence on incident angle in equation 3
entirely in global coordinates, or entirely in local coordinates.
R#
R#
1 If we want the transfer function to be reciprocal, i.e. symmetric with respect to
incident and outgoing angles, we may multiply both the transfer function and the reflected
light field by cos #
2 For anisotropic surfaces, we need an initial rotation about Z to set the local tangent
frame. We would then have rotations about Z, Y and Z-the familiar Euler-
Angle parameterization. Since we are dealing with isotropic surfaces, we have ignored
this initial Z rotation, which has no physical significance. It is not difficult to derive
the theory for the more general anisotropic case.
Interpretation as Convolution: In the spatial domain, convolution
is the result generated when a filter is translated over an
input signal. However, we can generalize the notion of convolution
to other transformations Ta , where Ta is a function of a, and write
Z
When Ta is a translation by a, we obtain the standard expression
for spatial convolution. When Ta is a rotation by the angle a, the
above formula defines convolution in the angular domain.
Therefore, equations 4 and 5 represent rotational convolutions.
Equation 4 in global coordinates states that the reflected light field
at a given surface orientation corresponds to rotating the BRDF to
that orientation, and then integrating over the upper hemisphere.
The BRDF can be thought of as the filter, while the lighting is the
input signal. Symmetrically, equation 5 in local coordinates states
that the reflected light field at a given surface orientation may be
computed by rotating the lighting into the local coordinate system
of the BRDF, and then doing the hemispherical integration.
4.2 Spherical Harmonic Representation
For the translational case, the well-known frequency-space convolution
formula is given in terms of Fourier transforms. For a general
operator, an analogous formula can be obtained in terms of group
representations and the associated basis functions. For translations,
these basis functions are sines and cosines-the familiar Fourier
basis. For rotations, the corresponding basis functions are spherical
harmonics, and we now proceed to derive the frequency-space
rotational convolution formula in terms of spherical harmonics.
Inui et al. [11] is a good reference for background on spherical
harmonics and their relationship to rotations. Our use of spherical
harmonics to represent the lighting is similar in some respects
to previous methods [25] that use steerable linear basis functions.
Spherical harmonics, as well as the closely related Zernike Polyno-
mials, have been used before to represent BRDFs [3, 14, 33].
Spherical harmonics are the analog on the sphere to the Fourier
basis on the line or circle. The spherical harmonic Ylm is given by
s
(l -m)!
(l +m)!
l (cos #)e Im#
where Nlm is a normalization factor. In the above equation, the
azimuthal dependence is expanded in terms of Fourier basis func-
tions. The # dependence is expanded in terms of the associated
Legendre functions P m
l . The indices obey l # 0 and -l # m # l.
The rotation formula for spherical harmonics is
l
D l
mm #)e Im#
The important thing to note here is that the m indices are mixed-a
spherical harmonic after rotation must be expressed as a combination
of other spherical harmonics with different m indices. How-
ever, the l indices are not mixed; rotations of spherical harmonics
with order l are composed entirely of other spherical harmonics
with order l. For given order l, D l is a matrix that tells us how a
spherical harmonic transforms under rotation about the y-axis, i.e.
how to rewrite a rotated spherical harmonic as a linear combination
of all the spherical harmonics of the same order.
We begin by expanding the lighting in global coordinates.
l
m=-l
Here, the coefficients Llm can be computed in the standard way by
integrating against the complex conjugate Y #
lm
Z 2#
We now represent the transfer function -
of spherical harmonics. Note that -
# is nonzero only over the upper
hemisphere, i.e. when cos #
We are interested in isotropic BRDFs, which depend only on
This implies that the BRDF is invariant with respect
to adding a constant angle # to both incident and outgoing azimuthal
angles. It can be shown from the form of the spherical harmonics
that this condition forces all terms to vanish unless
The use of the complex conjugate for Y #
lm in the expansion above
is to make We now write
Furthermore, invariance of the BRDF with respect to negating
both incident and outgoing azimuthal angles requires that -
Finally, we use only three indices for the BRDF.
To represent the reflected light field, we define a new set of orthonormal
basis functions. The normalization and form of these
functions are derived in the appendix. In particular, the matrix
D comes from the rotation formula for spherical harmonics, equation
6. It will be convenient to first define a normalization constant.
Λl = sqrt( 4π / (2l + 1) )

The new basis functions Clmpq can then be written in terms of Λl, the
rotation matrices D^l, and the spherical harmonics of the outgoing direction.
The expansion of the reflected light field is now a sum over these basis
functions weighted by coefficients Blmpq.
The translational convolution theorem expresses convolution in
frequency-space as a product of Fourier coefficients. For the rotational
case, an analogous result is derived in the appendix, using
spherical harmonics instead of complex exponentials. The
frequency-space reflection equation (or rotational convolution formula)
is a similar product of basis-function coefficients,

Blmpq = Λl Llm ρ̂_lpq
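In code, the forward direction of this relation is just a product of coefficients, order by order; the dictionary layout below is our own convention, not the paper's.

import math

def reflected_light_field_coefficients(L, rho):
    """Frequency-space reflection: B_lmpq = Lambda_l * L_lm * rho_lpq.

    L   : dict {(l, m): L_lm}        -- lighting coefficients
    rho : dict {(l, p, q): rho_lpq}  -- BRDF transfer-function coefficients
    Returns dict {(l, m, p, q): B_lmpq}.
    """
    B = {}
    for (l, m), Llm in L.items():
        lam = math.sqrt(4.0 * math.pi / (2 * l + 1))
        for (l2, p, q), r in rho.items():
            if l2 == l:                  # orders l are never mixed
                B[(l, m, p, q)] = lam * Llm * r
    return B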
5 Implications
This section explores the implications of our results for problems
in inverse rendering, and works out some special cases in detail.
Our theory indicates which inverse problems are tractable, as opposed
to being ill-posed or ill-conditioned. Finally, we will use the
insights gained to develop a new practical representation.
5.1 General Observations
Inverse BRDF: Equation 10 can be manipulated to yield

ρ̂_lpq = Blmpq / (Λl Llm)
We may use any index m in inverse BRDF computation. Therefore,
BRDF recovery is well-posed unless the denominator vanishes for
all m, i.e. all terms for some order l in the spherical harmonic expansion
of the lighting vanish. In signal processing terms, if the
input signal (lighting) has no amplitude along certain modes of the
filter (BRDF), those modes cannot be estimated. BRDF recovery is
well conditioned when the lighting contains high frequencies like
directional sources, and is ill-conditioned for soft lighting.
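Numerically, BRDF recovery is then a guarded division; the tolerance and the averaging over usable indices m in this sketch are our own choices, used only to illustrate where the ill-conditioning appears.

import math

def recover_brdf_coefficient(B, L, l, p, q, tol=1e-8):
    """Estimate rho_lpq = B_lmpq / (Lambda_l * L_lm), averaging over usable m.

    B : dict {(l, m, p, q): B_lmpq} of reflected light field coefficients
    L : dict {(l, m): L_lm} of lighting coefficients
    Returns None when all lighting coefficients of order l are (near) zero,
    i.e. when BRDF recovery is ill-posed at this order.
    """
    lam = math.sqrt(4.0 * math.pi / (2 * l + 1))
    estimates = []
    for m in range(-l, l + 1):
        Llm = L.get((l, m), 0.0)
        if abs(Llm) > tol:
            estimates.append(B[(l, m, p, q)] / (lam * Llm))
    if not estimates:
        return None
    return sum(estimates) / len(estimates)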
Inverse Lighting: Equation 10 can also be manipulated to yield

Llm = Blmpq / (Λl ρ̂_lpq)

Similarly as for BRDF recovery, any p, q can be used for inverse
lighting. The problem is well-posed unless the denominator Λl ρ̂_lpq
vanishes for all p, q for some l. In signal processing terms, when
the BRDF filter truncates certain frequencies in the input lighting
signal (for instance, if it were a low-pass filter), we cannot determine
those frequencies from the output signal. Inverse lighting is
well-conditioned when the BRDF has high-frequency components
like sharp specularities, and is ill-conditioned for diffuse surfaces.
Light Field Factorization-Lighting and BRDF: We now
consider the problem of factorizing the light field, i.e. simultaneously
recovering the lighting and BRDF when both are unknown.
The reflected light field is defined on a four-dimensional domain
while the lighting is a function of two dimensions and the isotropic
BRDF is defined on a three-dimensional domain. This seems to indicate
that we have more knowns (in terms of coefficients of the reflected
light field) than unknowns (lighting and BRDF coefficients).
For fixed order l, we can use known lighting coefficients Llm
to find unknown BRDF coefficients ρ̂_lpq and vice-versa. In fact,
we need only one known nonzero lighting or BRDF coefficient to
bootstrap this process. It would appear from equation 10, however,
that there is an unrecoverable scale factor for each order l, corresponding
to the known coefficient we require. But, we can also use
reciprocity of the BRDF. To make the transfer function symmetric,
we multiply it, as well as the reflected light field B, by cos θ'_o.
The new transfer function is symmetric with respect to incident
and outgoing directions, and corresponding indices: ρ̂_lpq = ρ̂_plq.
There is a global scale factor we cannot recover, since B is not
affected if we multiply the lighting and divide the BRDF by the
same amount. Therefore, we scale the lighting so the DC term
L00 = 1/√(4π). Now, using equations 11, 12, and 13, the lighting
coefficients Llm and the BRDF coefficients ρ̂_lpq can be written
directly in terms of the coefficients of the reflected light field;
in the resulting expression, we can use any value of m. This gives an explicit
formula for the lighting and BRDF in terms of coefficients of the
output light field. Therefore, up to global scale, the reflected light
field can be factored into the lighting and the BRDF, provided
the appropriate coefficients of the reflected light field do not vanish.
5.2 Special Cases
Mirror BRDF: The mirror BRDF corresponds to a gazing
sphere. Just as the inverse lighting problem is easily solved in angular
space in this case, we will show that it is well-posed and easily
solved in frequency space. The BRDF involves a delta function.
Note that the BRDF is nonzero only when θ'_i ≤ π/2 and the outgoing
direction is the mirror reflection of the incident direction. The
coefficients for the BRDF, reflected light field, and lighting can then
be written down explicitly.
Figure 4: Left: Successive approximations to the clamped cosine function obtained by adding
more spherical harmonic terms; a small number of terms already gives a very good approximation.
Right: The solid line is a plot of the spherical harmonic coefficients Âl = Λl Al. For l > 1,
odd terms vanish, and even terms decay rapidly.
The factor of (−1)^q is because the azimuthal angle changes by π
upon reflection. We see that the lighting coefficients correspond in
a very direct way to the coefficients of the reflected light field. In
signal processing terminology, the inverse lighting problem is well
conditioned because the frequency spectrum of a delta function remains
constant with increasing order l, and does not decay.
Single Directional Source: For convenience, we position the
coordinate axes so that the source is located at +Z, i.e. at (0, 0).
Because the directional source is described by a delta function,
the spherical harmonic expansion coefficients are given simply by
Llm = Ylm(0), which vanishes for m ≠ 0. Thus, the BRDF coefficients
follow directly as ρ̂_lpq = Bl0pq.
In angular space, a single observation corresponds to a single
BRDF measurement. This property is used in image-based BRDF
measurement [18, 21]. We see that in frequency space, there is a
similar straightforward relation between BRDF coefficients and reflected
light field coefficients. BRDF recovery is well-conditioned
since we are estimating the BRDF filter from its impulse response.
Lambertian BRDF: For a Lambertian object, the transfer function
is a scaled clamped cosine function, since it is proportional to
the cosine of the incident angle over the upper hemisphere, i.e. when
cos θ'_i > 0, and is equal to 0 over the lower hemisphere. Plots
of spherical-harmonic fits to the clamped cosine function and the
magnitude of the coefficients are shown in figure 4. Because there
is no dependence on outgoing angle, we can drop the indices p and
q. Further, the reflected light field is now effectively the surface
radiosity function, and can be expanded 3 in spherical harmonics,

B(α, β) = Σ_{l=0}^{∞} Σ_{m=−l}^{l} Blm Ylm(α, β)
We [29] have shown that with the definition

Al = 2π ∫_0^{π/2} Yl0(θ'_i) cos θ'_i sin θ'_i dθ'_i

one can derive

Blm = Λl Al Llm

We define Âl = Λl Al. An analytic formula for Al may be derived
[29]. It can be shown that Âl vanishes for odd values of l > 1,
3 The basis functions Clmpq in equation 9 become D^l_{m0}(β) e^{imα} (up to the
normalization Λl) if we ignore output dependence, and set p = q = 0 (the BRDF is azimuthally symmetric). It
can be shown that this is simply Ylm(α, β). Equation 15 now follows naturally
from equation 10 upon dropping indices p and q. Our previous derivation [29] was
specialized to the Lambertian case, and ignored the output dependence from the onset.
and even terms fall off very rapidly as l^(−5/2). More than 99% of the
energy of the BRDF filter is captured by l ≤ 2. Numerically, Â0 ≈ 3.14,
Â1 ≈ 2.09, Â2 ≈ 0.79, Â3 = 0, and Â4 ≈ −0.13.
Thus, the Lambertian BRDF acts like a low-pass filter, truncating
or severely attenuating frequencies with l > 2. Therefore,
from observations of a Lambertian surface, estimation of the illumination
is formally ill-posed, and is well-conditioned only for the
lighting coefficients with l ≤ 2, corresponding to 9 parameters-1
for order 0, 3 for order 1, and 5 for order 2 (l = 0, 1, 2). This explains the ill-conditioning
observed by Marschner and Greenberg [20] in trying
to solve the inverse lighting problem from a Lambertian surface.
Furthermore, for practical applications, including forward rendering
[28], the reflected light field from a Lambertian surface can
be characterized using only its first 9 spherical harmonic coef-
ficients; lighting effects cannot produce high-frequency variation
in intensity with respect to surface curvature.
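A minimal sketch of this 9-coefficient characterization follows, assuming real-form spherical harmonics and the standard numerical constants for the first three bands; the function name and data layout are ours, not the paper's.

import numpy as np

# Filtered coefficients A_hat_l for l = 0, 1, 2 (the values quoted above)
A_HAT = [np.pi, 2.0 * np.pi / 3.0, np.pi / 4.0]

def irradiance(L, n):
    """Irradiance at a surface point with unit normal n = (x, y, z),
    from the nine real spherical-harmonic lighting coefficients
    L[(l, m)] with l <= 2. Real-form basis functions are used."""
    x, y, z = n
    Y = {
        (0,  0): 0.282095,
        (1, -1): 0.488603 * y, (1, 0): 0.488603 * z, (1, 1): 0.488603 * x,
        (2, -2): 1.092548 * x * y, (2, -1): 1.092548 * y * z,
        (2,  0): 0.315392 * (3.0 * z * z - 1.0),
        (2,  1): 1.092548 * x * z, (2, 2): 0.546274 * (x * x - y * y),
    }
    return sum(A_HAT[l] * L[(l, m)] * Y[(l, m)] for (l, m) in Y)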
Phong BRDF: The normalized Phong transfer function is

ρ̂ = ((s + 1)/(2π)) (R · L)^s

where R is the reflection of the outgoing (viewing) direction about
the surface normal, L is the direction to the light source, and s is
the shininess, or Phong exponent. The normalization ensures the
Phong lobe has unit energy. Technically, we must also zero the
BRDF when the light is not in the upper hemisphere. However, the
Phong BRDF is not physically based, so others have often ignored
this boundary effect, and we will do the same.
We now reparameterize by the reflection vector R, transforming
the integral over the upper hemisphere centered on the surface normal
to an integral centered on R. The reflection vector takes the
place of the normal in the analysis, with (α, β) now referring to the
orientation of R, and θ'_i to the angle between the incident direction
and R. The Phong BRDF after reparameterization is mathematically
analogous to the Lambertian BRDF just discussed. In
fact, the properties of convolution can be used to show that for the
Phong BRDF, blurring the lighting and using a mirror BRDF is
equivalent to using the real lighting and real BRDF. This formalizes
the transformation often made in rendering with environment
maps [23]. Specifically, equation 15 can be written as

Blm = Λl ρ̂'l L'lm

Here, L'lm is the blurred illumination and ρ̂'l is the mirror BRDF 4.
The BRDF coefficients depend on s, and are given by an integral of the
normalized Phong lobe against the corresponding order-l spherical harmonic
over the upper hemisphere. This integral may be solved analytically.
Formulae are in the appendix, and numerical plots are in figure 5.
Figure 5: Numerical plots of the Phong coefficients Λl ρ̂l, as defined by equation 18.
The solid lines are the approximations in equation 19.
4 The formula for Λl ρ̂'l is not identical to equation 14 since we have now reparameterized
by the reflection vector. This accounts for the slightly different normalization.
For large s and l ≪ s, a good approximation is

Λl ρ̂l ≈ exp( −l² / (2s) )
The coefficients fall off as a gaussian with width of order # s. The
Phong BRDF behaves in the frequency domain like a gaussian fil-
ter, with the filter width controlled by the shininess. Therefore,
inverse lighting calculations will be well-conditioned only up to
order √s. As s approaches infinity, Λl ρ̂l → 1 for all l, and the frequency
spectrum becomes constant, corresponding to a perfect mirror.
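The following sketch compares a direct numerical evaluation of the coefficients of a unit-energy cos^s lobe with the Gaussian approximation above; the particular coefficient integral used here is our own restatement, not a copy of equation 18.

import numpy as np
from scipy.special import eval_legendre
from scipy.integrate import quad

def phong_coefficient(l, s):
    """Coefficient of a normalized Phong lobe (s+1)/(2 pi) cos^s(theta),
    obtained by projecting onto the order-l Legendre polynomial
    (our own statement of the coefficient integral)."""
    integrand = lambda t: np.cos(t) ** s * eval_legendre(l, np.cos(t)) * np.sin(t)
    val, _ = quad(integrand, 0.0, np.pi / 2.0)
    return (s + 1.0) * val

def phong_coefficient_approx(l, s):
    """Gaussian approximation, valid for large s."""
    return np.exp(-l * l / (2.0 * s))

if __name__ == "__main__":
    s = 100.0
    for l in range(0, 21, 4):
        print(l, phong_coefficient(l, s), phong_coefficient_approx(l, s))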
Microfacet BRDF: We now consider a simplified 4-parameter
Torrance-Sparrow [36] model, with parameters Kd, Ks, μ and σ.
This microfacet model is widely used in computer graphics.
The specular component is proportional to F·S divided by 4 cos θ'_i cos θ'_o, where
F is a Fresnel factor and S is a Gaussian microfacet distribution,
S ∝ exp(−θ_h²/σ²); the diffuse component is simply Kd.
The subscript h stands for the half-way vector. F(μ, θ'_o) is the
Fresnel term for refractive index μ; we normalize it to be 1 at normal
exitance. Actually, F depends on the angle with respect to the
half-way vector; in practice, this angle is usually very close to θ'_o.
For simplicity in the analysis, we have omitted the geometric attenuation
factor G. In practice, this omission is not very significant
except for observations made at grazing angles, which are usually
assigned low confidence anyway in practical applications.
We focus on the specular component, reparameterizing by the
reflection vector, as for the Phong BRDF. It will also simplify matters
to fix the exitant direction, and focus on the frequency-space
representation of the incident-angle dependence. Precise analytic
formulae are difficult to derive, but we can make a good approxi-
mation, as shown in the appendix. For normal exitance, the specular
coefficients fall off as a Gaussian, Λl ρ̂l ∝ exp( −(σl)² ).
For normal exitance, the specular part of the BRDF is a gaussian,
so equation 20 simply states that even in the spherical-harmonic
basis, the frequency spectrum of a gaussian is also gaussian, with
the frequency width related to the reciprocal of the angular width.
For non-normal exitance, microfacet BRDFs are not symmetric
about the reflection vector. Unlike for the Phong BRDF, there is a
preferred direction, determined by the exitant angle. However, the
BRDF filter is essentially symmetric about the reflected direction
for small viewing angles, as well as for low frequencies l. Hence, it
can be shown by Taylor-series expansions and verified numerically,
that the corrections to equation 20 are small under these conditions.
Finally, we approximate the effects of the Fresnel factor at non-normal
exitance by multiplying our expressions by F(μ, θ'_o).
With respect to the conditioning of inverse problems, equation
20 indicates that inverse lighting from a microfacet BRDF is
well-conditioned only for frequencies up to order l ~ σ⁻¹. Equation 20
also indicates that BRDF estimation is ill-conditioned under
low-frequency lighting. For low-frequency lighting, we may apply
the properties of convolution as we did for Phong BRDFs, filtering
the lighting using equations 17 and 20, while using a mirror BRDF.
Note that for frequencies l ≪ σ⁻¹, the effects of this filtering
are insignificant. The BRDF passes through virtually all the low-frequency
energy. Therefore, if the lighting contains only low
frequencies, the reflected light field from a microfacet BRDF
is essentially independent of the BRDF filter width σ⁻¹; this
makes estimation of the surface roughness σ ill-conditioned.
5.3 Practical Representation
Thus far, we have presented the theoretical foundation for, and
some implications of, a frequency-space view of reflection. A signal
processing approach has been used before in some other areas
of computer graphics, notably the theory of aliasing. Just as a
frequency-space analysis of aliasing gives many insights difficult
to obtain by other means, the last two sections lead to new ways of
analyzing inverse rendering problems. However, the Fourier-space
theory of aliasing is not generally used directly for antialiasing. The
ideal Fourier-space bandpass filter in the spatial domain, the sinc
function, is usually modified for practical purposes because it has
infinite extent and leads to ringing. Similarly, representing BRDFs
purely as a linear combination of spherical harmonics leads to ring-
ing. Moreover, it is difficult to compute Fourier spectra from sparse
irregularly sampled data. Similarly, it is difficult to compute the reflected
light field coefficients Blmpq from a few photographs; we
would require a very large number of input images, densely sampling
the entire sphere of possible directions.
For these reasons, the frequency-space ideas must be put into
practice carefully. Here, we first discuss two useful practical
techniques-dual angular and frequency-space representations,
and the separation of the lighting into slow and fast-varying com-
ponents. Finally, we use these ideas, and the insights gained from
the previous subsection, to derive a simple practical model of the
reflected light field for the microfacet BRDF. This representation
will be used extensively in the practical algorithms of section 6.
Dual Angular and Frequency-Space Representations:
Quantities local in angular space have broad frequency spectra and
vice-versa. By developing a frequency-space view of reflection,
we ensure that we can use either the angular-space or frequency-space
representation, or even a combination of the two. The diffuse
BRDF component is slowly varying in angular-space, but is local
in frequency-space, while the specular BRDF component is local in
the angular domain. For representing the lighting, the frequency-space
view is appropriate for the diffuse BRDF component, while
the angular-space view is appropriate for the specular component.
Separation of slow and fast-varying lighting: For the
angular-space description of the lighting, used in computing the reflected
light field from the specular BRDF component, we separate
the lighting into a slow varying component corresponding to low
frequencies or area sources-for which we filter the lighting and
use a mirror BRDF-and a fast varying component corresponding
to high frequencies or directional sources. For the frequency-space
lighting description, used for the diffuse BRDF component, this
distinction need not be made since the formulae for the Lambertian
BRDF are the same for both slow and fast varying components.
Model for Reflected Light Field: Our model for the reflected
light field from the microfacet BRDF includes three components.
B = Bd + Bs,slow + Bs,fast
Bd is from the diffuse component of the BRDF. B s,slow represents
specularities from the slowly-varying lighting, and B s,fast specular
highlights from the fast varying lighting component.
To write Bd , corresponding to the Lambertian BRDF compo-
nent, we use the 9 parameter frequency-space representation of the
lighting. Explicitly noting l ≤ 2, and with E being the irradiance,

Bd = Kd E(α, β),   E(α, β) = Σ_{l=0}^{2} Σ_{m=−l}^{+l} Âl Llm Ylm(α, β)

The numerical values of Âl = Λl Al are given in equation 16.
For B s,slow , we filter the lighting, using equations 17 and 20,
and treat the BRDF as a mirror. With R denoting the reflected
direction, and Lslow the filtered version of the lighting, we obtain

Bs,slow = Ks F(μ, θ'_o) Lslow(R)
For the fast varying portion of the lighting-corresponding to
sources of small angular width-we treat the total energy of the
source, given by an integral over the (small) solid angle subtended,
as located at its center, so the lighting is a delta function. Bs,fast
is given by the standard equation for the specular highlight from a
directional source. The extra factor of 4 cos θ'_o in the denominator
as compared to equation 22 comes from the relation between
differential microfacet and global solid angles.
The subscript j denotes a particular directional source; there could
be several. Note that Lj,fast is now the total energy of the source.
For BRDF estimation, it is convenient to expand out these equa-
tions, making dependence on the BRDF parameters explicit.
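Since equation 24 itself is not reproduced in this text, the sketch below only captures the structure of the three-component model; the callables standing in for equations 21-23 are placeholders of our own, and the exact constants are omitted.

import numpy as np

def reflect(view, n):
    """Mirror the (unit) viewing direction about the (unit) surface normal."""
    return 2.0 * np.dot(view, n) * n - view

def reflected_radiance(Kd, Ks, n, view, irradiance, L_slow, fast_highlight):
    """Structure of the model B = Bd + Bs,slow + Bs,fast.

    irradiance(n)           -- diffuse shading from the 9 lighting coefficients (equation 21)
    L_slow(R)               -- slowly varying lighting, pre-filtered by exp(-(sigma*l)^2),
                               looked up at the reflection vector (equation 22)
    fast_highlight(n, view) -- sum of standard specular highlights from the
                               directional sources (equation 23)
    The three callables are placeholders for the paper's equations; only the
    decomposition into three terms is taken from the text.
    """
    R = reflect(view, n)
    return Kd * irradiance(n) + Ks * L_slow(R) + Ks * fast_highlight(n, view)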
6 Algorithms and Results
This section shows how the theory, and in particular the model just
derived in section 5.3, can be applied to a broad range of practical
inverse rendering problems. We present two types of methods-
algorithms that recover coefficients of a purely frequency-space description
of the lighting or BRDF by representing these quantities
as a sum of spherical harmonic terms, and algorithms that estimate
parameters corresponding to our model of section 5.3. Section 6.1
on BRDF estimation demonstrates direct recovery of spherical harmonic
BRDF coefficients, as well as estimation of parametric microfacet
BRDFs using equation 24. Similarly, section 6.2 demonstrates
direct recovery of spherical harmonic lighting coefficients,
as well as estimation of a dual angular and frequency-space lighting
description as per the model of section 5.3. Section 6.3 shows how
to combine BRDF and lighting estimation techniques to simultaneously
recover the lighting and BRDF parameters, when both are
unknown. In this case, we do not show direct recovery of spherical
harmonic coefficients, as we have thus far found this to be imprac-
tical. Finally, section 6.4 demonstrates our algorithms on geometrically
complex objects, showing how it is straightforward to extend
our model to handle textures and shadowing.
To test our methods, we first used homogeneous spheres 5 of different
materials. Spheres are naturally parameterized with spherical
coordinates, and therefore correspond directly to our theory.
Later, we also used complex objects-a white cat sculpture, and a
textured wooden doll-to show the generality of our algorithms.
Data Acquisition: We used a mechanical gantry to position an
inward-looking Toshiba IK-TU40A CCD(x3) camera on an arc of
radius 60cm. Calibration of intrinsics was done by the method of
Zhang [40]. Since the camera position was computer-controlled,
extrinsics were known. The mapping between pixel and radiance
values was also calibrated. We acquired 60 images of the target
sphere, taken at 3 degree intervals. To map from image pixels to
angular coordinates (α, β, θ'_o, φ'_o), we used image silhouettes to
find the geometric location of the center of the sphere and its radius.
Our gantry also positioned a 150W white point source along an
arc. Since this arc radius (90 cm) was much larger than the sphere
(between 1.25 and 2cm), we treated the point source as a directional
light. A large area source, with 99% of its energy in low-frequency
modes of order l ≤ 6, was obtained by projecting white
light on a projection screen. The lighting distribution was determined
using a gazing sphere. This information was used directly
for experiments assuming known illumination, and as a reference
solution for experiments assuming unknown illumination.
We also used the same experimental setup, but with only the
point source, to measure the BRDF of a white teflon sphere using
the image-based method of Marschner et al. [21]. This independent
measurement was used to verify the accuracy of our BRDF
estimation algorithms under complex illumination.
5 Ordered from the McMaster-Carr catalog http://www.mcmaster.com
6.1 Inverse BRDF with Known Lighting
Estimation of Spherical Harmonic BRDF coefficients:
Spherical harmonics and Zernike polynomials have been fit [14] to
measured BRDF data, but previous work has not tried to estimate
coefficients directly. Since the BRDF is linear in the coefficients
ρ̂_lpq, we simply solve a linear system to determine ρ̂_lpq.
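In practice this is an ordinary least-squares problem; the sketch below assumes a caller-supplied routine that evaluates, for each observation, the known factor multiplying each unknown coefficient (that routine, and the data layout, are ours).

import numpy as np

def estimate_brdf_coefficients(observations, pixel_values, coeff_indices, basis_factor):
    """Solve the linear system B ~ A * rho for spherical-harmonic BRDF coefficients.

    observations  : list of per-pixel records (surface orientation, viewing direction, ...)
    pixel_values  : array of observed radiances, one per record
    coeff_indices : list of (l, p, q) index triples to recover
    basis_factor  : callable(obs, (l, p, q)) -> the known coefficient of rho_lpq for
                    that observation (it folds in the lighting and the basis functions;
                    its implementation is not shown here)
    """
    A = np.array([[basis_factor(obs, idx) for idx in coeff_indices]
                  for obs in observations])
    rho, residuals, rank, _ = np.linalg.lstsq(A, np.asarray(pixel_values), rcond=None)
    return dict(zip(coeff_indices, rho))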
Figure
6 compares the parametric BRDFs estimated under complex
lighting to BRDFs measured using a single point source with
the method of Marschner et al. [21]. As expected [14], the recovered
BRDFs exhibit ringing. One way to reduce ringing is to attenuate
high-frequency coefficients. According to our theory, this
is equivalent to using low frequency lighting. Therefore, as seen
in figure 6, images rendered with low-frequency lighting do not
exhibit ringing and closely match real photographs, since only the
low-frequency components of the BRDF are important. However,
images rendered using directional sources show significant ringing.
Figure 6: Top: Slices of the BRDF transfer function of a teflon sphere for a fixed
exitant angle of 63°. The incident angle θ'_i varies linearly from 0° to 90° from top to
bottom, and the azimuthal angle difference |φ'_o − φ'_i| varies linearly from 0° to 360°
from left to right. The central bright feature is the specular highlight. Left is the BRDF
slice independently measured using the approach of Marschner et al. [21], middle is the
recovered value using a maximum order 6, and right is the recovered version for order 12.
Ringing is apparent in both recovered BRDFs. The right version is sharper, but exhibits
more pronounced ringing.
Bottom: Left is an actual photograph; the lighting is low-frequency from a large area
source. Middle is a rendering using the recovered BRDF for order 6 and the same
lighting. Since the lighting is low-frequency, only low-frequency components of the
BRDF are important, and the rendering appears very similar to the photograph even
though the recovered BRDF does not include frequencies higher than order 6. Right
is a rendering with a directional source at the viewpoint, and exhibits ringing.
For practical applications, it is usually more convenient to recover
low-parameter BRDF models since these are compact, can
be estimated from fewer observations, and do not exhibit ringing.
In the rest of this section, we will derive improved inverse rendering
algorithms, assuming our parametric BRDF model.
Estimation of Parametric BRDF Model: We estimate
BRDF parameters under general known lighting distributions using
equation 24. The inputs are images that sample the reflected light
field B. We perform the estimation using nested procedures. In the
outer procedure, a simplex algorithm adjusts the nonlinear parameters
μ and σ to minimize error with respect to image pixels. In
the inner procedure, a linear problem is solved for Kd and Ks . For
numerical work, we use the simplex method e04ccc and linear
solvers f01qcc and f01qdc in the NAG [9] C libraries. The
main difference from previous work is that equation 24 provides a
principled way of accounting for all components of the lighting and
BRDF, allowing for the use of general illumination conditions.
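A rough SciPy/NumPy analogue of this nested procedure is sketched below (Nelder-Mead standing in for the NAG simplex routine, and numpy.linalg.lstsq for the linear solvers); the two basis callables that encode the per-pixel lighting factors are placeholders of our own.

import numpy as np
from scipy.optimize import minimize

def fit_microfacet_brdf(pixels, diffuse_basis, specular_basis, mu0=1.5, sigma0=0.1):
    """Nested estimation of (Kd, Ks, mu, sigma) from observed pixel radiances.

    pixels                     : 1-D array of observed radiances
    diffuse_basis()            : callable -> per-pixel factor multiplying Kd
    specular_basis(mu, sigma)  : callable -> per-pixel factor multiplying Ks
    Both basis routines are placeholders standing in for the expanded model.
    """
    pixels = np.asarray(pixels)

    def inner_linear_fit(mu, sigma):
        # Linear least squares for Kd, Ks with the nonlinear parameters fixed.
        A = np.column_stack([diffuse_basis(), specular_basis(mu, sigma)])
        (Kd, Ks), *_ = np.linalg.lstsq(A, pixels, rcond=None)
        rms = np.sqrt(np.mean((A @ np.array([Kd, Ks]) - pixels) ** 2))
        return Kd, Ks, rms

    def objective(params):
        mu, sigma = params
        return inner_linear_fit(mu, abs(sigma))[2]

    res = minimize(objective, x0=[mu0, sigma0], method="Nelder-Mead")
    mu, sigma = res.x[0], abs(res.x[1])
    Kd, Ks, _ = inner_linear_fit(mu, sigma)
    return Kd, Ks, mu, sigma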
We tested our algorithm on the spheres. Since the lighting includes
high and low-frequency components (a directional source
and an area source), the theory predicts that parameter estimation
is well-conditioned. To validate our algorithm, we compared parameters
recovered under complex lighting for one of the sam-
ples, a white teflon sphere, to those obtained by fitting to the full
BRDF separately measured by us using the method of Marschner et
al. [21]. Unlike most previous work on BRDF estimation, we consider
the Fresnel term. It should be noted that accurate estimates
for the refractive index μ require correct noise-free measurements
at grazing angles. Since these measurements tend to be the most
error-prone, there will be small errors in the estimated values of μ
for some materials. Nevertheless, we find the Fresnel term important
for reproducing accurate specular highlights at oblique angles.
Parameter Our Method Fit to Data
Reflectance 0.86 0.87
Kd/(Kd +Ks) 0.89 0.91
Ks/(Kd +Ks) 0.11 0.09
RMS 9.3% 8.5%
Figure
7: Comparison of BRDF parameters recovered by our algorithm under complex
lighting to those fit to measurements made by the method of Marschner et al. [21].
The results in figure 7 show that the estimates of BRDF parameters
from our method are quite accurate, and there is only a small
increase in the error-of-fit when using parameters recovered by our
algorithm to fit the measured BRDF. We also determined percentage
RMS errors between images rendered using recovered BRDFs
and real photographs to be between 5 and 10%. A visual comparison
is shown in the first and third rows of figure 12. All these results
indicate that, as expected theoretically, we can accurately estimate
BRDFs even under complex lighting.
6.2 Inverse Lighting with Known BRDF
Previous methods for estimating the lighting have been developed
only for the special cases of mirror BRDFs (a gazing sphere), Lambertian
BRDFs (Marschner and Greenberg [20]), and when shadows
are present (Sato et al. [30]). Previous methods [20, 30] have
also required regularization using penalty terms with user-specified
weights, and have been limited by the computational complexity
of their formulations to a coarse discretization of the sphere.
We present two new algorithms for curved surfaces with general
BRDFs. The first method directly recovers spherical harmonic
lighting coefficients Llm . The second algorithm estimates parameters
of the dual angular and frequency-space lighting model of
section 5.3. This method requires no explicit regularization, and
yields high-resolution results that are sharper than those from the
first algorithm, but is more difficult to extend to concave surfaces.
The theory tells us that inverse lighting is ill-conditioned for
high-frequencies. Therefore, we will recover only low-frequency
continuous lighting distributions, and will not explicitly account
for directional sources, i.e. we assume that B s,fast = 0. The reflected
light field is essentially independent of the surface roughness
σ under these conditions, so our algorithms do not explicitly
use σ. The theory predicts that the recovered illumination will be a
filtered version of the real lighting. Directional sources will appear
as continuous distributions of angular width approximately σ.
Estimation of Spherical Harmonic Lighting coefficients:
We represent the lighting by coefficients Llm with l ≤ l*, and solve
a linear least-squares system for Llm. The first term in parentheses
below corresponds to Bd, and the second to Bs,slow. The cutoff l*
is used for regularization, and should be of order σ⁻¹.
B = Σ_{l=0}^{l*} Σ_{m=−l}^{l} Llm ( Kd Âl Ylm(α, β) + Ks e^{−(σl)²} Ylm(R) )
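A sketch of the corresponding least-squares solve is given below; the routine that evaluates the bracketed per-pixel factors is left as a caller-supplied placeholder, since it depends on the geometry and the known BRDF.

import numpy as np

def estimate_lighting_coefficients(pixels, lm_indices, transfer_row):
    """Least-squares recovery of lighting coefficients L_lm, l <= l*.

    pixels       : array of observed radiances, one per pixel
    lm_indices   : list of (l, m) pairs up to the regularization cutoff l*
    transfer_row : callable(pixel_index, (l, m)) -> the known factor multiplying
                   L_lm for that pixel (diffuse term plus filtered specular term);
                   this routine is a placeholder for the bracketed expression above.
    """
    A = np.array([[transfer_row(i, lm) for lm in lm_indices]
                  for i in range(len(pixels))])
    L, *_ = np.linalg.lstsq(A, np.asarray(pixels), rcond=None)
    return dict(zip(lm_indices, L))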
Estimation of Parametric Dual Lighting Model: Another
approach is to estimate the dual angular and frequency-space lighting
model of section 5.3. Our algorithm is based on subtracting out
the diffuse component Bd of the reflected light field. After this,
we treat the object as a mirror sphere, recovering a high-resolution
angular-space version of the illumination from the specular component
alone. To determine Bd , we need only the 9 lowest frequency-space
coefficients Llm with l ≤ 2. Our algorithm uses the following
methods to convert between angular and frequency-space:
Figure 8: Estimation of the dual lighting representation. In phase 1, we use frequency-space
parameters L¹lm to compute the diffuse component B¹d. This is subtracted from the
input image, leaving the specular component, from which the angular-space lighting
is found. In phase 2, we compute coefficients L²lm which can be used to determine
B²d. The consistency condition is that B¹d = B²d, or L¹lm = L²lm. In this and all
subsequent figures, the lighting is visualized by unwrapping the sphere so θ ranges in
equal increments from 0 to π from top to bottom, and φ ranges in equal increments
from 0 to 2π from left to right (so the image wraps around in the horizontal direction).
1. 9 parameters to High-Resolution Lighting: The inputs to
phase 1 are the coefficients L¹lm. These suffice to find B¹d by
equation 21. Since we assumed that Bs,fast = 0,

Lslow(R) = (B − B¹d) / (Ks F(μ, θ'_o))

We assume the BRDF parameters are known, and B is the
input to the algorithm, so the right-hand side can be evaluated.
2. High-Resolution Lighting to 9 parameters: Using the angular-space
values L found from the first phase, we can easily
find the 9 frequency-space parameters of the lighting L²lm.
Now, assume we run phase 1 (with inputs L¹lm) and phase 2
(with outputs L²lm) sequentially. The consistency condition is that
L²lm = L¹lm-converting from frequency to angular to frequency
space must not change the result. Equivalently, the computed diffuse
components must match, i.e. B¹d(L¹lm) = B²d(L²lm). This is
illustrated in figure 8. Since everything is linear in terms of the
lighting coefficients, the consistency condition reduces to a system
of 9 simultaneous equations. After solving for Llm, we run phase
1 to determine the high-resolution lighting in angular space.
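Because the two phases form an affine map of the nine coefficients, the consistency condition can be assembled by probing that map and solving a 9 x 9 linear system; the sketch below assumes a caller-supplied routine that runs phase 1 followed by phase 2 on a candidate coefficient vector.

import numpy as np

def solve_consistency(phase1_phase2, dim=9):
    """Solve L = phase1_phase2(L) for the nine lighting coefficients.

    phase1_phase2 : callable mapping a length-9 vector L1 to the length-9
                    vector L2 produced by running phase 1 and phase 2.
                    It is assumed to be affine in its argument.
    """
    c = phase1_phase2(np.zeros(dim))              # constant part of the affine map
    M = np.column_stack([phase1_phase2(e) - c     # linear part, one column per basis vector
                         for e in np.eye(dim)])
    # Consistency: L = M @ L + c  ->  (I - M) L = c
    return np.linalg.solve(np.eye(dim) - M, c)

Probing with unit vectors keeps the sketch independent of how the two phases are implemented internally.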
Figure
9 compares the methods to each other, and to a reference
solution from a gazing sphere. Both algorithms give reasonably
accurate results. As predicted by the theory, high-frequency
components are filtered by the roughness σ. In the first method,
involving direct recovery of Llm, there will still be some residual
energy for l > l*. Since we regularize by not considering
higher frequencies-we could increase l*, but this makes the result
noisier-the recovered lighting is somewhat blurred compared to
our dual angular and frequency-space algorithm (second method).
Figure 9: Comparison of inverse lighting methods. From left to right: real lighting
(from a gazing sphere), recovered illumination by direct estimation of spherical harmonic
coefficients with cutoff l*, and recovered illumination by estimation of the dual angular
and frequency-space lighting model. To make the artifacts more apparent, we have set 0
to gray. The results from the dual algorithm are sharper, but still somewhat blurred
because of filtering by σ. A small amount of ringing occurs for direct coefficient
recovery. Using a larger cutoff l* makes the solution very noisy.
6.3 Factorization-Unknown Lighting and BRDF
We can combine the inverse-BRDF and inverse-lighting methods
to factor the reflected light field, simultaneously recovering the
lighting and BRDF when both are unknown. Therefore, we are
able to recover BRDFs of curved surfaces under unknown complex
illumination, something which has not previously been demon-
strated. There is an unrecoverable global scale factor, so we set
Kd + Ks = 1; we cannot find absolute reflectance. Also, the
theory predicts that for low-frequency lighting, estimation of the
surface roughness σ is ill-conditioned-blurring the lighting while
sharpening the BRDF does not significantly change the reflected
light field. However, for high-frequency lighting, this ambiguity
can be removed. We will use a single manually specified directional
source in the recovered lighting distribution to estimate σ.
Algorithm: The algorithm consists of nested procedures. In
the outer loop, we effectively solve an inverse-BRDF problem-a
nonlinear simplex algorithm adjusts the BRDF parameters to
minimize error with respect to image pixels. Since Kd
and # will not be solved for till after the lighting and other BRDF
parameters have been recovered, there are only
Ks and -. In the inner procedure, a linear problem is solved to
estimate the lighting for a given set of BRDF parameters, using the
methods of the previous subsection. Pseudocode is given below.
global Binput              // Input images
global Kd, Ks, μ, σ        // BRDF parameters
global L                   // Lighting

procedure Factor
    Minimize(Ks, μ, ObjFun)                  // Simplex Method
    Kd = 1 − Ks
    σ = FindSigma(L)                         // Figure 10, Equation 26

function ObjFun(Ks, μ)
    Kd = 1 − Ks
    L = InverseLighting(Binput, Kd, Ks)      // Lighting
    Bpred = PredictLightField(L, Kd, Ks, μ)  // Light Field
    return RMS(Binput, Bpred)                // RMS Error
Finding σ using a directional source: If a directional
source is present-and manually specified by us in the recovered
lighting-we can estimate σ by equating specular components predicted
by equations 22 and 23 for the center, i.e. brightest point, of
the light source at normal exitance. An illustration is in figure 10.
Figure
10: We manually specify (red box) the region corresponding to the directional
source in a visualization of the lighting. The algorithm then determines Lcen ,
the intensity at the center (brightest point), L tot the total energy integrated over the
region specified by the red box, and computes # using equation 26. The method does
not depend on the size of the red box-provided it encloses the entire (filtered) source-
nor the precise shape into which the source is filtered in the recovered lighting.
Results: We used the method of this subsection-with the dual
angular and frequency-space algorithm for inverse lighting-to factor
the light field for the spheres, simultaneously estimating the
BRDF and lighting. The same setup and lighting were used for
all the spheres so we could compare the recovered illumination.
We see from figure 11 that the BRDF estimates under unknown
lighting are accurate. Absolute errors are small, compared to parameters
recovered under known lighting. The only significant
anomalies are the slightly low values for the refractive index μ,
caused because we cannot know the high-frequency lighting com-
ponents, which are necessary for more accurately estimating the
Fresnel term. We are also able to estimate a filtered version of the
lighting. As shown in figure 12, the recovered lighting distributions
from all the samples are largely consistent. As predicted by
the theory, the directional source is spread out to different extents
depending on how rough the surface is, i.e. the value of σ. Finally,
figure 12 shows that rendered images using the estimated lighting
and BRDF are almost indistinguishable from real photographs.
Material Kd Ks μ σ
Known Unknown Known Unknown Known Unknown Known Unknown
Teflon
Delrin 0.87 0.88 0.13 0.12 1.44 1.35 0.10 0.11
Neoprene Rubber 0.92 0.93 0.08 0.07 1.49 1.34 0.10 0.10
Sandblasted Steel 0.20 0.14 0.80 0.86 0.20 0.19
Bronze
Painted (.62,.71,.62) (.67,.75,.64) 0.29 0.25 1.38 1.15 0.15 0.15
Figure
11: BRDFs of various spheres, recovered under known (section 6.1) and unknown (section 6.3) lighting. The reported values are normalized so Kd + Ks = 1; RGB
values are reported for colored objects. We see that Ks is much higher for the more specular metallic spheres, and that σ is especially high for the rough sandblasted sphere. The
Fresnel effect is very close to 1 for metals, so we do not consider the Fresnel term for these spheres.
[Figure 12 panels: real and rendered images of the Teflon, Delrin, Sandblasted, Bronze, and Painted spheres under known and unknown lighting, together with the real lighting, a filtered version of the real lighting, and the recovered lighting.]
Figure
12: Spheres rendered using BRDFs estimated under known (section 6.1) and
unknown (section 6.3) lighting. The algorithm in section 6.3 also recovers the lighting.
Since there is an unknown global scale, we scale the recovered lighting distributions
in order to compare them. The recovered illumination is largely consistent between
all samples, and is similar to a filtered version of the real lighting. As predicted by
the theory, the different roughnesses σ cause the directional source to be spread out to
different extents. The filtered source is slightly elongated or asymmetric because the
microfacet BRDF is not completely symmetric about the reflection vector.
6.4 Complex Objects-Texture and Shadowing
We now demonstrate our algorithms on objects with complex ge-
ometry, and discuss extensions to handle concave surfaces and textured
objects. Although the theory is developed for homogeneous
surfaces, our algorithms can be extended to textured objects simply
by letting the BRDF parameters be functions of surface position. It
would appear that concave regions, where one part of the surface
may shadow another, are a more serious problem since our theory
is developed for convex objects and assumes no self-shadowing.
However, using our new practical model of section 5.3, we will
see that the extensions necessary mainly just involve checking for
shadowing of the reflected ray and directional sources, which are
routine operations in a raytracer.
Shadowing-Concave Surfaces: In our practical model, the
reflected light field consists of 3 parts-Bd , B s,slow , and B s,fast .
Bs,slow depends on Lslow(R), the slowly-varying component of
the lighting evaluated at the reflection vector. Our model allows us
to approximate the effects of shadowing simply by checking if the
reflected ray is shadowed. The other components are handled in the
standard manner. To consider shadowing when computing B s,fast ,
corresponding to specularities from directional sources, we check
if these sources are shadowed. Bd depends on the irradiance E,
which should now be computed in the more conventional angular-
space way by integrating the scene lighting while considering vis-
ibility, instead of using the 9-parameter lighting approximation of
equation 21. It should be emphasized that in all cases, the corrections
for visibility depend only on object geometry, and can be
precomputed for each point on the object using a ray tracer.
For parametric BRDF estimation, we modify each component of
equation 24 to consider visibility, as discussed above. Our first inverse
lighting method, that directly recovers the coefficients Llm , is
modified similarly. In equation 25, we check if the reflected ray is
shadowed, and consider shadowing when computing the irradiance
due to each Ylm . Note that B is still a linear combination of the
lighting coefficients, so we will still solve a linear system for Llm .
However, it is difficult to extend our dual angular and frequency-space
method for inverse lighting to handle concave surfaces because
Bd no longer depends only on the 9 lighting coefficients Llm
with l ≤ 2. For light field factorization, we simply extend both the
inverse-BRDF and inverse-lighting methods as discussed.
A white cat sculpture was used to test our algorithms on complex
geometric objects that include concavities. Geometry was acquired
using a Cyberware range scanner and aligned to the images by manually
specifying correspondences. The lighting was slightly more
complex than that for the spheres experiment; we used a second
directional source in addition to the area source.
To show that we can recover BRDFs using a small number of
images, we used only 3 input photographs. We recovered BRDFs
under both known lighting, using the method of section 6.1, and
unknown lighting-using the factorization method of section 6.3,
with the inverse lighting component being direct recovery of spherical
harmonic coefficients up to a low cutoff l*. Comparisons of photographs
and renderings are in figures 1 and 13. BRDF and lighting
parameters are tabulated in figure 14. This experiment indicates
that our methods for BRDF recovery under known and unknown
lighting are consistent, and are accurate even for complex lighting
and geometry. The rendered images are very close to the original
photographs, even under viewing and lighting conditions not used
for BRDF recovery. The most prominent artifacts are because of
imprecise geometric alignment and insufficient geometric resolu-
tion. For instance, since our geometric model does not include the
eyelids of the cat, that feature is missing from the rendered images.
Textured BRDFs: Since the theory shows that factorization of
lighting and texture is ambiguous, we consider only recovery of
textured BRDFs under known lighting. It is fairly straightforward
to allow Kd (x) and Ks (x) to be described by textures that depend
on surface position x. In the inner procedure of the parametric
BRDF estimation algorithm of section 6.1, we simply solve a separate
linear problem for each point x to estimate Kd (x) and Ks(x).
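Per surface point this is a two-unknown least-squares problem; in the sketch below, the per-observation diffuse and specular factors (which would come from the known lighting and the fixed μ and σ) are supplied by placeholder callables of our own.

import numpy as np

def estimate_texture(pixel_obs, diffuse_factor, specular_factor):
    """Recover per-point Kd(x), Ks(x) under known lighting.

    pixel_obs       : dict {x: array of observed radiances at surface point x}
    diffuse_factor  : callable(x) -> array of per-observation factors multiplying Kd(x)
    specular_factor : callable(x) -> array of per-observation factors multiplying Ks(x)
    The two factor routines are placeholders for the known-lighting terms.
    """
    Kd, Ks = {}, {}
    for x, b in pixel_obs.items():
        A = np.column_stack([diffuse_factor(x), specular_factor(x)])
        (kd, ks), *_ = np.linalg.lstsq(A, np.asarray(b), rcond=None)
        Kd[x], Ks[x] = kd, ks
    return Kd, Ks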
As an experimental test, we used a wooden doll. We compared
the real input photographs with images rendered using the recovered
textured BRDF. We also took a photograph of the same object
under a single directional source and compared this to a rendering
using the textured BRDF recovered under complex illumination.
The results in figure 15 show that our renderings closely resemble
real photographs. The main artifact is blurring of texture because
of geometry-image misregistration.
7 Conclusions and Future Work
This paper has developed a signal-processing framework for inverse
rendering. The qualitative observation that the reflected light
field is a convolution of the lighting and BRDF has been formalized
mathematically. We have shown in frequency-space why a gazing
sphere is well-suited for recovering the lighting-the frequency
spectrum of the mirror BRDF (a delta function) is constant-and
Figure
13: Comparison of real photographs (middle column) to images rendered
using BRDFs recovered under known lighting (left column), and using BRDFs (and
lighting) estimated under unknown lighting (right column). The top row is one of the
3 input views. The bottom row is a new view, not used for BRDF estimation. Note
that in the top row, we have composited the left and right renderings over the same
background as the middle photograph in order to make a meaningful comparison.
Parameter Known Lighting Unknown Lighting
BRDF Parameters
K d 0.88 0.90
Ks 0.12 0.10
Lighting Coefficients (l,m)
Figure
14: BRDF and lighting parameters for the cat sculpture. We see good agreement
between BRDF parameter values recovered with known and unknown lighting,
showing our methods are consistent. Note that we normalize so Kd + Ks = 1.
We may also check the accuracy of the recovered lighting. Since there is an unknown
global scale for the recovered values, we report normalized lighting coefficient values
for the first 9 spherical harmonic coefficients (in real form), which are the most
important, because they significantly affect the diffuse component of the BRDF.
why a directional source is well-suited for recovering the BRDF-
we are estimating the BRDF filter by considering its impulse re-
sponse. The conditioning properties and well-posedness of BRDF
and lighting estimation under various conditions have been de-
rived, as well as an explicit formula for factoring the reflected light
field into the lighting and BRDF. The ill-conditioning observed by
Marschner and Greenberg [20] in estimating the lighting from a
Lambertian surface has been explained, and we have shown that
factorization of lighting effects and low-frequency texture is am-
biguous. All these results indicate that the theory provides a useful
analytical tool for studying the properties of inverse problems.
The insights gained from the theory also lead to a new practical
representation. We can numerically represent quantities in angular
or frequency space, depending on where they are more lo-
cal. This leads to new algorithms which are often expressed in a
combination of angular and frequency-space. We can determine
which BRDF and lighting parameters are important, and can handle
the various components appropriately. For BRDF estimation,
the parametric recovery algorithms of Yu and Malik [39], Sato
and Ikeuchi [31], and Love [17]-which are designed specifically
for natural lighting-can be seen as special cases of this general
[Figure 15 panels: real and rendered images for one view in the original input sequence, and for the same view under novel lighting.]
Figure
15: Recovering textured BRDFs under complex lighting. The rendered images
closely resemble the real photographs, even under novel lighting.
approach; they treat sunlight (high-frequency) and skylight (low-frequency)
separately. We provide a general framework for arbitrary
illumination, and also determine conditions under which parameter
recovery is robust. For instance, our theory predicts that
estimation of σ is ill-conditioned on a cloudy day, with only low-frequency
lighting. Our framework can also be applied to developing
new frequency-space algorithms to estimate the lighting from
objects with general BRDFs. The use of frequency-space naturally
handles continuous lighting distributions. Our dual angular
and frequency-space algorithm effectively reduces the problem for
general BRDFs to that for a gazing sphere, requires no explicit reg-
ularization, and allows much higher resolutions to be obtained than
with previous purely angular-space methods [20, 30]. Finally, we
demonstrate a method for factoring the light field to simultaneously
estimate the lighting and BRDF. This allows us to estimate BRDFs
of geometrically complex objects under unknown general lighting,
which has not previously been demonstrated.
We have only scratched the surface of possible applications. In
the future, it is likely that many more algorithms can be derived
using the basic approaches outlined here. Possible algorithmic improvements
include extending the consistency condition for inverse
lighting so we can use color-space methods [13] to help separate
diffuse and specular components for colored objects. Finally, while
we have discussed only inverse rendering applications, we believe
the convolution-based approach is of theoretical and practical importance
in many other areas of computer graphics. We have already
shown [28] how to use the 9 term irradiance approximation
for efficient forward rendering of diffuse objects with environment
maps, and we believe there are many further applications.
Acknowledgements: We are grateful to Marc Levoy for many helpful initial
discussions regarding both the interpretation of reflection as convolution and the
practical issues in inverse rendering, as well as for reading a draft of the paper. We
thank Szymon Rusinkiewicz for many suggestions regarding this research, for reading
early versions of the paper, and for being ever willing to assist us in debugging problems
with the gantry. Steve Marschner also deserves thanks for detailed comments on
many early drafts. Jean Gleason at Bal-tec assisted in supplying us with our mirror
sphere. Finally, we thank John Parissenti at Polytec Products for much useful advice
on obtaining uniform spheres, and for getting one of our steel spheres sandblasted.
The cat is a range scan of the sculpture Serenity by Sue Dawes. The work described in
this paper was supported in part by a Hodgson-Reed Stanford graduate fellowship and
NSF ITR grant #0085864 "Interacting with the Visual World."
References
Lambertian reflectance and linear subspaces.
Increased photorealism for interactive architectural walkthroughs.
Bidirectional reflection functions from surface bump maps.
A volumetric method for building complex models from range images.
Reflectance and texture of real-world surfaces
Acquiring the reflectance field of a human face.
Estimating surface reflectance properties from images under unknown illumination.
Computational Models of Visual Processing
Numerical Algorithms Group.
Determining reflectance properties of an object using range and brightness images.
Group theory and its applications in physics.
An image based measurement system for anisotropic reflection.
The measurement of highlights in color images.
Phenomenological description of bidirectional surface reflection.
Lightness and retinex theory.
Surface Reflection Model Estimation from Naturally Illuminated Image Sequences.
Optical properties (bidirectional reflection distribution functions) of velvet.
Spherical harmonics
Inverse lighting for photography.
Inverse radiative transfer problems: a review.
Simulated objects in simulated and real environments.
Geometric Considerations and Nomenclature for Reflectance.
Efficient re-rendering of naturally illuminated environments
Hydrologic Optics.
Analysis of planar light fields from homogeneous convex curved surfaces under distant illumination.
An efficient representation for irradiance environment maps.
On the relationship between radiance and irradiance: Determining the illumination from images of a convex lambertian object.
Illumination distribution from brightness in shadows: adaptive estimation of illumination distribution with unknown reflectance properties in shadow regions.
Reflectance analysis under solar illumination.
Object shape and reflectance modeling from observation.
A global illumination solution for general reflectance distributions.
Inverse Rendering for Computer Graphics.
Estimating reflection parameters from a single color image.
Theory for off-specular reflection from roughened surfaces
In SIGGRAPH 92
Inverse global illumination: Recovering reflectance models of real scenes from photographs.
Recovering photometric properties of architectural scenes from photographs.
A flexible new technique for camera calibration.
Hendrik P. A. Lensch , Michael Goesele , Yung-Yu Chuang , Tim Hawkins , Steve Marschner , Wojciech Matusik , Gero Mueller, Realistic materials in computer graphics, ACM SIGGRAPH 2005 Courses, July 31-August | light field;irradiance;inverse rendering;illumination;BRDF;radiance;spherical harmonics;signal processing |
383321 | Synthesizing sounds from physically based motion. | This paper describes a technique for approximating sounds that are generated by the motions of solid objects. The technique builds on previous work in the field of physically based animation that uses deformable models to simulate the behavior of the solid objects. As the motions of the objects are computed, their surfaces are analyzed to determine how the motion will induce acoustic pressure waves in the surrounding medium. Our technique computes the propagation of those waves to the listener and then uses the results to generate sounds corresponding to the behavior of the simulated objects. | Figure
1: The top image shows a multi-exposure image from an
animation of a metal bowl falling onto a hard surface. The lower
image shows a spectrogram of the resulting audio for the first five
impacts.
Although the field of computer graphics traditionally focuses on
generating visuals, our perception of an environment encompasses
other modalities in addition to visual appearance. Because these
other modalities play an integral part in forming our impression of
real-world environments, the graphics goal of creating realistic synthetic
environments must also encompass techniques for modeling
the perception of an environment through our other senses. For ex-
ample, sound plays a large role in determining how we perceive
events, and it can be critical to giving the user/viewer a sense of
immersion.
The work presented in this paper addresses the problem of automatically
generating physically realistic sounds for synthetic en-
vironments. Rather than making use of heuristic methods that are
specific to particular objects, our approach is to employ the same
simulated motion that is already being used for generating animated
video to also generate audio. This task is accomplished by
analyzing the surface motions of objects that are animated using
a deformable body simulator, and isolating vibrational components
that correspond to audible frequencies. The system then determines
how these surface motions will generate acoustic pressure waves in
the surrounding medium and models the propagation of those waves
to the listener. For example, a finite element simulation of a bowl
dropping onto the floor was used to compute both the image shown
in
Figure
1 and the corresponding audio.
Assuming that the computational cost of physically based animation
is already justified for the production of visuals, the additional
cost of computing the audio with our technique is negligible. The
technique does not make use of specialized heuristics, assumptions
about the shape of the objects, or pre-recorded sounds. The audio is
generated automatically as the simulation runs and does not require
any user interaction. Although we feel that the results generated
with this technique are suitable for direct use in many applications,
nothing precludes subsequent modification by another process or a
Foley artist in situations where some particular effect is desired.
The remaining sections of this paper provide a detailed description
of the sound generation technique we have developed, a review
of related prior work, several examples of the results we have
obtained, and a discussion of potential areas for future work. Presenting
audio in a printed medium poses obvious difficulties. We
include plots that illustrate the salient features of our results, and
the proceedings video tape and DVD include animations with the
corresponding audio.
Background
Prior work in the graphics community on sound generation and
propagation has focussed on efficiently producing synchronized
soundtracks for animations [18, 30], and on correctly modeling re-
flections and transmissions within the sonic environment [14, 15,
21]. In their work on modeling tearing cloth, Terzopoulos and
Fleischer generated soundtracks by playing a pre-recorded sound
whenever a connection in a spring mesh failed [31]. The DIVA
project endeavored to create virtual musical performances in virtual
spaces using physically derived models of musical instruments
and acoustic ray-tracing for spatialization of the sound sources [27].
Funkhouser and his colleagues used beam tracing algorithms and
priority rules to efficiently compute the direct and reflected paths
from sound sources to receivers [15]. Van den Doel and Pai mapped
analytically computed vibrational modes onto object surfaces allowing
interactive sound generation for simple shapes [34]. Richmond
and Pai experimentally derived modal vibration responses using
robotic measurement systems, for interactive re-synthesis using
modal filters [26]. More recent work by van den Doel, Kry, and
Pai uses the output of a rigid body simulation to drive re-synthesis
from the recorded data obtained with their robotic measurement
system [33].
Past work outside of the graphics community on simulating the
acoustics of solids for the purpose of generating sound has centered
largely on the study of musical systems, such as strings, rigid bars,
membranes, piano sound boards, and violin and guitar bodies. The
techniques used include finite element and finite difference meth-
ods, lower dimensional simplifications, and modal/sinusoidal models
of the eigenmodes of sound-producing systems.
Numerical simulations of bars and membranes have used either
finite difference [3, 8, 10] or finite element methods [4, 5, 24]. Finite
differencing approaches have also been used to model the behavior
of strings [6, 7, 25].
Many current real-time techniques model the modes of acoustical
systems, using resonant filters[1, 9, 35, 37] or additive sinusoidal
synthesis [28]. In essence, modal modeling achieves efficiency by removing the spatial dynamics, and by replacing the
actual physical system by an equivalent mass-spring system which
models the same spectral response. However, the dynamics (in particular
the propagation of disturbances) of the original system are
lost. If the modal shapes are known, the spatial information can be
maintained and spatial interactions can remain meaningful.
If certain assumptions are made, some systems can be modeled
in reduced dimensionality. For example, if it is assumed that a drum
head or the top-plate of a violin is very thin, a two-dimensional
mesh can be used to simulate the transverse vibration [36].
In many systems such as strings and narrow columns of air,
vibration can be considered one-dimensional, with the principal
modes oriented along only one axis. McIntyre, Schumacher and
Woodhouse's time-domain modeling technique has proven to be
very useful in simulating musical instruments with a resonant system
which is well approximated by the one-dimensional wave equation
[20]. Such systems exhibit the d'Alembert solution, which is
a decomposition of the one-dimensional wave equation into left-
going and a right-going traveling wave components. Smith introduced
extensions to the idea taken from scattering filter theory
and coined the term Waveguide Filters for simulations based on
this one-dimensional signal processing technique [29]. The waveguide
wave-equation formulation can be modified to account for frequency
dependent propagation speed due to stiffness, as described
in [11]. In this technique, the propagation speeds around the eigenmodes
of the system are modeled accurately, with errors introduced
in damping at frequencies other than the eigenmodes.
When considering that the various efficient techniques described
above are available, it should be noted that the approximations are
flawed from a number of perspectives. For example, except for
strings and simple bars, most shapes are not homogeneous. It is
important to observe that for even moderate inhomogeneity, re-
flections at the points of changing impedance have to be expected
that are not captured in a straightforward way. Further, the wave-equation
and Euler-Bernoulli equation derivations also assume that
the differential equations governing solid objects are linear. As a
result the above methods can produce good results but only under
very specific conditions. The method that we describe in the next
section requires more computation than most of the above tech-
niques, but it is much more general.
3 Sound Modeling
When solid objects move in a surrounding medium, they induce
disturbances in the medium that propagate outward. For most fluid
media, such as air or water, the significance of viscous effects decreases
rapidly with distance, and the propagating disturbance has
the characteristics of a pressure wave. If the magnitude of the
pressure wave is of moderate intensity so that shock waves do not
form and the relationship between pressure fluctuation and density
change is approximately linear, then the waves are acoustic waves
described by the equation

∂²p/∂t² = c² ∇²p,   (1)

where t is time, p is the acoustic pressure defined as the difference
between the current pressure and the fluid's equilibrium pressure,
and c is the acoustic wave speed (speed of sound) in the fluid. Fi-
nally, if the waves reach a listener at a frequency between about
20 Hz and 20,000 Hz, they will be perceived as sound. (See Chapter
five of the text by Kinsler et al. [19] for a derivation of Equation
(1).)
The remainder of this section describes our technique for approximating
the sounds that are generated by the motions of solid
objects. The technique builds on previous work in the field of physically
based animation that uses deformable models to simulate the
Figure 2: A schematic overview of joint audio and visual rendering. (Diagram blocks: Deformable Object Simulator, Extract Surface Vibrations, Motion Data, Compute Wave Propagation, Image Renderer, Sound Renderer.)
Figure 3: Tetrahedral mesh for an F#3 vibraphone bar. In (a), only the external faces of the tetrahedra are drawn; in (b) the internal structure is shown. Mesh resolution is approximately 1 cm.
behavior of the objects. As the motion of the solid objects is com-
puted, their surfaces are analyzed to determine how the motion will
induce acoustic pressure waves in the surrounding media. The system
computes the propagation of those waves to the listener and
then uses the results to generate sounds corresponding to the simulated
behavior. (See Figure 2.)
3.1 Motions of Solid Objects
The first step in our technique requires computing the motions of
the animated objects that will be generating sounds. As these motions
are computed, they will be used to generate both the audio and
visual components of the animation.
Our system models the motions of the solid objects using a
non-linear finite element method similar to the one developed by
O'Brien and Hodgins [22, 23]. This method makes use of tetrahedral
elements with linear basis functions to compute the movement
and deformation of three-dimensional, solid objects. (See Figure 3.)
Green's non-linear finite strain metric is used so that the method can
accurately handle large magnitude deformations. A volume-based
penalty method computes collision forces that allow the objects to
interact with each other and with other environmental constraints.
For the sake of brevity, we omit the details of this method for modeling
deformable objects which are adequately described in [22].
We selected this particular method because it is reasonably fast,
reasonably accurate, easy to implement, and treats objects as solids
rather than shells. However, the sound generation process is largely
independent of the method used to generate the object motion. So
long as it fulfills a few basic criteria, another method for simulating
deformable objects could be selected instead. These criteria are
Temporal Resolution - Vibrations at frequencies as high
as about 20,000 Hz generate audible sounds. If the simulation
uses an integration time-step larger than approximately
10^-5 s, then it will not be able to adequately model high frequency
vibrations.
Dynamic Deformation Modeling - Most of the sounds that
an object generates as it moves arise from vibrations driven by
elastic deformation. These vibrations will not be present with
techniques that do not model deformation (e.g. rigid body
simulators). Similarly, these vibrations will not be present
with inertia-less techniques.
Surface Representation - Because the surfaces of the objects
are where vibrations transition from the objects to the
surrounding medium, the simulation technique must contain
some explicit representation of the object surfaces.
Physical Realism - Simulation techniques used for physically
based animation must produce motion that is visually
acceptable for the intended application. Generating sounds
from the motion will reveal additional aspects of the motion
that may not have been visibly apparent, so a simulation
method used to generate audio must compute motion that is
accurate enough to sound acceptable in addition to looking
acceptable.
The tetrahedral finite element method we are using meets all of
these criteria, but so do many other methods that are commonly
used for physically based animation. For example, a mass and
spring system would be suitable provided the exterior faces of the
system were defined.
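To make these criteria concrete, the following sketch (our own illustration, not code from the paper) steps a small damped mass-spring chain with a symplectic Euler integrator at a time-step fine enough to resolve audible frequencies and records the velocity of a surface node; the chain layout, masses, stiffness, and damping are arbitrary placeholder values.

```python
import numpy as np

# Minimal damped mass-spring chain used as a stand-in deformable simulator.
# All parameters are placeholder values (not from the paper).
n, mass, k_spring, damping = 8, 1e-3, 5e3, 1e-4
dt = 1e-6                       # integration time-step, below the ~1e-5 s threshold noted above
rest = 0.01                     # rest length between neighbouring nodes (m)
x = np.arange(n) * rest         # node positions along the chain
v = np.zeros(n)
v[-1] = 1.0                     # "pluck" the free end with an initial velocity

surface_velocity = []           # velocity history of a surface node (drives sound generation)
for step in range(100_000):     # 0.1 s of simulated time
    stretch = np.diff(x) - rest             # spring elongations
    f = np.zeros(n)
    f[:-1] += k_spring * stretch            # spring pulls the left node toward its right neighbour
    f[1:]  -= k_spring * stretch            # and the right node toward its left neighbour
    f -= damping * v                        # simple velocity damping
    v += dt * f / mass
    v[0] = 0.0                              # clamp the first node
    x += dt * v
    surface_velocity.append(v[-1])          # this signal feeds the pipeline of Section 3.2
```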
3.2 Surface Vibrations
Once we have a method for computing the motion of the objects,
the next step in the process requires analyzing the surface's motions
to determine how it will affect the pressure in the surrounding fluid.
Let Γ be the surface of the moving object(s), and let ds be a differential
surface element in Γ with unit normal n̂ and velocity v. If
we neglect viscous shear forces then the acoustic pressure, p, of the
fluid adjacent to ds is given by

p = z (v · n̂),   (2)

where z = ρc is the fluid's specific acoustic impedance. From [19],
the density of air at 20°C under one atmosphere of pressure is
ρ ≈ 1.21 kg/m³ and the acoustic wave speed is c ≈ 343 m/s,
giving z ≈ 415 kg/(m²·s).
Representing the pressure field over Γ requires some form of
discretization. We will assume that a triangulated approximation of
Γ exists (denoted Γ̃) and we will approximate the pressure field
as being constant over each of the triangles in Γ̃.
Each triangle is defined by three nodes. The position, x, and
velocity, ẋ, of each node are computed by a physical simulation
method as discussed in Section 3.1. We will refer to the nodes of a
given triangle by indexing with square brackets. For example, x[2]
is the position in world coordinates of the triangle's second node.
The surface area of each triangle is given by

a = |(x[2] - x[1]) × (x[3] - x[1])| / 2

and its unit normal by

n̂ = ((x[2] - x[1]) × (x[3] - x[1])) / (2a).

The average pressure over the triangle is computed by substituting
the triangle's normal and average velocity, v̄, into Equation (2) so
that

p̄ = z (v̄ · n̂).
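A minimal sketch of this per-triangle pressure computation (our own illustration); the impedance value z ≈ 415 kg/(m²·s) is the air value quoted above, and the example node positions and velocities are placeholders standing in for the simulator's output.

```python
import numpy as np

Z_AIR = 415.0  # specific acoustic impedance of air, rho * c (kg / m^2 s)

def triangle_pressure(x, xdot, z=Z_AIR):
    """Average acoustic pressure over one surface triangle.

    x    -- (3, 3) array of node positions (one row per node)
    xdot -- (3, 3) array of node velocities
    Returns (pressure, area, unit_normal, centre).
    """
    e1, e2 = x[1] - x[0], x[2] - x[0]
    cross = np.cross(e1, e2)
    area = 0.5 * np.linalg.norm(cross)      # a = |(x[2]-x[1]) x (x[3]-x[1])| / 2
    normal = cross / (2.0 * area)           # unit normal
    v_avg = xdot.mean(axis=0)               # average node velocity
    p_bar = z * np.dot(v_avg, normal)       # Equation (2) with the averaged velocity
    centre = x.mean(axis=0)
    return p_bar, area, normal, centre

# Placeholder triangle: a small patch whose nodes move along +z.
x = np.array([[0.0, 0.0, 0.0], [0.01, 0.0, 0.0], [0.0, 0.01, 0.0]])
xdot = np.array([[0.0, 0.0, 0.2], [0.0, 0.0, 0.25], [0.0, 0.0, 0.15]])
print(triangle_pressure(x, xdot))
```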
The variable p̄ tells us how the pressure over a given triangle
fluctuates, but we are only interested in fluctuations that correspond
to frequencies in the audible range. Frequencies above this range
will need to be removed or they will cause aliasing artifacts.1 Lower
1We assume that the simulation time-step is smaller than the audio sampling
period. This is the case for our examples which used an integration
time-step between 10^-6 s and 10^-7 s.
Figure 4: One-dimensional accumulation buffer used to account for travel time delay. (Diagram labels: Current Time, Pressure Impulse, Time Delay.)
frequencies will not cause aliasing problems, but they will interact
with later computations to create other difficulties. For example,
an object moving at a constant rate will generate a large, constant
pressure in front of it. The corresponding constant term will show
up as an inconvenient offset in the final audio samples. More im-
portantly, it may interact with latter visibility computations to create
unpleasant artifacts. To remove undesirable frequency components,
we make use of two filters that are applied to the pressure variable
at each triangle in .
First, a low-pass filter is applied to p̄ to remove high frequencies.
The low-pass filter is implemented using a normalized kernel, K,
built from a windowed sinc function given by

K[i] ∝ sin(2π fmax i Δt) / (π i Δt),  i = -w, …, w,   (6)

where fmax is the highest frequency to be retained, Δt is the simulation
time-step, and w is the kernel's half-width. The low-pass
filtered pressure, ḡ, is obtained by convolving p̄ with K and sub-sampling
the result down to audio rate.
The second filter is a DC-blocking filter that will remove any
constant component and greatly attenuate low-frequency ones. It
works by differentiating a signal and then re-integrating the signal
using a lossy integrator. The final filtered pressure, p̃, after application
of the DC-blocking filter is given by

p̃[i] = ḡ[i] - ḡ[i-1] + α p̃[i-1],

where α is a loss constant between zero and one, ḡ is the low-pass
filtered pressure, and the subscripts index time at audio rate.
For the examples presented in this paper, fmax was 22,050 Hz
and we sub-sampled to an audio rate of 44,100 Hz. The low-pass
filter kernel's half-width was three times the wavelength of fmax
(i.e., w = ⌈3/(fmax Δt)⌉). The value of α was selected by trial and
error yielding good results.
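The two filtering stages could be sketched as follows (our own illustration); the particular window function and the value of the loss constant α are assumptions, since they are not restated here.

```python
import numpy as np

def lowpass_kernel(f_max, dt, half_width):
    """Normalized, windowed sinc kernel for the low-pass stage."""
    i = np.arange(-half_width, half_width + 1)
    kernel = np.sinc(2.0 * f_max * i * dt)     # sin(2*pi*f_max*t) / (2*pi*f_max*t)
    kernel *= np.hanning(len(kernel))          # window choice is an assumption
    return kernel / kernel.sum()               # normalize to unit DC gain

def filter_pressure(p, sim_dt, audio_rate=44_100, f_max=22_050.0, alpha=0.995):
    """Low-pass, sub-sample to (approximately) audio rate, then DC-block one signal."""
    w = int(np.ceil(3.0 / (f_max * sim_dt)))   # half-width: three periods of f_max, in steps
    g = np.convolve(p, lowpass_kernel(f_max, sim_dt, w), mode="same")
    step = int(round(1.0 / (audio_rate * sim_dt)))
    g = g[::step]                              # sub-sample down to audio rate
    # DC blocker: differentiate, then re-integrate with a lossy integrator.
    p_tilde = np.zeros_like(g)
    for k in range(1, len(g)):
        p_tilde[k] = g[k] - g[k - 1] + alpha * p_tilde[k - 1]
    return p_tilde
```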
3.3 Wave Radiation and Propagation
Once we know the pressure distribution over the surface of the
objects we must compute how the resulting wave propagates outward
towards the listener. The most direct way of accomplishing
this task would involve modeling the region surrounding the objects
with Equation (1), and using the pressure field over as prescribed
boundary conditions. This approach would lead to a coupled
solid/fluid simulation. Unfortunately, the additional cost of
the fluid simulation would not be trivial. Instead, we can make a
few simplifying assumptions and use a much more efficient solution
method.
Huygens' principle states that the behavior of a wavefront may
be modeled by treating every point on the wavefront as the origin
of a spherical wave, which is equivalent to stating that the behavior
of a complex wavefront can be separated into the behavior of a set
of simpler ones [17]. Using this principle, we can approximate the
Figure
5: Spectrum generated by plucking the free end of a
clamped bar. Predicted values are taken from [19].
result of propagating a single pressure wave outward from by
summing the results of a many simpler waves, each propagating
from one of the triangles in .
If we assume that the environment is anechoic (no reflections)
and we ignore the effect of diffraction around obstacles, then a reasonable
approximation for the effect on a distant receiver of the
pressure wave generated by a triangle in Γ̃ is given by

s = p̃ a cos(θ) v_{x→r} / ||x - r||,   (10)

where r is the location of the receiver, x is the center of the triangle,
θ is the angle between the triangle's surface normal and the vector
r - x, and v_{x→r} is a visibility term that is one if an unobstructed
ray can be traced from x to r and zero otherwise.2 The cos(θ) is
a rough approximation to the first lobe of the frequency-dependent
beam function for a flat plate [19].
Equation (10) is nearly identical to similar equations that are
used in image rendering with local lighting models, and the decision
to ignore reflected and diffracted sound waves is equivalent
to ignoring secondary illumination. A minor difference is that the
falloff term is inversely proportional to distance, not to distance
squared. The sound intensity, measured in energy per unit time and
area, does falloff with distance squared, but eardrums and microphones
react to pressure which is proportional to the square-root of
intensity [32].
2The center of a triangle is computed by averaging the locations of its
vertices and low-pass filtering the result using the sinc kernel from Equation
(6). Likewise, the normal is obtained from low-pass filtered vertex lo-
cations. The filtering is necessary because the propagation computations are
performed at audio rate, not simulation rate, and incorrectly sub-sampling
the triangle centers or normals will result in audible aliasing artifacts.
A more significant difference arises because sound travels several
orders of magnitude slower than light, and we cannot assume
that sound propagation is instantaneous. In fluids such as air, sound
does travel rapidly enough that we may not directly
notice the delay except over large distances, but we do notice indirect
effects even at small distances. For example, very small delays
are responsible for both Doppler shifting and the generation of
interference patterns. Additionally, if we wish to compute stereo
sound by using multiple receiver locations, then delay differences
between a human listener's ears as small as 20 μs provide important
cues for localizing sounds [2].
To account for propagation delay we make use of a one-dimensional
accumulation buffer that stores audio samples. All
entries in the buffer are initially set to zero. When we compute
s for each of the triangles in Γ̃ at a given time, we also compute a
corresponding time delay

d = ||x - r|| / c.
The s values are then added to the entries of the buffer that correspond
to the current simulation time plus the appropriate delay.
(See
Figure
4.)
In general, d will not be an integer multiple of the audio sampling period. If
we were to round to the nearest entry in the accumulation buffer, the
resulting audio would contain artifacts akin to the jaggies that occur
when scan-converting lines. These artifacts will manifest themselves
in the form of an unpleasant buzzing sound as if a saw-tooth
wave had been added to the result. To avoid this problem, we add
the s values into the buffer by convolving the contribution with a
narrow (two samples wide) Gaussian and splatting the result into
the accumulation buffer.
As the simulation advances forward in time, values are read from
the entry in the accumulation buffer that corresponds to the current
time. This value is treated as an audio sample that is sent to the
output sink. If stereo sound is desired we compute this propagation
step twice, once for each ear.
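A sketch of this propagation step (our own illustration): each triangle's contribution is attenuated by the cos(θ)/distance factor of Equation (10), delayed by d = ||x - r||/c, and splatted into the accumulation buffer with a narrow Gaussian. The visibility term is assumed to be one, and the exact Gaussian width is an assumption.

```python
import numpy as np

C_AIR = 343.0          # acoustic wave speed in air (m/s)
AUDIO_RATE = 44_100

def splat_contribution(buf, t_now, p_tilde, area, normal, centre, receiver):
    """Add one triangle's contribution (Equation (10)) to the delay buffer."""
    to_receiver = receiver - centre
    dist = np.linalg.norm(to_receiver)
    cos_theta = np.dot(normal, to_receiver) / dist     # beam factor cos(theta)
    s = p_tilde * area * cos_theta / dist              # visibility term assumed to be one
    delay = dist / C_AIR                               # travel-time delay d = ||x - r|| / c
    arrival = (t_now + delay) * AUDIO_RATE             # fractional sample index
    idx = np.arange(int(arrival) - 2, int(arrival) + 3)
    weights = np.exp(-0.5 * ((idx - arrival) / 0.5) ** 2)
    weights /= weights.sum()                           # narrow (about two samples wide) Gaussian splat
    valid = (idx >= 0) & (idx < len(buf))
    buf[idx[valid]] += s * weights[valid]

# One buffer per listener location (two for stereo); the entry for the current
# time is read out as the simulation advances and sent to the output sink.
left_buf = np.zeros(AUDIO_RATE * 5)
```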
We have implemented the described technique for generating audio
and tested the system on several examples. For all of the examples,
two listener locations were used to produce stereo audio. The locations
are centered around the virtual viewpoint and separated by
along a horizontal axis that is perpendicular to the viewing
direction. The sound spectra shown in the following figures follow
the convention of plotting frequency amplitude using decibels, so
that the vertical axes are scaled logarithmically.
Figure
1 shows an image taken from an animation of a bowl
falling onto a hard surface and a spectrogram of the resulting audio.
In this example, only the surface of the bowl is used to generate
the audio, and the floor is modeled as rigid constraint. The spectrogram
reveals the bowl's vibrational modes as darker horizontal
lines. Variations in the degree to which each of the modes are excited
occur because different parts of the bowl hit the surface on
each bounce.
The bowl's shape is arbitrary and there is no known analytical
solution for its vibrational modes. While being able to model arbitrary
shapes is a strength of the proposed method, we would like
to verify its accuracy by comparing its results to known solutions.
Figure
5 shows the results computed for a rectangular bar that is
clamped at one end and excited by applying a vertical impulse to
the other. The accompanying plot shows the spectrum of the resulting
sound with vertical lines indicating the mode frequencies
predicted by a well known analytical solution [19]. Although the
Figure 6: A square plate being struck (a) on center and (b) off center.
Figure
7: Spectrum generated by striking the square plate shown
in
Figure
6 in the center (top) and off from center (right). Predicted
values are taken from [12].
results correspond reasonably well, the simulated results are noticeably
flatter than the theoretical predictions. One possible explanation
for the difference is that the analytical solution assumes
the bar to be perfectly elastic, while our simulated bar experiences
internal damping.
Figure
6 shows two trials from a simulation of a square plate
(with finite thickness) that is held fixed along its border while being
struck by a weight. In the first trial the weight hits the plate
on-center, while in the second trial the weight lands off-center horizontally
by 25% of the plate's width and vertically by 17%. The
frequencies predicted by the analytical solution given by Fletcher
and Rossing [12] are overlaid on the spectra of the two resulting
sounds. As with a real plate, the on-center strike does not sig-
nificantly excite vibrational modes that have node lines passing
Figure
8: The top image plots a comparison between the spectra of
a real vibraphone bar (Measured), and simulated results for a low-resolution
(Simulated 1) and high-resolution mesh (Simulated 2).
The vertical lines located at 1, 4, and 10 show the tuning ratios
reported in [12].
through the center of the plate (e.g. the modes indicated by the second
and third dashed, red lines). The off-center strike does excite
these modes and a distinct difference can be heard in the resulting
audio. The simulation's lower modes match the predicted ones
quite well, but for the higher modes the correlation becomes ques-
tionable. Poor resolution of the higher modes is to be expected as
they have a lower signal-to-noise ratio and are more easily affected
by discretization and other error sources.
The bars in a vibraphone are undercut so that the first three partials
are tuned according to a 1:4:10 ratio. While this modifica-
tion makes the instrument sound pleasant, the change in transverse
impedance between the thin and thick portions of the bar prevent
an analytical solution. We ran two simulations of a 36 cm long bar
with mesh resolutions of 1 cm and 2 cm, and compared them to a
recording of a real bar being struck. (The 1 cm mesh is shown in
Figure
3.) To facilitate the comparison, the simulated audio was
warped linearly in frequency to align the first partials to that of the
real bar at 187 Hz ( F]3), which is equivalent to adjusting the simulated
bar's density so that is matches the real bar. The results of this
comparison are shown in Figure 8. Although both the simulated and
real bars differ slightly from the ideal tuning, they are quite similar
to each other. All three sounds also contain a low frequency component
below the bar's first mode that is created by the interaction
with the real or simulated supports.
The result of striking a pendulum with a fast moving weight is
shown in Figure 9. Because of Doppler effects, the pendulum's periodic
swinging motion should modulate both the amplitude and the
frequency of the received sound. Because our technique accounts
for both distance attenuation and travel delay, it can model these
phenomena. The resulting modulation is clearly visible in the spectrogram
(particularly in the line near 500 Hz) and can be heard in
the corresponding audio.
Because this sound generation technique does not make additional
assumptions about how waves travel in the solid objects, it
can be used with non-linear simulation methods to generate sounds
for objects whose internal vibrations are not modeled by the linear wave equation. The finite element method we are using employs
a non-linear strain metric that is suitable for modeling large de-
formations. Figure 10 shows frames from two animations of a ball
dropping onto a sheet. In the first one, the sheet is nearly rigid and
the ball rolls off. In the second animation, the sheet is highly compliant
and it undergoes large deformations as it interacts with the
ball. Another example demonstrating large deformations is shown
in
Figure
11 where a slightly bowed sheet is being bent back and
forth to create a crinkling sound. Animations containing the audio for
these, and other, examples have been included on the proceedings
video tape and DVD. Simulation times are listed in Table 1.
5 Conclusions and Future Work
In this paper, we have described a general technique for computing
physically realistic sounds that takes advantage of existing simulation
methods already used for physically based animation. We have
also presented a series of examples that demonstrate the results that
can be achieved when our technique is used with a particular, finite
element based simulation method.
One area for future work is to combine this sound generation
technique with other simulation methods. As discussed in Section
3.1, it should be possible to generate audio from most deformable
body simulation methods such as mass and spring systems
or dynamic cloth simulators. It may also be possible to generate
audio from some fluid simulation methods, such as the method
developed by Foster and Metaxas for simulating water [13].
Of the criteria we listed in Section 3.1, we believe that the required
temporal resolution is most likely to pose difficulties. If
the simulation integrator uses time-steps that are larger than about
10^-5 s, the higher frequencies of the audible spectrum will not be
sampled adequately so that, at best, the resulting audio will sound
dull and soggy. Unfortunately, small time-steps result in slow sim-
ulations, and as a result a significant amount of research has focused
on finding ways to allow numerical integrators to take larger
steps while remaining stable. In general, the larger time-steps are
achieved by removing the high-frequency components from the
motions being computed. While removing these high-frequency
components will at some point create visually objectionable arti-
facts, it is likely that sound quality will be adversely affected first.
Rigid body simulations are also excluded by our criteria because
they do not model the deformations that drive most of the vibrations
that produce sound. This limitation is particularly unfortunate
because rigid body simulations are widely used, particularly in
interactive applications. Because they are so commonly used, developing
general methods for computing sounds for rigid body and
large time-step simulation methods is an important area for future
work.
Although our sound propagation model is relatively cheap to
compute, it is also quite limited. Reflected and diffracted sound
transport often play a significant role in determining what we hear
in an environment. To draw an analogy with image rendering, our
current method is roughly equivalent to a local illumination model and
adding reflection and diffraction would be equivalent to stepping up
to global illumination. In some ways global sound computations
would actually be more complex than global illumination because
one cannot assume that waves propagate instantaneously. Other researchers
have investigated techniques for modeling acoustic reflec-
tions, for example [14], and combining our work with theirs would
probably be useful.
Our listener model could also be improved. As currently im-
plemented, a listener receives pressure waves equally well from all
directions. While this behavior is ideal for an omni-directional mi-
crophone, human ears and most real microphones behave quite dif-
ferently. One obvious effect is that the human head acts as a blocker
so that high frequency sounds from the side tend to be heard better
Figure
9: The spectrogram produced by a swinging bar after it is
struck by a weight.
by that ear while low frequencies diffract around the head and are
heard by both ears. Other, more subtle effects, such as the pattern
of reflections off of the outer ear, also contribute to allowing us to
localize the sounds we hear. Other researchers have taken extensive
measurements to determine how sounds are affected as they enter a
human ear, and used the compiled data to build head-related transfer
functions [2, 16]. Filters derived from these transfer functions have
been used successfully for generating three-dimensional spatialized
audio. We are currently working on adding this functionality to our
existing system.
Our primary goal in this work has been to generate a tool that is
useful for generating audio. However, we have also noticed that the
audio produced by a simulation makes an excellent debugging tool.
For example, we have observed that symptoms of a simulation going
unstable often can be heard before they become visible. Other
common simulation errors, such as incorrect collision response, are
also evidenced by distinctively odd sounds. As physically based
animation continues to become more commonly used, sound generation
could become useful not only for its own sake but also as a
standard tool for working with and debugging simulations.
Acknowledgments
The images in this paper were generated using Alias-Wavefront's
Maya and Pixar Animation Studios' RenderMan software running
on Silicon Graphics hardware. This research was supported, in part,
by a research grant from the Okawa foundation, by NSF grant number
9984087, and by the State of New Jersey Commission on Science
and Technology Grant Number 01-2042-007-22.
--R
Numerical simulations of xylophones: II.
Measurements and efficient simulations of bowed bars.
The Physics of Musical Instruments.
Realistic animation of liquids.
A beam tracing approach to acoustic modeling for interactive virtual environments.
Figure 10: These figures show a round weight being dropped onto a sheet. The surface shown in (a) is rigid while the one shown in (b) is more compliant.
Example | Figure | Simulation Δt | Nodes | Elements | Surface Elements | Total Time | Audio Time
Clamped Bar | 5 | 1 x 10^-7 s | 125 | 265 | 246 | 240.4 min | 1.26 min (0.5%)
Vibraphone Bar | 8 | 1 x 10^-7 s | 539 | 1484 | 994 | 1309.7 min | 5.31 min (0.4%)
Swinging Bar | 9 | 3 x 10^-7 s | 130 | 281 | 254 | 88.4 min | 1.42 min (1.6%)
Rigid Sheet | 10 |
Compliant Sheet | 10 |
Bent Sheet | 11 | 1 x 10^-7 s | 678 |
Table 1: The total times indicate the total number of minutes required to compute one second of simulated data using one 350 MHz MIPS R12K processor while unrelated processes were running on the machine's other processors. The audio times listed indicate the time spent generating audio as a percentage of the total simulation time.
HRTF measurements of a KEMAR dummy head microphone.
Introduction to Fourier Optics.
An integrated approach to sound and motion.
Fundamentals of Acoustics.
On the oscillation of musical instruments.
Graphical modeling and animation of brittle fracture.
Animating fracture.
Nonuniform beams with harmonically related overtones for use in percussion instruments.
Physical modeling by directly solving wave PDE.
Robotic measurement and modeling of contact sounds.
A computer model for bar percussion instruments.
Sound rendering.
Deformable models.
Handbook for Acoustic Ecology.
Physical modeling with the 2D waveguide mesh.
VLSI models for sound synthesis.
Figure 11: A slightly bowed sheet being bent back and forth.
--TR
--CTR
Nikunj Raghuvanshi , Ming C. Lin, Interactive sound synthesis for large scale environments, Proceedings of the 2006 symposium on Interactive 3D graphics and games, March 14-17, 2006, Redwood City, California
Golan Levin , Zachary Lieberman, Sounds from shapes: audiovisual performance with hand silhouette contours in the manual input sessions, Proceedings of the 2005 conference on New interfaces for musical expression, May 26-28, 2005, Vancouver, Canada
Davide Rocchesso , Roberto Bresin , Mikael Fernström, Sounding Objects, IEEE MultiMedia, v.10 n.2, p.42-52, March
James F. O'Brien , Chen Shen , Christine M. Gatchalian, Synthesizing sounds from rigid-body simulations, Proceedings of the 2002 ACM SIGGRAPH/Eurographics symposium on Computer animation, July 21-22, 2002, San Antonio, Texas
Ying Zhang , Reza Sotudeh , Terrence Fernando, The use of visual and auditory feedback for assembly task performance in a virtual environment, Proceedings of the 21st spring conference on Computer graphics, May 12-14, 2005, Budmerice, Slovakia
Ying Zhang , Terrence Fernando , Hannan Xiao , Adrian R. L. Travis, Evaluation of auditory and visual feedback on task performance in a virtual assembly environment, Presence: Teleoperators and Virtual Environments, v.15 n.6, p.613-626, December 2006
M. Cardle , S. Brooks , Z. Bar-Joseph , P. Robinson, Sound-by-numbers: motion-driven sound synthesis, Proceedings of the ACM SIGGRAPH/Eurographics symposium on Computer animation, July 26-27, 2003, San Diego, California
Doug L. James , Dinesh K. Pai, DyRT: dynamic response textures for real time deformation simulation with graphics hardware, ACM Transactions on Graphics (TOG), v.21 n.3, July 2002
Kees van den Doel, From physics to sound: Comments on van den Doel, ICAD 2004, ACM Transactions on Applied Perception (TAP), v.2 n.4, p.547-549, October 2005
Laura Ottaviani , Davide Rocchesso, Auditory perception of 3D size: Experiments with synthetic resonators, ACM Transactions on Applied Perception (TAP), v.1 n.2, p.118-129, October 2004
Yoshinori Dobashi , Tsuyoshi Yamamoto , Tomoyuki Nishita, Real-time rendering of aerodynamic sound using sound textures based on computational fluid dynamics, ACM Transactions on Graphics (TOG), v.22 n.3, July
Georg Essl , Stefania Serafin , Perry R. Cook , Julius O. Smith, Musical Applications of Banded Waveguides, Computer Music Journal, v.28 n.1, p.51-63, March 2004
Kees van den Doel, Physically based models for liquid sounds, ACM Transactions on Applied Perception (TAP), v.2 n.4, p.534-546, October 2005
Doug L. James , Jernej Barbi , Dinesh K. Pai, Precomputed acoustic transfer: output-sensitive, accurate sound generation for geometrically complex vibration sources, ACM Transactions on Graphics (TOG), v.25 n.3, July 2006
Cynthia Bruyns, Modal Synthesis for Arbitrarily Shaped Objects, Computer Music Journal, v.30 n.3, p.22-37, September 2006
Kees van den Doel , Dave Knott , Dinesh K. Pai, Interactive simulation of complex audiovisual scenes, Presence: Teleoperators and Virtual Environments, v.13 n.1, p.99-111, February 2004
Georg Essl , Stefania Serafin , Perry R. Cook , Julius O. Smith, Theory of Banded Waveguides, Computer Music Journal, v.28 n.1, p.37-50, March 2004
Kees van den Doel , Paul G. Kry , Dinesh K. Pai, FoleyAutomatic: physically-based sound effects for interactive simulation and animation, Proceedings of the 28th annual conference on Computer graphics and interactive techniques, p.537-544, August 2001
Richard Corbett , Kees van den Doel , John E. Lloyd , Wolfgang Heidrich, Timbrefields: 3d interactive sound models for real-time audio, Presence: Teleoperators and Virtual Environments, v.16 n.6, p.643-654, December 2007
Mashhuda Glencross , Alan G. Chalmers , Ming C. Lin , Miguel A. Otaduy , Diego Gutierrez, Exploiting perception in high-fidelity virtual environmentsAdditional presentations from the 24th course are available on the citation page, ACM SIGGRAPH 2006 Courses, July 30-August 03, 2006, Boston, Massachusetts | animation techniques;finite element method;physically based modeling;sound modeling;simulation;dynamics;surface vibrations |
383594 | Stochastic control of path optimization for inter-switch handoffs in wireless ATM networks. | One of the major design issues in wireless ATM networks is the support of Inter-switch handoffs. An inter-switch handoff occurs when a mobile terminal moves to a new base station connecting a different switch. Apart from resource allocation at the new base station, inter-switch handoff also requires connection rerouting. With the aim of minimizing the handoff delay while using the network resources efficiently, the two-phase handoff protocol uses path extension for each inter-switch handoff, followed by path optimization if necessary. The objective of this paper is to determine when and how often path optimization should be performed. The problem is formulated as a semi-Markov decision process. Link cost and signaling cost functions are introduced to capture the tradeoff between the network resources utilized by a connection and the signaling and processing load incurred on the network. The time between inter-switch handoffs follows a general distribution. A stationary optimal policy is obtained when the call termination time is exponentially distributed. Numerical results show significant improvement over four other heuristics. | Figure
1 (b), the switch chosen to perform this function is usually referred to as the crossover switch
[5]. Depending on the performance criteria of the crossover switch discovery algorithms [5]-[6], the end-
to-end path after rerouting may not be optimal. In this paper, we define an optimal path as the best path
among a set of feasible paths that can satisfy the prescribed end-to-end QoS constraints.
A two-phase handoff protocol was proposed in [8] that combines the advantages of path extension
and path rerouting schemes. The two-phase handoff protocol consists of two stages: path extension and
possible path optimization. Referring to Figure 2, path extension is performed for each inter-switch
handoff, and path optimization is performed whenever it is necessary. During path optimization, the
network determines the optimal path between the source and the destination (i.e., the path between the
remote terminal and the current target switch in Figure 2) and transfers the user information from the old
path to the new path. The major steps in the path optimization process generally involve [11]: (1)
determining the location of the crossover switch; (2) setting up a new branch connection; (3) transferring
the user information from the old branch connection to the new one; and (4) terminating the old branch
connection.
Since the mobile terminal is still communicating over the extended path via the current base station
while path optimization takes place, this gives enough time for the network to perform the necessary
functions while minimizing any service disruptions. Notice that the path optimization process described
above is not restricted to the two-phase handoff protocol. It can also be applied to other connection
rerouting protocols where the end-to-end path after rerouting is sub-optimal. In addition, when the mobile
terminal moves to another switch during the execution of path optimization, path extension can still be
used to extend the connection to the target switch. To ensure a seamless path optimization, three
important issues need to be addressed:
1. How to determine the location of the crossover switch?
2. How can the service disruptions be minimized during path optimization?
3. When and how often should path optimization be performed?
For the first issue, a crossover switch determination algorithm based on PNNI (Private Network-to-
Network Interface) standard was proposed in [11]. Five different crossover switch algorithms for wireless
ATM local area networks are proposed in [5]. For the second issue, cell loss and cell mis-sequencing can
be prevented by using appropriate signaling and buffering at the anchor and crossover switches during
path optimization [11].
In this paper, we focus on the third issue. Our work is motivated by the fact that path optimization
does not have to be performed after each inter-switch handoff. Although path optimization can increase
the network utilization by rerouting the connection to a more efficient route, transient QoS degradations
such as cell loss and an increase in cell delay variation may occur. In addition, if there are a large number
of mobile users with high movement patterns, performing path optimization after each path extension
will increase the processing load of certain switches and the signaling load of the network. The decision
to perform path optimization should be based on several factors including: (1) the amount of network
resources (e.g., bandwidth) utilized by the connection; (2) QoS requirements; (3) the remaining time of
the connection; and (4) the signaling load of the network.
To this end, we propose a stochastic model to determine the optimal time to perform path
optimization for the two-phase handoff protocol. The path optimization problem is formulated as a semi-Markov
decision process. Link cost and signaling cost functions are introduced to capture the trade-off
between the network resources utilized by a connection and the signaling and processing load incurred on
the network. The objective is to determine the optimal policy which minimizes the expected total cost per
call. The major contribution of our work lies in the formulation of a general model that is applicable to a
wide range of conditions. Distinct features of our model include: (1) different link cost functions can be
assigned to different service classes (e.g., CBR, VBR, ABR) with different bandwidth requirements; (2)
different signaling cost functions can be used based on the complexity of the path optimization
procedures and the signaling load of the network; and (3) the time between inter-switch handoffs can
follow an arbitrary general distribution.
The rest of the paper is organized as follows. The model formulation of the path optimization problem
is described in Section 2. In Section 3, we describe the optimality equations, the value iteration algorithm,
and the structure of the optimal policy. The implementation issues are described in Section 4. Extension
of the model to include mobile-to-mobile connections and other QoS constraints are described in Section
5. In Section 6, we present numerical results and compare the optimal policy with four other heuristics.
Conclusions are given in Section 7 where further work is also discussed. The proofs of the propositions
stated in this paper are shown in the appendix. For general background on Markov decision processes,
please refer to [14], [15].
2. Model Formulation
Each mobile connection may experience a number of inter-switch handoffs during its connection lifetime.
During each inter-switch handoff, path extension can be used to extend the connection from the current
anchor switch to the target switch. Although path extension is simple to implement, the connection
utilizes more network resources than necessary. Occasional path optimization is required to reroute the
connection to an optimal path. Path optimization is a complex process. It increases the processing and
signaling load of the network. Thus, there is a trade-off between the network resources utilized by the
connection and the processing and signaling load incurred on the network. We formulate the above
problem as a semi-Markov decision process. After each path extension, the network must decide whether
to perform subsequent path optimization. The decision is based on the current number of links of the path
and the locations of the anchor and target switches. The model is described below.
2.1 Semi-Markov Decision Process Model
When an inter-switch handoff occurs, a path extension is performed. After that, a decision must be made
whether to perform subsequent path optimization. Those time instants are called decision epochs.
Referring to Figure 3, the sequence s ,s , represents the time of successive decision epochs. Since
inter-switch handoff only occurs during the call lifetime, the time interval requiring mobility monitoring
is between a call arrival and its termination. The term represents the arrival time of a new call andthe random variable T denotes the call termination time. The random variable f(T )denotes the total
number of inter-switch handoffs that occur before the termination time T .
At each decision epoch, the network must decide whether to perform subsequent path optimization.
PO}denote the action set, where PO corresponds to perform path optimization after
path extension, and NPO corresponds to perform path extension only. We use Y to denote the action
chosen at decision epoch n .
The action chosen is based on the current state of the connection. The state space is denoted by S . For
each state s S, the state information includes the locations of the target and anchor switches, and the
number of links of the current path. The random variable X denotes the state at decision epoch n .
Two cost functions are introduced to account for the network resources utilized and the signaling load
incurred due to an inter-switch handoff. The link cost function reflects the amount of network resources
used during the connection lifetime, while the signaling cost function captures the processing and
signaling load incurred on the network due to path extension and path optimization. The signaling costs
are incurred only at the decision epochs, while the link cost is accrued continuously over the call lifetime.
The function f(s) denotes the link cost rate in state s. If the state is equal to s during the time
interval (σ_n, σ_{n+1}), then the link cost incurred during that period is equal to (σ_{n+1} - σ_n) f(s). The
function b(s, a) denotes the signaling cost incurred when the decision maker chooses action a in state s.
Thus, b(s, NPO)represents the signaling cost of performing path extension, and b(s, PO)represents the
signaling cost of performing path extension and subsequent path optimization. All cost functions are
assumed to be finite and nondecreasing with respect to the number of links of the current path.
A decision rule prescribes a procedure for action selection in each state at a specified decision epoch.
Deterministic Markovian decision rules are functions d_t : S → A that specify the action choice when the
system occupies state s at decision epoch t < T. That is, for each s ∈ S, d_t(s) ∈ A_s. This decision rule is
said to be Markovian (memoryless) because it depends on previous system states and actions only
through the current state of the system, and deterministic because it chooses an action with certainty. A
policy π specifies the decision rules to be used at all decision epochs. That is, a policy is a sequence of
decision rules, π = (d_1, d_2, …). The set of all policies is denoted by Π.
Let v^π(s) denote the expected total cost per call given policy π is used with initial state s. Thus,

v^π(s) = E^π_s { Σ_{n=1}^{f(T)} b(X_n, Y_n) + Σ_{n=1}^{f(T)-1} ∫_{σ_n}^{σ_{n+1}} f(X_n) dt + ∫_{σ_{f(T)}}^{T} f(X_{f(T)}) dt },   (1)

where E^π_s denotes the expectation with respect to policy π and initial state s. In (1), the first summation
in the right hand side corresponds to the lump sum portion of the signaling cost, each term in the second
summation corresponds to the continuous portion of the link cost incurred at rate f(X_n) between decision
epochs n and n+1, and the last term corresponds to the link cost incurred at rate f(X_{f(T)}) between
decision epoch f(T) and termination time T. In this paper, we assume the call termination time is
exponentially distributed with rate μ. In that case, (1) can be written as:

v^π(s) = E^π_s { Σ_{n=1}^{∞} e^{-μ σ_n} c(X_n, Y_n) },   (2)

where

c(s, a) = b(s, a) + E_s{ (1/μ)(1 - e^{-μ(σ_{n+1} - σ_n)}) } f(s).   (3)
For a proof of this fact, see Proposition 1 in the appendix. The expression in (2) is the expected total cost
of an infinite-horizon semi-Markov decision process with discount rate μ. The function c(s, a) in (3) is
the expected total cost between two decision epochs, given the system occupies state s and the decision
maker chooses action a in state s. This cost function is further discussed in Section 2.3.
Since the optimization problem that we consider is to minimize the expected total cost, we define that
a policy π* is optimal in Π if v^{π*}(s) ≤ v^{π}(s) for all π ∈ Π.
Let G(t | X_n, Y_n) denote the cumulative distribution function of the time between decision epochs n
and n+1, given the current state X_n and the action Y_n chosen. The time between decision epochs corresponds
to the time between inter-switch handoffs. In this formulation, the time between inter-switch handoffs
follows a general distribution and can depend on the location of a particular anchor switch that the mobile
terminal is connected to. We use G(dt | X_n, Y_n) to represent the corresponding time-differential.
A policy is said to be stationary if d_t = d for all t. A stationary policy has the form π = (d, d, …);
for convenience we denote it by d. For a stationary policy d, (2) can be written as:

v^d(s) = c(s, d(s)) + Σ_{s'∈S} P[s' | s, d(s)] ∫_0^∞ e^{-μt} G(dt) v^d(s'),   (4)

where P[s' | s, d(s)] denotes the transition probability that the next state is s', given the current state is s
and action d(s) is chosen. For a proof of this fact, see Proposition 1 in the appendix. Our objective is to
determine an optimal stationary deterministic policy d* which minimizes (4).
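For intuition, a minimal value-iteration sketch for computing a policy that satisfies the fixed point in (4) is shown below (our own illustration, not the authors' algorithm of Section 3). The state space, action sets, one-period costs c(s, a), transition probabilities, and the scalar discount factor β = ∫_0^∞ e^{-μt} G(dt) are assumed to be supplied by the caller.

```python
def value_iteration(states, actions, cost, trans_prob, beta, tol=1e-6):
    """Solve v(s) = min_a { c(s,a) + beta * sum_s' P[s'|s,a] v(s') } by successive approximation.

    actions(s)        -- iterable of actions available in state s
    cost(s, a)        -- expected one-period cost c(s, a)
    trans_prob(s, a)  -- dict mapping next states s' to P[s'|s, a]
    beta              -- expected discount over one inter-handoff interval (must be < 1)
    Returns the value function and a greedy stationary policy.
    """
    v = {s: 0.0 for s in states}
    while True:
        v_new, policy = {}, {}
        for s in states:
            best = min(
                (cost(s, a) + beta * sum(p * v[s2] for s2, p in trans_prob(s, a).items()), a)
                for a in actions(s)
            )
            v_new[s], policy[s] = best
        if max(abs(v_new[s] - v[s]) for s in states) < tol:
            return v_new, policy
        v = v_new
```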
To simplify the analysis, two assumptions are made. First, we assume the distribution of the time
between inter-switch handoffs is independent of the state and action chosen, i.e., G(t | X_n, Y_n) = G(t).
Second, we assume the mobile terminal is communicating with a remote terminal which is stationary.
That is, we consider a mobile-to-fixed connection. The model formulation for mobile-to-mobile
connection is described in Section 5.1.
2.2 State Transition Probability Function
A state change occurs when there is an inter-switch handoff. The state space S is three dimensional. For
each state (i, j, k) ∈ S, i denotes the location of the target switch; j denotes the location of the current
anchor switch; and k denotes the number of links of the current path. Thus,

S = { (i, j, k) : 1 ≤ i ≤ N, 1 ≤ j ≤ N, 1 ≤ k ≤ L },   (5)

where N denotes the total number of nodes in the network and L represents the maximum number of
links allowed in a path. The number of links of any path is always finite. We assume the number of links
increased by a path extension is bounded by M, which is much smaller than L (i.e., M << L).
Since the end-to-end delay is proportional to the number of links of the path, a sub-optimal path with
a large number of links not only increases the delay but also increases the call dropping probability and
the congestion level of the network. We impose the condition that whenever the number of links in a
connection is greater than or equal to L - M and there is an inter-switch handoff, path extension is
performed followed by path optimization with certainty. For convenience, we let K = L - M. Later we
show that path optimization is always performed when the number of links exceeds a certain threshold,
and this threshold is much smaller than K.
Given the current state s = (i, j, k), the available action set is:

A_s = {NPO, PO} if 1 ≤ k < K,  and  A_s = {PO} if K ≤ k ≤ L.   (6)
Thus, after each path extension, path optimization may be performed if the number of links is less than
K , while path optimization is performed with certainty whenever the number of links is greater than or
equal to K .
Two probability distribution functions are introduced to govern the state changes. Let
p(m | i, j) denote the probability that the number of links of the optimal path is m, given that the
locations of the two end-points are i and j, respectively;
q(l | i) denote the probability that the location of the target switch in the next decision epoch is l, given
that the location of the target switch at the current decision epoch is i.
In ATM networks, source routing is being used for all connection setup requests. That is, the source
switch selects a path based on topology, loading, and reachability information in its database. As
networks grow in size and complexity, full knowledge of network parameters is typically unavailable.
Each single entity in the network cannot be expected to have detailed and instantaneous access to all
nodes and links. Routing must rely on partial or approximate information, and still meet the QoS
demands [16][17]. The ATM Forum PNNI standard [18] introduces a hierarchical process that aggregates
information as the network gets more and more remote. However, the aggregation process inherently
decreases the accuracy of the information and introduces uncertainty. Thus, in large networks it is more
appropriate to model the number of links of a path between two endpoints in a probabilistic manner.
On the other hand, for small networks with periodic routing information update, the number of links
of a path between the two endpoints can be modeled in a deterministic manner. Let φ(i, j) denote the
number of links of the optimal path between the two endpoints i and j. The functions p(m | i, j) and
φ(i, j) are related by p(m | i, j) = 1 if m = φ(i, j), and p(m | i, j) = 0 otherwise.
Let D denote the location of the destination (i.e., the remote terminal), which is assumed to be fixed.
The transition probability that the next state is s' = (i', j', k'), given that the current state is s = (i, j, k)
and action a is chosen, is given by:
P[s' | s, a] = q(i' | i) p(m | i, j),  if a = NPO, j' = i, and k' = k + m,
P[s' | s, a] = q(i' | i) p(n | i, D),  if a = PO, j' = i, and k' = n,
P[s' | s, a] = 0, otherwise.   (7)
Equation (7) states that if action NPO is chosen, the number of links is increased by m with probability
p(m | i, j) after path extension. On the other hand, if action PO is chosen, the number of links is equal to
n with probability p(n | i, D) after path optimization. In both cases, the location of the target switch at the
next decision epoch is equal to i' with probability q(i' | i).
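As an illustration, the sketch below tabulates the transition probabilities of equation (7) as a Python dictionary. The helper callables p_links (standing in for p(m | i, j) and p(n | i, D)) and q_target (for q(l | i)) are hypothetical placeholders supplied by the modeler; they are not defined in the paper.

```python
# Illustrative sketch of equation (7); p_links(i, j) is assumed to return a
# dict {number_of_links: probability}, q_target(l, i) a probability.
def transition_probs(s, a, D, p_links, q_target, N, L):
    """Return {(i', j', k'): probability} for state s = (i, j, k) and action a."""
    i, j, k = s
    probs = {}
    for i2 in range(1, N + 1):
        qi = q_target(i2, i)                      # q(i' | i)
        if qi == 0.0:
            continue
        if a == "NPO":                            # extend only: j' = i, k' = k + m
            for m, pm in p_links(i, j).items():
                if k + m <= L:
                    key = (i2, i, k + m)
                    probs[key] = probs.get(key, 0.0) + qi * pm
        else:                                     # "PO": j' = i, k' = n (optimal path)
            for n, pn in p_links(i, D).items():
                key = (i2, i, n)
                probs[key] = probs.get(key, 0.0) + qi * pn
    return probs
```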
2.3 Cost Functions
For each path extension event, the network incurs a fixed signaling cost C_PE > 0 and a variable signaling
cost h_PE(m), where m represents the number of links increased during path extension. The terms C_PE and
h_PE(m) capture the cost of setting up the extended path between the anchor and target switches.
For each path optimization performed, the network incurs a fixed signaling cost C_PO > 0 and a
variable signaling cost h_PO(l), where l represents the number of links reduced during path optimization.
These two terms capture the cost of (1) locating the crossover switch; (2) setting up the new branch
connection; (3) terminating the old branch connection; and (4) updating the connection server about the
status of the existing route.
We assume the link cost rate only depends on the number of links of the current path. That is,
f(s) = f(k) for all s ∈ S. Recall from (2) that c(i, j, k, a) denotes the expected total cost between two
decision epochs, given the system occupies state (i, j, k) and action a is chosen. Since the first inter-switch
handoff occurs at time s_1, the locations of the anchor and target switches are the same at the call setup time s_0. Thus, during the time interval (s_0, s_1], we have i = j and the cost function
c(j, j, k, NPO) = I_1 f(k),   (8)
where I_1 = ∫_0^∞ ((1 − e^{−μt}) / μ) G(dt). The term I_1 f(k) is the expected discounted link cost between two
decision epochs, given that the current number of links is k.
For other decision epochs not equal to s_0, the locations of the anchor and target switches are always different (i.e., i ≠ j if s_t ≠ s_0). In that case, if action NPO is chosen, then the cost function is
c(i, j, k, NPO) = C_PE + Σ_{m=1}^{M} h_PE(m) p(m | i, j) + I_1 Σ_{m=1}^{M} f(k + m) p(m | i, j).   (9)
The term C_PE + Σ_{m=1}^{M} h_PE(m) p(m | i, j) is the expected signaling cost for path extension, given that the
locations of the anchor and target switches are i and j, respectively.
For path optimization, we assume the number of links of the optimal path is always less than or equal
to the number of links of the current path, and less than K. For decision epochs s_t not equal to s_0, if action PO is chosen, then the cost function is
c(i, j, k, PO) = c(i, j, k, NPO) + C_PO + Σ_{m=1}^{M} Σ_{n=1}^{(k+m) ∧ (K−1)} h_PO(k + m − n) p(n | i, D) p(m | i, j),   (10)
where x ∧ y = min(x, y). The expression C_PO + Σ_n h_PO(k + m − n) p(n | i, D) is the expected signaling cost for
path optimization, given that the current number of links is k, and the locations of the source and
destination are i and D, respectively.
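The constants I_1 and I_2 = ∫_0^∞ e^{−μt} G(dt), which appear in the cost and optimality equations, can be evaluated numerically for any handoff-time distribution G. The sketch below does this with SciPy for a Gamma-distributed handoff time; the Gamma shape and scale values are arbitrary example values, not parameters taken from the paper.

```python
# Numerical evaluation of I1 and I2 for a given handoff-time distribution G.
import numpy as np
from scipy import integrate, stats

def discount_integrals(mu, g_pdf, upper=np.inf):
    """I1 = E[(1 - exp(-mu*T))/mu] and I2 = E[exp(-mu*T)] for T ~ G."""
    i1, _ = integrate.quad(lambda t: (1.0 - np.exp(-mu * t)) / mu * g_pdf(t), 0.0, upper)
    i2, _ = integrate.quad(lambda t: np.exp(-mu * t) * g_pdf(t), 0.0, upper)
    return i1, i2

mu = 0.01                                        # call termination (discount) rate
handoff_time = stats.gamma(a=2.0, scale=50.0)    # example G: Gamma distribution
I1, I2 = discount_integrals(mu, handoff_time.pdf)
```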
3. Optimality Equations
In this section, we introduce the optimality equations (sometimes referred to as the Bellman equations or
functional equations) and investigate their properties. We show that solutions of these equations
correspond to optimal value functions and that they also provide a basis for determining optimal policies.
Let v(s) denote the minimum expected total cost per call given state s. The optimality equations are given by
v(s) = min_{a ∈ A_s} { c(s, a) + Σ_{s' ∈ S} ∫_0^∞ e^{−μt} G(dt) P[s' | s, a] v(s') },   (12)
where we write I_2 = ∫_0^∞ e^{−μt} G(dt) for the expected discount factor between two decision epochs.
Equation (12) can be expanded as follows. For i = j and 1 ≤ k < K,
v(j, j, k) = c(j, j, k, NPO) + I_2 Σ_l v(l, j, k) q(l | j).   (13)
For i ≠ j and 1 ≤ k < K,
v(i, j, k) = min { c(i, j, k, NPO) + I_2 Σ_l Σ_m v(l, i, k + m) p(m | i, j) q(l | i),
                   c(i, j, k, PO) + I_2 Σ_l Σ_n Σ_m v(l, i, n) p(n | i, D) p(m | i, j) q(l | i) }.   (14)
For i ≠ j and K ≤ k ≤ L,
v(i, j, k) = c(i, j, k, PO) + I_2 Σ_l Σ_n v(l, i, n) p(n | i, D) q(l | i).   (15)
At call setup time s_0, the locations of the anchor and target switches are the same. Thus in (13), no path extension or path optimization is performed. For other decision epochs not equal to s_0, the locations of the anchor and target switches are different. If the number of links of the path is less than K, then after
each path extension, the network will decide whether to perform subsequent path optimization. This fact
is stated in (14). Since path optimization is always performed if the number of links is greater than or
equal to K, in (15), the action PO is chosen when there is an inter-switch handoff.
If the signaling cost function for path optimization is zero (i.e., C_PO = h_PO(l) = 0), the problem of
finding an optimal policy is trivial. It is optimal to perform path optimization after each inter-switch
handoff. This is because the link cost function is nondecreasing with respect to the number of links of the
current path. After each path optimization, there is a reduction in the number of links. However, if the
signaling cost function for path optimization is nonzero, it is not obvious as to what constitutes the
optimal policy. Note that if μ > 0, the state space is finite, and the cost functions are bounded, then the
solutions for equations (13)-(15) exist. By solving these equations, a stationary deterministic optimal
policy can be obtained.
3.1 Value Iteration Algorithm
There are a number of iteration algorithms available to solve the above optimality equations. Examples
include the value iteration, policy iteration, and linear programming algorithms [14]. Value iteration is the
most widely used and best understood algorithm for solving discounted Markov decision problems. Value
iteration is also called by other names including successive approximations, over-relaxation, and pre-
Jacobi iteration. The following value iteration algorithm finds a stationary deterministic optimal policy
and the corresponding expected total cost.
Algorithm:
1. Set v^0(s) = 0 for each state s ∈ S. Specify ε > 0 and set n = 0.
2. For each s ∈ S, compute v^{n+1}(s) by:
   v^{n+1}(s) = min_{a ∈ A_s} { c(s, a) + Σ_{s' ∈ S} ∫_0^∞ e^{−μt} G(dt) P[s' | s, a] v^n(s') }.
3. If ||v^{n+1} − v^n|| < ε, go to step 4. Otherwise increment n by 1 and return to step 2.
4. For each s ∈ S, the stationary optimal policy is
   d(s) = arg min_{a ∈ A_s} { c(s, a) + Σ_{s' ∈ S} ∫_0^∞ e^{−μt} G(dt) P[s' | s, a] v^{n+1}(s') },
   and stop.
There are a number of definitions for the function norm ||·||. In this paper, the function norm is
defined as ||v|| = max_{s ∈ S} |v(s)|. Convergence of the value iteration algorithm is ensured since the
operation in step 2 corresponds to a contraction mapping. Thus, the function v^n(s) converges in norm to
v(s). Note that the convergence rate of the value iteration algorithm is linear.
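A minimal sketch of this value iteration loop is given below. It assumes the state space, action sets, costs c(s, a), transition probabilities P[s' | s, a] (for example from the transition_probs helper sketched earlier) and the discount factor I_2 have already been tabulated, which is an organizational assumption and not necessarily how the paper's implementation is structured.

```python
# Minimal value iteration sketch for the discounted semi-Markov decision process.
# states: list of s; actions[s]: available actions; cost[(s, a)]: c(s, a);
# trans[(s, a)]: dict {s2: P[s2 | s, a]}; I2: expected discount factor.
def value_iteration(states, actions, cost, trans, I2, eps=1e-6, max_iter=10_000):
    v = {s: 0.0 for s in states}
    for _ in range(max_iter):
        v_new = {}
        for s in states:
            v_new[s] = min(
                cost[(s, a)] + I2 * sum(p * v[s2] for s2, p in trans[(s, a)].items())
                for a in actions[s]
            )
        converged = max(abs(v_new[s] - v[s]) for s in states) < eps   # sup-norm test
        v = v_new
        if converged:
            break
    policy = {
        s: min(actions[s],
               key=lambda a: cost[(s, a)] + I2 * sum(p * v[s2] for s2, p in trans[(s, a)].items()))
        for s in states
    }
    return v, policy
```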
In small networks, if each node maintains perfect information of all nodes and links, then the function
v[i, i, φ(i, D)] is the minimum expected total cost per call given source i and destination D. On the other
hand, in large networks, the number of links of a path determined by the source is modeled in a
probabilistic manner. In that case, the expression
Σ_k v(i, i, k) p(k | i, D)   (16)
is the minimum expected total cost per call given source i and destination D, averaged over the number
of links of the optimal path.
3.2 Structure of the Optimal Policy
We now provide a condition under which the optimal policy has a control limit (or threshold) structure.
The control limit structure states that path optimization is performed with certainty whenever the number
of links of the current path exceeds a certain threshold. For convenience, we let Δg(k) = g(k + 1) − g(k)
for some function g.
Proposition: Given state (i, j, k) ∈ S, there exists an optimal policy d* that has a control limit structure:
d*(i, j, k) = NPO,  1 ≤ k < k*,
d*(i, j, k) = PO,   k* ≤ k ≤ L,
when I_1 Δf(k + m) − Σ_n Δh_PO(k + m − n) p(n | i, D) ≥ 0 for all m such that p(m | i, j) > 0.
The proof of the above proposition is shown in Proposition 2 in the appendix. The value k* is the
control limit or threshold. Consider the special case where the cost functions are linear. That is,
f(k) = C_link k and h_PO(l) = w_PO l, where C_link and w_PO are positive constants. In this case, if
I_1 C_link − w_PO ≥ 0, then path optimization is always performed when the number of links is greater than or
equal to k* . An optimal policy with threshold structure facilitates its implementation. For each mobile
connection, the network only has to maintain the information of the minimum number of links to initiate
path optimization for all anchor and target switch pairs. The decision to perform path optimization can be
made via a table lookup.
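The table lookup decision described here takes only a few lines of code; in the sketch below, the threshold table k_star, indexed by (anchor, target) switch pairs, is assumed to have been produced off-line by the value iteration algorithm.

```python
# Handoff-time decision via threshold table lookup (sketch; k_star is an assumed
# precomputed dict mapping (anchor, target) pairs to thresholds k*).
def decide_path_optimization(anchor, target, num_links, k_star):
    """Return True if path optimization should follow the path extension."""
    return num_links >= k_star[(anchor, target)]
```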
Note that the optimal policy still maintains a threshold structure for other cost functions as long as
they are convex and nondecreasing. Interested readers can refer to [19] for a proof of this fact. For those
cost functions, the value iteration algorithm described in Section 3.1 can still be used to determine the
minimum expected total cost and the optimal policy.
4. Implementation Aspects
Having identified the different parameters involved in the model, we are now in a position to explain the
steps that need to be taken in order to implement the model. For each mobile connection, during its
connection setup phase, the network controller assigns the cost functions based on the service class and
the signaling load of the network. Different service classes (e.g., CBR, VBR, ABR) with different
bandwidth requirements are assigned different link cost functions to reflect the network resources
consumed. The assigned signaling cost function reflects the complexity of the path optimization
procedures and the current signaling load of the network. By keeping the mobility profile of each user
(i.e., the movement history and call history), the average time between inter-switch handoff as well as the
average duration of the connection can be estimated [20][21].
Given the input parameters (i.e., cost functions and various distributions), the value iteration
algorithm can be used to determine the optimal policy. The optimal policy is then stored in a tabular
format. Each entry of the table specifies the minimum number of links to initiate path optimization for a
specific pair of anchor and target switches. Whenever there is an inter-switch handoff, the network
performs a table lookup at the corresponding anchor and target switch entry. Path optimization is
performed if the number of links is greater than the threshold.
The optimal policy table needs to be updated when there are changes in network topology or
signaling load of the network. The update can be performed off-line. That is, whenever spare processing
capacity is available at the network controller.
5. Model Extensions
In the previous sections, we consider a connection between a mobile terminal and a fixed endpoint. In this
section we extend the above model to a connection between two mobile terminals and take into
consideration other QoS constraints.
5.1 Extension to Mobile-to-Mobile Connection
The problem formulation for mobile-to-mobile connection is similar to that of mobile-to-fixed
connection. Consider mobile terminals 1 and 2 communicating with each other via a wireless ATM
network. Each mobile terminal has its own movement pattern. A path extension is performed when there
is an inter-switch handoff, (initiated from either side), followed by a path optimization if necessary. In
this formulation, the state space needs to include the locations of the two endpoints, as well as the
information of which mobile terminal initiates the path extension. For each state (i, j_1, j_2, k, J) ∈ S, i
denotes the location of the target switch; j_1 and j_2 denote the locations of the anchor switches connected
to mobile terminals 1 and 2, respectively; k denotes the number of links of the current path; and J
denotes the identifier of the mobile terminal which initiates the inter-switch handoff.
Since the movement pattern of each mobile user is different, the time between inter-switch handoffs
for each mobile user is also different. Suppose the time between inter-switch handoffs for mobile terminal
r, (r ∈ {1, 2}), is exponentially distributed with rate λ_r; then the time between decision epochs is also
exponentially distributed with rate λ_1 + λ_2.
Since the state space has changed, the cost functions and the state transition probability function have
to be modified accordingly. As the modification is conceptually similar to the functions derived in Section
2, the details are omitted. The optimality equations have the same form as (12); since the time between
decision epochs is exponentially distributed with rate λ_1 + λ_2, the expected discount factor becomes
I_2 = (λ_1 + λ_2) / (λ_1 + λ_2 + μ).
The value iteration algorithm described in Section 3.1 can be used to evaluate the expected total cost and
the optimal policy. The conditions for the optimal policy with a threshold structure can also be derived.
5.2 Extension to QoS Constraints
In Sections 2 and 3, path optimization is triggered based on the number of links of the current path. In
general, a mobile connection can have multiple QoS constraints such as bandwidth, delay, delay jitter,
etc. Suppose the connection has to maintain a delay constraint. In this case path optimization is
performed with certainty if the end-to-end delay after path extension exceeds the delay constraint, while
path optimization may be performed if the delay after path extension is still below the constraint.
To incorporate the delay constraint into the model, the state space needs to be extended to include the
end-to-end delay of the current path. We assume the end-to-end delay of a path is the sum of the delay on
each link of the path. The delay information on each link can be obtained from the network by
measurement. Let z denote the end-to-end delay of the current path and Y be the delay constraint. Let
F(i, j) denote the delay of the path between the two endpoints i and j. The optimality equations
described in Section 3 then include the constraint z + F(i, j) ≤ Y, where i and j denote the locations
of the target and anchor switches, respectively. Note that the value iteration algorithm described in Section 3.1
cannot be used to solve the optimality equations with constraints. However, the optimality equations can
be transformed into primal or dual linear programs, which can then be solved by the simplex algorithm.
Due to the space limitations, please refer to [14] for the details of the transformation.
In summary, multiple QoS constraints can be incorporated into the model by extending the state space
and including the constraint equations into the set of optimality equations. The expected total cost and the
optimal policy can be obtained by transforming the model into a linear programming model.
6. Numerical Results and Discussions
In this section, we compare the performance of the optimal policy with four heuristics. For the first
heuristic, path optimization is performed after each path extension. We denote this policy as always
perform PO or d^PO. For the second heuristic, no path optimization is performed during the connection
lifetime. We denote this policy as never perform PO or d^NPO. For the third heuristic, periodic path
optimization is considered. The use of periodic path optimization has been proposed within the ATM
Forum [22]. For periodic path optimization, after each fixed time period, the network determines if the
connection requires path optimization. Path optimization is performed if an inter-switch handoff has
occurred during the time interval. We assume this fixed time interval is equal to the average time between
inter-switch handoffs. For the last heuristic, we consider the Bernoulli path optimization scheme which
we proposed and analyzed in [23]. For the Bernoulli scheme, path optimization is performed with
probability p_opt after each extension.
The performance metrics are the expected total cost per call and the expected number of path
optimizations per call. The expected total cost per call is defined in Section 2. The expected number of
path optimizations per call given policy π with initial state s is:
N^π(s) = E_s^π [ Σ_n 1[a_n = PO] ],   (18)
where 1[·] denotes the indicator function (i.e., 1[A] = 1 if A is true, and 0 otherwise) and a_n is the action chosen at the n-th decision epoch.
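Both performance metrics can also be estimated by simulating calls under a given policy. The sketch below does this for a Bernoulli-type decision rule with an exponential call duration and exponential handoff times; the numeric cost parameters and the links_added / optimal_links helpers are illustrative assumptions, not the values used in Section 6.

```python
# Monte Carlo sketch: average total cost and average number of path
# optimizations per call under a Bernoulli PO rule (finite-horizon view of (1)).
import random

def simulate_call(mu, lam, p_opt, C_link=1.0, C_PE=1.0, w_PE=0.5, C_PO=5.0, w_PO=0.5,
                  links_added=lambda: random.randint(1, 3), optimal_links=lambda: 3):
    duration = random.expovariate(mu)            # exponential call duration
    t, k = 0.0, optimal_links()
    cost, n_po = 0.0, 0
    while True:
        dt = random.expovariate(lam)             # time to next inter-switch handoff
        if t + dt >= duration:                   # call terminates before next handoff
            cost += C_link * k * (duration - t)
            return cost, n_po
        cost += C_link * k * dt
        t += dt
        m = links_added()                        # path extension
        cost += C_PE + w_PE * m
        k += m
        if random.random() < p_opt:              # Bernoulli path optimization
            n = optimal_links()
            cost += C_PO + w_PO * max(k - n, 0)
            k, n_po = n, n_po + 1

samples = [simulate_call(mu=0.01, lam=0.02, p_opt=0.5) for _ in range(10_000)]
avg_cost = sum(c for c, _ in samples) / len(samples)
avg_po = sum(n for _, n in samples) / len(samples)
```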
6.1 Simulation Model
In the simulation model, a wireless ATM network is modeled as a non-hierarchical random graph.
Random graphs have been used to model ATM networks [24]. Different variations of random graph
models have also been proposed to model the topology of the Internet [25][26]. The generation of a non-hierarchical
random graph consists of the following steps [25]:
1. N nodes are randomly distributed over a rectangular coordinate grid. Each node is placed at a location
with integer coordinates. A minimum distance is specified so that a node is rejected if it is too close to
another node. The Euclidean metric is used to calculate the distance a(i, j)between each pair of
nodes (i, j).
2. A fully connected graph is constructed with the link weight equal to the Euclidean distance.
3. Based on the fully connected graph, a minimum weight spanning tree is constructed.
4. To achieve a specified average node degree1 of the graph, edges are added one at a time with increasing
distance.
If nodes i and j are connected, then the link weight, denoted as w_{i,j}, is assumed to be equal to:
w_{i,j} = a(i, j) + β,   (19)
where β is a uniformly distributed random variable in the range 0 ≤ β ≤ b. In (19), the first term can be
interpreted as the propagation delay of the link, and the second term approximately models the queueing
delay of the link.
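A sketch of this graph-generation procedure, using NetworkX, is shown below. The grid size, minimum node separation, average degree and b correspond to the example values quoted in the next paragraph, while the exact tie-breaking details of the original generator are assumptions.

```python
# Sketch of the non-hierarchical random graph generator (steps 1-4 plus (19)).
import math, random, itertools
import networkx as nx

def random_topology(n_nodes=20, grid=100, min_dist=15, avg_degree=3, b=100):
    # 1. Scatter nodes on an integer grid, rejecting nodes that are too close.
    pos = []
    while len(pos) < n_nodes:
        p = (random.randint(0, grid), random.randint(0, grid))
        if all(math.dist(p, q) >= min_dist for q in pos):
            pos.append(p)
    # 2. Fully connected graph weighted by the Euclidean distance a(i, j).
    full = nx.Graph()
    for u, v in itertools.combinations(range(n_nodes), 2):
        full.add_edge(u, v, dist=math.dist(pos[u], pos[v]))
    # 3. Start from a minimum weight spanning tree.
    g = nx.minimum_spanning_tree(full, weight="dist")
    # 4. Add edges in order of increasing distance until the average degree is met.
    target_edges = avg_degree * n_nodes // 2
    for u, v, d in sorted(full.edges(data="dist"), key=lambda e: e[2]):
        if g.number_of_edges() >= target_edges:
            break
        if not g.has_edge(u, v):
            g.add_edge(u, v, dist=d)
    # Link weights according to (19): propagation plus random "queueing" delay.
    for u, v in g.edges():
        g[u][v]["weight"] = g[u][v]["dist"] + random.uniform(0, b)
    return g, pos
```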
A 20-node random graph generated from the above model is shown in Figure 4. The size of the
rectangular coordinate grid is 100 × 100. The minimum distance between any two nodes is 15. The
average node degree of the graph is 3. The value of b is 100. Each node represents an ATM switch and
each edge represents a physical link connecting two switches. Since we are only concerned about inter-switch
handoff, base stations are not included in the model.
Based on the above network model, we obtain the adjacency matrix of the network as well as the
number of links of the shortest path between any two nodes. We assume the number of links of the
shortest path estimated by the source is deterministic. The call duration is assumed to be exponential. The
time between inter-switch handoffs follows a Gamma distribution. When there is an inter-switch handoff,
we assume each of the neighboring switches has the same probability to be the target switch (i.e., uniform
distribution).
For each source and destination pair, the value iteration algorithm described in Section 3.1 is used to
determine the minimum expected total cost and the optimal policy. From the optimal policy, the value
iteration algorithm is used again to calculate the expected number of path optimizations by solving (18).
The minimum expected total cost and the expected number of path optimizations are then averaged over
all possible source and destination pairs. We repeat this for 100 random graphs and determine the
averages.
PO NPO
For the two heuristic policies d and d , the expected total cost and the expected number of path
optimizations for each source and destination pair are also determined by the value iteration algorithm.
These values are then averaged over all possible source and destination pairs. Again, we repeat this for
100 random graphs and determine the averages.
For the periodic and Bernoulli path optimization policies, simulation must be used. Given the network
topology, a call is generated with two nodes chosen as the source and destination. Dijkstra's algorithm is
used to compute the shortest path between these two nodes. The destination node is assumed to be
1 The average node degree is defined as the average number of links connected to a node.
stationary. The source node becomes the anchor switch of the mobile connection. During each inter-switch
handoff, the target switch is restricted to be one of the neighboring switches of the current anchor
switch. Path extension is used to extend the connection from the anchor switch to the target switch. Path
optimization is performed periodically for the periodic scheme. For the Bernoulli scheme, path
optimization is performed with probability p_opt after each extension. For each source and destination
pair, simulation runs are performed. The average total cost and the average number of path
optimizations per call are determined. We repeat this for 100 random graphs and determine the averages.
All the cost functions are assumed to be linear. The link cost function f(k) = C_link k, where
C_link > 0. The term C_link captures the bandwidth used by the connection. Different C_link can be assigned
for different traffic classes. The variable cost function for path extension is h_PE(m) = w_PE m, where
w_PE > 0 and m denotes the number of links increased during path extension. The variable cost function
for path optimization is h_PO(l) = w_PO l, where w_PO > 0 and l denotes the number of links reduced during
path optimization.
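These linear cost functions translate directly into code; the sketch below parameterizes them so that different traffic classes can be assigned different C_link, w_PE and w_PO. The numeric defaults are placeholders, not the values used in the experiments.

```python
# Linear cost functions of Section 6; numeric defaults are placeholders.
def make_cost_functions(C_link=1.0, w_PE=0.5, w_PO=0.5):
    f = lambda k: C_link * k          # link cost rate for a k-link path
    h_PE = lambda m: w_PE * m         # variable path-extension signaling cost
    h_PO = lambda l: w_PO * l         # variable path-optimization signaling cost
    return f, h_PE, h_PO
```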
6.2 Results
Figure 5 shows the expected total cost versus the link cost rate C_link. The optimal policy gives the lowest
expected total cost compared to the other four heuristics. When the link cost rate is small, there is no
incentive to perform path optimization. The operating point, p_opt, for the Bernoulli policy is close to
zero. The optimal policy is to perform path extension only. Thus, results of the Bernoulli, d^NPO, and
optimal policies are the same. When the link cost rate increases, the optimal policy for some source and
destination pairs is to perform path optimization. Results of the optimal, Bernoulli, and d^NPO policies
diverge, while the results of the Bernoulli and d^PO policies begin to converge.
Figure 6 shows the expected number of path optimizations versus the link cost rate C_link. Since no
path optimization is performed for the d^NPO policy, the expected number of path optimizations is always
equal to zero. Note that since both the call termination rate and the inter-switch handoff rate are constant,
in this case the expected number of inter-switch handoffs is also a constant. Thus, results for the periodic
and d^PO policies are independent of the link cost rate. For the Bernoulli and optimal policies, when the
link cost rate is small, there is no incentive to perform path optimization. The expected number of path
optimizations is small. As the link cost rate increases, some source and destination pairs perform path
optimization after inter-switch handoff. Thus, there is an increase in the number of path optimizations
performed.
Figure 7 shows the expected total cost versus the inter-switch handoff rate λ. The expected total cost
increases as the inter-switch handoff rate increases. When λ is small (i.e., the average time between inter-switch
handoffs is larger than the average call duration), an inter-switch handoff is unlikely to occur
during the connection lifetime. Thus, the results between the five policies are close. As the inter-switch
handoff rate increases, these five curves begin to diverge. The d^PO policy gives the highest expected total
cost, which is followed by the periodic, d^NPO, and Bernoulli policies. Results of the d^NPO and Bernoulli
policies are very close. Although we can conclude that the expected total cost increases in λ and the
optimal policy always gives the minimum expected total cost, the performance comparisons between the
other four heuristics differ when another set of parameters is chosen. That is, the d^PO policy can
sometimes have a better performance than the periodic and d^NPO policies.
Figure 8 shows the expected number of path optimizations versus the inter-switch handoff rate λ. The
expected number of path optimizations increases as λ increases. Results of the Bernoulli and optimal
policies are quite close. Due to the threshold structure of the optimal policy, path optimization is
performed only after a certain number of inter-switch handoffs. Thus, the expected number of path
optimizations for the optimal policy is smaller than the periodic and d^PO policies.
Figure 9 shows the expected total cost versus the call termination rate μ. The expected total cost
decreases as the call termination rate increases, which is intuitive since the link cost is accrued
continuously during the call lifetime. When μ is large (i.e., the call duration is short), all the connections
experience a small number of inter-switch handoffs. Thus, the results of all these policies are close. When
the call duration increases, the results begin to diverge. We can see a significant cost difference between
the optimal policy and the other heuristics when the call duration is long (i.e., μ is small).
Figure 10 shows the expected number of path optimizations versus the call termination rate μ. The
expected number of path optimizations decreases as μ increases. Due to the threshold structure of the
optimal policy, path optimization is performed only after a certain number of path extensions. Thus, the
expected number of path optimizations performed for the optimal policy is much smaller than the
periodic and d^PO policies.
In the previous results, we assume the time between inter-switch handoffs follows a Gamma
distribution. We also consider exponential and hyper-exponential distributions for the time between inter-switch
handoffs. For a fair comparison, the average time between inter-switch handoffs is the same for
various distributions. Figures 11 and 12 show the minimum expected total cost of the optimal policy
versus λ and C_link, respectively. These results indicate that the expected total cost is relatively insensitive
to the distributions of the time between inter-switch handoffs.
6.3 Sensitivity Analysis
In order to calculate the minimum expected cost, the optimal policy table needs to be determined. The
policy obtained depends on the values of different parameters (e.g., λ, μ, C_link, and C_PO). Although the
parameters C_link and C_PO can be determined by the network, the values of λ and μ may not always be
estimated correctly by the mobile terminal during call setup. If that is the case, the optimal policy may not
indeed be the optimal one. In this section, we are interested in determining the percentage change of the
expected cost per call to the variation of the average call duration and the average time between inter-switch
handoffs. The procedures for the sensitivity analysis consist of the following steps:
1. Given the actual call termination rate μ and other cost and mobility parameters, we first determine the
minimum expected total cost, denoted as Cost (optimal).
2. Let μ^ denote the estimated call termination rate and Δ denote the percentage change of the average
call duration. These parameters are related by Δ = (1/μ^ − 1/μ) / (1/μ) × 100%.
Based on the estimated call termination rate μ^ and other parameters, the sub-optimal policy is
determined. From this sub-optimal policy and the actual cost and mobility parameters (i.e., λ, μ, etc.), the
sub-optimal expected total cost, denoted as Cost (sub-optimal), is computed.
3. The change in the expected total cost with respect to the variation of the call duration is characterized
by the cost ratio, which is defined as: Cost (sub-optimal) / Cost (optimal). A sketch of this procedure is given after this list.
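The compact sketch below implements these three steps. It reuses the hypothetical value_iteration helper sketched in Section 3.1 and assumes a policy-evaluation routine evaluate_policy and a model builder build_model, neither of which is defined in the paper.

```python
# Sensitivity of the expected cost to a mis-estimated call termination rate.
# value_iteration, evaluate_policy and build_model are assumed helpers.
def cost_ratio(mu_true, mu_est, build_model, evaluate_policy):
    """build_model(mu) -> (states, actions, cost, trans, I2) for a given rate."""
    model_true = build_model(mu_true)
    _, d_opt = value_iteration(*model_true)            # step 1: optimal policy
    _, d_sub = value_iteration(*build_model(mu_est))   # step 2: policy from estimate
    cost_opt = evaluate_policy(d_opt, *model_true)     # both costs evaluated with
    cost_sub = evaluate_policy(d_sub, *model_true)     # the true parameters
    return cost_sub / cost_opt                         # step 3: cost ratio
```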
The results for different μ are shown in Figure 13. When the average call duration is over-estimated by
more than -40%, the cost ratio is almost equal to one, which implies that the optimal policy is insensitive
to the change of the average call duration. However, within the (-90, -50) percentage range, there is an
increase in the cost ratio. The cost ratio can be as high as 1.53. These results imply that if
there is uncertainty in estimating the average call duration, it may be better to over-estimate the value in
order to reduce the cost ratio difference.
We use the similar procedures described above to investigate the percentage change of the expected
total cost to the variation of the time between inter-switch handoffs. Figure 14 shows the cost ratio versus
the percentage change in average time between inter-switch handoffs for different λ. Within the
percentage range of interest, the cost ratio is always less than 1.07 (i.e., 7%). Within the (-50, 100)
percentage range, the cost ratio is less than 1.01 (i.e., 1%). These results imply that the optimal policy is
relatively insensitive to the change of the average time between inter-switch handoffs.
6.4 Discussions
In our simulation studies, we found the value iteration algorithm to be very efficient and stable. The
number of iterations is quite predictable from point to point, changing slowly as the independent
parameter changes. In general, the number of iterations to convergence does not depend on the cost
parameters (C_link, C_PO, C_PE), but depends on the values of λ and μ. As an example, for the optimal
policy in Figure 9, the value iteration algorithm required only 24 iterations to converge for one value of μ,
but it required 170 iterations for another. Note that there are other iteration algorithms available (e.g.,
policy iteration algorithm) which have a higher rate of convergence. Interested readers can refer to [14]
for details.
In this paper, the wireless ATM network is modeled as a non-hierarchical random graph. One
question that arises is whether the results will differ if some other network topologies are being used. The
answer is affirmative. The relative performance between the four heuristics will change if another
network topology is being used. This is essentially the same as changing the values in the functions
p(m | i, j) or φ(i, j). However, the optimal policy will always give the lowest expected total cost
compared to the other four heuristics.
7. Conclusions
In this paper, we have addressed the issue of when to initiate path optimization for the two-phase handoff
protocol. The path optimization problem is formulated as a semi-Markov decision process. A link cost
function is used to reflect the network resources utilized by a connection. Signaling cost function is used
to capture the signaling and processing load incurred on the network. The time between inter-switch
handoffs follows an arbitrary general distribution. When an inter-switch handoff occurs, based on the
current state information, the network controller decides whether to perform path optimization after path
extension.
We have presented the value iteration algorithm which determines the expected total cost and the
optimal policy. Under certain conditions, we have shown the existence of an optimal policy which has a
threshold structure. That is, path optimization is always performed when the number of links of the path
is greater than a certain threshold. The threshold structure of the optimal policy facilitates the
implementation. When an inter-switch handoff occurs, the decision of performing path optimization can
be made by a simple table lookup.
The performance of the optimal policy has been compared with four heuristics. Simulation results
indicate that the optimal policy gives a lower expected cost per call than those heuristics. These results
imply that by using the optimal policy, the mobile connection maintains a good balance between the
network resources utilized and the signaling load incurred on the network during its connection lifetime.
We have also performed sensitivity analysis for the optimal policy with respect to the variation of the
average call duration and the average time between inter-switch handoffs. Results indicate that the
optimal policy is relatively insensitive to the change of the average time between inter-switch handoffs. If
there is uncertainty in estimating the average call duration, it may be better to over-estimate the value in
order to reduce the cost ratio difference.
Future work includes extending the proposed model to analyze the (1) mobile-to-mobile connection
scenario; (2) multicast connection in which a group of mobile users are communicating with each other;
and (3) path optimization problem with QoS constraints. Although the proposed model captures the
trade-off between the network resources used and the handoff processing and signaling load incurred on
the network, the model is not without drawbacks. In our formulation, the call duration is exponentially
distributed. Although the exponential distribution is valid for voice traffic, this may not be appropriate for
multimedia applications [27][28]. This also points to the need for new analytical models for general call
durations.
Appendix
Proposition 1: Assume the cost, transition probabilities, and sojourn times are time homogeneous. If the
termination time of a finite-horizon semi-Markov decision process is exponentially distributed with mean
1/μ, then it is equivalent to an infinite-horizon semi-Markov decision process with discount rate μ. That
is, we must show that (1) is equivalent to (21), where
c(s, a) = b(s, a) + E_s^a [ (1 − e^{−μτ}) / μ ] f(s),
and τ denotes the time until the next decision epoch.
Proof: For clarity, we will analyze the three terms in (1) separately. Let
where
For the first term in (1),
dt.
If m represents the last decision epoch before termination, then
By interchanging the order of summation, we have
dt.
p-msn
For the second term in (1), we have
dt.
By interchanging the order of summation, we obtain,
For the third term in (1),
pT
If n represents the last decision epoch before termination, then
ts f(Xn)dtme-mtdt .
By interchanging the order of integration, we have,
Substitute (22)-(24) into (21), we obtain:
Recall c(s, a)denotes the expected total cost between two decision epochs, given that the system
occupies state s at the first decision epoch and that the decision maker chooses action a in state s. Since
the cost, transition probabilities, and sojourn times are assumed to be time homogeneous,
c(s, a) = b(s, a) + E_s^a [ ∫_0^{τ} e^{−μt} f(s) dt ].   (26)
Substituting (26) into (25), we obtain (21).
The discrete-time version of this result can be found in Chapter 5 of [14].
For stationary deterministic policy d :
d d-mt1 d
-mt d
Lemma 1: For each state (i, j, k) ∈ S, the expected total cost v(i, j, k) is a nondecreasing function with
respect to the number of links k.
Proof: The proof of this lemma is by induction. We must show v(i, j, k) ≤ v(i, j, k + 1). Recall that for K ≤ k ≤ L,
v(i, j, k) = c(i, j, k, PO) + I_2 Σ_l Σ_n v(l, i, n) p(n | i, D) q(l | i).
From (9) and (10), it is clear that c(i, j, k, a) ≤ c(i, j, k + 1, a) for all k. Hence,
v(i, j, k) ≤ c(i, j, k + 1, PO) + I_2 Σ_l Σ_n v(l, i, n) p(n | i, D) q(l | i) = v(i, j, k + 1).
Thus, v(i, j, k) ≤ v(i, j, k + 1) for K ≤ k < L. For k = K − 1,
v(i, j, K − 1) ≤ c(i, j, K − 1, PO) + I_2 Σ_l Σ_n Σ_m v(l, i, n) p(n | i, D) p(m | i, j) q(l | i)
= c(i, j, K − 1, PO) + I_2 Σ_l Σ_n v(l, i, n) p(n | i, D) q(l | i)
≤ c(i, j, K, PO) + I_2 Σ_l Σ_n v(l, i, n) p(n | i, D) q(l | i)
= v(i, j, K).
For i ≠ j and 1 ≤ k < K − 1, we need to show v(i, j, k) ≤ v(i, j, k + 1). From (14),
v(i, j, k) = min { c(i, j, k, NPO) + I_2 Σ_l Σ_m v(l, i, k + m) p(m | i, j) q(l | i),
                   c(i, j, k, PO) + I_2 Σ_l Σ_n Σ_m v(l, i, n) p(n | i, D) p(m | i, j) q(l | i) }.
Let a* denote the optimal action of state (i, j, k + 1). If a* = PO, then
v(i, j, k) ≤ c(i, j, k + 1, PO) + I_2 Σ_l Σ_n Σ_m v(l, i, n) p(n | i, D) p(m | i, j) q(l | i) = v(i, j, k + 1).
On the other hand, if a* = NPO, then
v(i, j, k) ≤ c(i, j, k + 1, NPO) + I_2 Σ_l Σ_m v(l, i, k + 1 + m) p(m | i, j) q(l | i) = v(i, j, k + 1),
where the inequality uses c(i, j, k, a) ≤ c(i, j, k + 1, a) and the induction hypothesis.
To complete the proof, we need to show v(j, j, k) ≤ v(j, j, k + 1) for 1 ≤ k < K. From (13),
v(j, j, k) = c(j, j, k, NPO) + I_2 Σ_l v(l, j, k) q(l | j)
≤ c(j, j, k + 1, NPO) + I_2 Σ_l v(l, j, k + 1) q(l | j)
= v(j, j, k + 1).
Thus, by the principle of induction, for each state (i, j, k) ∈ S, the expected total cost v(i, j, k) is a
nondecreasing function in k.
Proposition 2: For (i, j, k) ∈ S, there exists an optimal policy d* that has a control limit (or threshold)
structure:
d*(i, j, k) = NPO,  1 ≤ k < k*,
d*(i, j, k) = PO,   k* ≤ k ≤ L,
when I_1 Δf(k + m) − Σ_n Δh_PO(k + m − n) p(n | i, D) ≥ 0 for all m such that p(m | i, j) > 0.
Proof: Let
r(i, j, k) = I_1 Σ_m f(k + m) p(m | i, j) + I_2 Σ_l Σ_m v(l, i, k + m) p(m | i, j) q(l | i) minus the corresponding terms of the PO branch of (14).
Thus, the action PO is chosen if r(i, j, k) ≥ 0 and the action NPO is chosen if r(i, j, k) < 0. Let k^ be the
smallest k such that r(i, j, k) ≥ 0. For convenience, let Δr(i, j, k^) = r(i, j, k^ + 1) − r(i, j, k^).
Since v(i, j, k) is a nondecreasing function in k, Δv(l, i, k^ + m) ≥ 0. Thus, Δr(i, j, k^) ≥ 0 when
I_1 Δf(k^ + m) − Σ_n Δh_PO(k^ + m − n) p(n | i, D) ≥ 0 for all m such that p(m | i, j) > 0.
Then
Δr(i, j, k) = I_2 Σ_l Σ_m Δv(l, i, k + m) p(m | i, j) q(l | i) + I_1 Σ_m Δf(k + m) p(m | i, j) − Σ_m Σ_n Δh_PO(k + m − n) p(n | i, D) p(m | i, j).
Since v(i, j, k) is a nondecreasing function in k, Δv(l, i, k + m) ≥ 0. Thus, Δr(i, j, k) ≥ 0 whenever
(k + m) ≤ (K − 1) and
I_1 Δf(k + m) − Σ_n Δh_PO(k + m − n) p(n | i, D) ≥ 0
for all m such that p(m | i, j) > 0. Thus, by induction, the optimal policy has a threshold structure when
I_1 Δf(k + m) − Σ_n Δh_PO(k + m − n) p(n | i, D) ≥ 0 for all m such that p(m | i, j) > 0.
Acknowledgments
The authors would like to thank the anonymous reviewers as well as Martin Puterman and Henry Chan
for their comments on an earlier draft of this paper.
--R
Special Issue on Wireless ATM
Discrete Stochastic Dynamic Programming.
Introduction to Stochastic Dynamic Programming.
The ATM Forum Technical Committee
The ATM Forum Wireless ATM Working Group
--TR
A quantitative comparison of graph-based models for Internet topology
Performance evaluation of connection rerouting schemes for ATM-based wireless networks
Performance evaluations of path optimization schemes for inter-switch handoffs in wireless ATM networks
Route optimization in mobile ATM networks
QoS routing in networks with inaccurate information
Performance evaluation of path optimization schemes for inter-switch handoff in wireless ATM networks
Markov Decision Processes
--CTR
Dilek Karabudak , Chih-Cheng Hung , Benny Bing, A call admission control scheme using genetic algorithms, Proceedings of the 2004 ACM symposium on Applied computing, March 14-17, 2004, Nicosia, Cyprus
Shilpa Yogesh Tisgaonkar , Chih-Cheng Hung , Benny Bing, Intelligent handoff management with interference control for next generation wireless systems, Proceedings of the 43rd annual southeast regional conference, March 18-20, 2005, Kennesaw, Georgia | path optimization;inter-switch bandoff;connection rerouting;wireless ATM |
383741 | A recursive formulation of Cholesky factorization of a matrix in packed storage. | A new compact way to store a symmetric or triangular matrix called RPF for Recursive Packed Format is fully described. Novel ways to transform RPF to and from standard packed format are included. A new algorithm, called RPC for Recursive Packed Cholesky, that operates on the RPF format is presented. Algorithm RPC is based on level-3 BLAS and requires variants of algorithms TRSM and SYRK that work on RPF. We call these RP_TRSM and RP_SYRK and find that they do most of their work by calling GEMM. It follows that most of the execution time of RPC lies in GEMM. The advantage of this storage scheme compared to traditional packed and full storage is demonstrated. First, the RPF storage format uses the minimal amount of storage for the symmetric or triangular matrix. Second, RPC gives a level-3 implementation of Cholesky factorization whereas standard packed implementations are only level 2. Hence, the performance of our RPC implementation is decidedly superior. Third, unlike fixed block size algorithms, RPC requires no block size tuning parameter. We present performance measurements on several current architectures that demonstrate improvements over the traditional packed routines. Also SMP parallel computations on the IBM SMP computer are made. The graphs that are attached in Section 7 show that the RPC algorithms are superior by a factor between 1.6 and 7.4 for order around 1000, and between 1.9 and 10.3 for order around 3000 over the traditional packed algorithms. For some architectures, the RPC performance results are almost the same or even better than the traditional full-storage algorithms results. | Introduction
A very important class of linear algebra problems is the one in which the coefficient
matrix A is symmetric and positive definite [5, 11, 23]. Because of the
symmetry it is only necessary to store either the upper or lower triangular part
of the matrix.
Figure 1: The mapping of a 7 × 7 matrix for the LAPACK Cholesky algorithm using full storage (LDA = 7 in Fortran 77); lower and upper triangular cases are shown.
Figure 2: The mapping of a 7 × 7 matrix for the LAPACK Cholesky algorithm using packed storage; lower and upper triangular cases are shown.
1.1 LAPACK POTRF and PPTRF subroutines
The LAPACK library [3] offers two different kinds of subroutines to solve the same problems; for instance, POTRF (1) and PPTRF both factorize symmetric, positive definite matrices by means of the Cholesky algorithm. The only difference is the way the triangular matrix is stored (see figures 1 and 2).
(1) Four names SPOTRF, DPOTRF, CPOTRF and ZPOTRF are used in LAPACK for real symmetric and complex Hermitian matrices [3], where the first character indicates the precision and arithmetic version: S -- single precision, D -- double precision, C -- complex and Z -- double complex. LAPACK95 uses one name LA_POTRF for all versions [7]. POTRF and/or PPTRF express, in this paper, any precision, any arithmetic and any language version of the PO and/or PP matrix factorization algorithms.
In the POTRF case the matrix is stored in one of the lower left or upper right
triangles of a full square matrix [16, page 64] (2), the other triangle is wasted (see
figure 1). Because of the uniform storage scheme, blocking and level 3 BLAS [8]
subroutines can be employed, resulting in a high speed solution.
In the PPTRF case the matrix is kept in packed storage ([1], [16, page 74,
75]), which means that the columns of the lower or upper triangle are stored
consecutively in a one dimensional array (see figure 2). Now the triangular
matrix only occupies the strictly necessary storage space but the nonuniform
storage scheme means that use of full storage BLAS is impossible and only the
level 2 BLAS[20, 9] packed subroutines can be employed, resulting in a low
speed solution.
To summarize, there is a choice between high speed with waste of memory
versus low speed with no waste of memory.
1.2 A new Way of Storing Real Symmetric and Complex
Hermitian and, in either case, Positive Definite Matrices
Together with some new recursively formulated linear algebra subroutines, we
propose a new way of storing a lower or upper triangular matrix that solves this
dilemma[14, 24]. In other words we obtain the speed of POTRF with the amount
of memory used by PPTRF. The new storage scheme is named RPF, recursive
packed format (see figure 4), and it is explained below.
The benefit of recursive formulations of the Cholesky factorization and the
LU decomposition is described in the works of Gustavson [14] and Toledo [22].
The symmetric, positive definite matrix in the Cholesky case is kept in full
matrix storage, and the emphasis in these works is the better data locality,
and thus better utilization of the computer's memory hierarchy, that recursive
formulations offer. However, the recursive packed formulation also has this
property.
We will provide a very short introduction on a computer memory hierarchy
and the Basic Linear Algebra Subprograms (BLAS) before describing the
Recursive Packed Cholesky (RPC) and the Recursive Packed Format (RPF).
1.3 The Rationale behind introducing our New Recursive
Algorithm, RPC and the New Recursive Data Format,
RPF
Computers have several levels of memory. The
ow of data from the memory
to the computational units is the most important factor governing performance
of engineering and scientic computations. The object is to keep the functional
units running at their peak capacity. Through the use of a memory hierarchy
In Fortran column major, in C row major.
system (see gure 3), high performance can be achieved by using locality of
reference within a program. In the present context this is called blocking.
Registers
Cache Level 1
Cache Level 2
Cache Level 3
Shared Memory
Distributed Memory
Secondary Storage 1
Secondary Storage 2
Faster,
smaller,
more
expensive
Slower,
larger,
cheaper
Figure
3: A computer memory hierarchy
At the top of the hierarchy is a Central Processing Unit (CPU). It communicates
directly with the registers. The number of the registers is usually very
small. A Level 1 cache is directly connected to the registers. The computer
will run with almost peak performance if we are able to deliver the data to the
(level 1) cache in such way that the CPU is permanently busy. There are
several books describing problems associated with the computer memory hier-
archy. The literature in [10, 5, 11] is adequate for Numerical Linear Algebra
specialists.
The memories near the CPU (registers and caches) have a faster access to
CPU than the memories further away. The fast memories are very expensive
and this is one of the reason that they are small. The register set is tiny. Cache
memories are much larger than the set of registers. However, L1 cache is still
not large enough for solving scientic problems. Even a subproblem like matrix
factorization does not t into cache if the order of the matrix is large.
A special set of Basic Linear Algebra Subprograms (BLAS) have been developed
to address the computer memory hierarchy problem in the area of Numerical
Linear Algebra. The BLAS are documented in [20, 9, 8, 6]. BLAS are
very well summarized and explained for Numerical Linear Algebra specialists
in [10, 5].
There are three levels of BLAS: Level 1 BLAS shows vector vector opera-
tions, Level 2 BLAS shows vector matrix (and/or matrix vector) operations,
and Level 3 BLAS shows matrix matrix operations.
For Cholesky factorization one can make the following three observations
with respect to the BLAS.
1. Level 3 implementations using full storage format run fast.
2. Level 3 implementations using packed storage format rarely exist. A level 3
implementation was previously used in [16], however, at great programming
cost. Conventional Level 2 implementations using packed storage
format run, for large problem sizes, considerably slower than the full storage
implementations.
3. Transforming conventional packed storage to RPF and using our RPC
algorithm produces a Level 3 implementation using the same amount of
storage as packed storage.
1.4 Overview of the Paper
Section 2 describes the new packed storage data format and the data transformations
to and from conventional packed storage. Section 2.1 describes conventional
lower and upper triangular packed storage. Section 2.2 discusses how
to transform in place either a lower or upper trapezoid packed data format to
recursive packed data format and vice versa. Section 2.3 describes the possibility
to transpose the matrix while it is reordered from packed to recursive
packed format and vice versa. Finally, in Section 2.4 the recursive aspects of
the data transformation is described. These four subsections describe the in
place transformation pictorially via several gures.
In Sections 3.1 and 3.2, recursive TRSM and SYRK, both which work on
RPF, are described. Both routines do almost all their required
oating point
operations by calling level 3 BLAS GEMM. Finally, in Section 3.3, the RPC
algorithm is described in terms of using the recursive algorithms of Sections 3.1
and 3.2. As in Section 2, all three algorithms are described pictorially via several
gures. Note that the RPC algorithm only uses one Level 3 BLAS subroutine,
namely GEMM. Usually the GEMM routine is very specialized, highly tuned
and done by the computer manufacturer. If not, the ATLAS[25] GEMM can be
used.
Section 4 explains that the RPC algorithm is numerically stable.
Section 5 describes performance graphs of the packed storage LAPACK[3]
algorithms and of our recursive packed algorithms on several computers; the
most typical computers like COMPACQ, HP, IBM SP, IBM SMP, INTEL Pen-
tium, SGI and SUN were considered (gures results
show that the recursive packed Cholesky factorization (RP PPTRF) and the
solution (RP PPTRS) are 4 { 9 times faster than the traditional packed sub-
routines. There are three more graphs. One demonstrates successful use of
OpenMP[17, 18] parallelizing directives (gure 18). The second graph shows
that the recursive data format is also eective for the complex arithmetic (g-
ure 19). The third one shows the performance of all three algorithms for the
factorization (POTRF, PPTRF and RP PPTRF) and the solution
(POTRS, PPTRS and RP PPTRS) (gure 20).
Section 6 discusses the most important developments in this paper.
2 The recursive packed storage
A new way to store triangular matrices in packed storage called recursive packed
is presented. This is a storage scheme by its own right, and a way to explain it,
is to describe the conversion from packed to recursive packed storage and vice
versa (see gures 2 and 4).
Lower triangular caseB
22
Upper triangular caseB
9 12 15
19 2022 24
26 271
Figure
4: The mapping of 7 7 matrix for the Cholesky Algorithm using the
recursive packed storage. The recursive block division is illustrated.
2.1 Lower and upper triangular packed storage
Symmetric, complex hermitian or triangular matrices may be stored in packed
storage form (see LAPACK Users' Guide [3], IBM ESSL Library manual[16,
pages 66{67] and gure 2). The columns of the triangle are stored sequentially
in a one dimensional array starting with the rst column. The mapping between
positions in full storage and in packed storage for a triangular matrix of size m
is,
A i;j
AP i+(j 1)(2m j)=2
'L'The advantage of this storage is the saving of almost half 4 the memory
compared to full storage.
3 For upper triangular and for lower triangular of A is stored.
4 At least m (m 1)=2. This formulae is a function of LDA (leading dimension of
and m in Fortran77. The saving in Fortran77 is m
2.2 Reordering of a lower and upper trapezoid
Packed storage
packed storage memory map
buer p(p 1)=2 words
Recursive packed storage memory map
Figure
5: Reordering of the lower packed matrix. First, the last p 1 columns
of the leading triangle are copied to the buer. Then, in place, the columns of
the accentuated rectangle are assembled in the bottom space of the trapezoid.
Last, the buer is copied back to the top of the trapezoid.
It is assumed that the matrices are stored in column major order, but the
concepts in the paper are fully applicable also if the matrices are stored in row
major order. As an intermediate step to transform a packed triangular matrix
to a recursive packed matrix, the matrix is divided into two parts along
a column thus dividing the matrix in a trapezoidal and a triangular part as
shown in g. 5 and 6. The triangular part remains in packed form, the
trapezoidal part is reordered so it consists of a triangle in packed form, and
a rectangle in full storage form. The reordering demands a buer of the size
of the triangle minus the longest column. The reordering in the lower case,
g. 5, takes the following steps. First the columns of the triangular part of
the trapezoid are moved to the buer (note that the rst column is in correct
place), then the columns of the rectangular part of the trapezoid are
moved into consecutive locations and nally the buer is copied back to the
correct location in the reordered array. If p in gure 5 is chosen to bm=2c
the rectangular submatrix will be square or deviate from a square only by a
single column. The buer size is p(p 1)=2 and the addresses of the lead-
8Packed storage
packed storage memory map
buer
(m p)(m p 1)=2 words
Recursive packed storage memory map
Figure
Reordering of the upper packed matrix. First, the rst
columns of the trailing triangle are copied to the buer. Then, in place, the
columns of the accentuated rectangle are assembled in the top space of the
trapezoid. Last, the buer is copied back to the bottom of the trapezoid.
ing triangle, the rectangular submatrix and the trailing triangle are given by,
After the reordering the leading and trailing triangles are both in the same
lower or upper packed storage scheme as the original triangular matrix. The
reordering can be implemented as subroutines,
subroutine TPZ TO TR(m; p; AP)
and
subroutine TR TO TPZ (m; p; AP)
where TPZ TO TR means the reordering of the trapezoidal part from packed
format to the triangular-rectangular format just described. TR TO TPZ is the
opposite reordering.
2.3 Transposition of the rectangular part
The rectangular part of the reordered matrix are now kept in full matrix storage.
If desired, this oers an excellent opportunity to transpose the matrix while it is
transformed to recursive packed format. If the rectangular submatrix is square
the transposition can be done completely in-place. If it deviates from a square by
a column, a buer of the size of the columns is necessary to do the transposition,
for this purpose we can reuse the buer used for the reordering.
2.4 Recursive application of the reordering
The method of reordering is applied recursively to the leading and trailing triangles
which are still in packed storage, until nally the originally triangular
packed matrix is divided in rectangular submatrices of decreasing size, all in
full storage. The implementation of the complete transformation from packed
to recursive packed format, P TO RP is (compare the gures 2 and 4),
recursive subroutine P TO RP(m;AP)
if (m > 1) then
call TPZ TO TR(m; p; AP)
call P TO RP(p; AP)
call P TO RP(m
and the inverse transformation from recursive packed to packed, RP TO P is,
recursive subroutine RP TO P(m;AP)
if (m > 1) then
call RP TO P(p; AP)
call TR TO TPZ (m; p; AP)
call RP TO P(m
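As an illustration of the recursive layout just described, the short Python/NumPy sketch below builds the RPF vector for the lower triangle of a dense symmetric matrix. It is a from-scratch illustration that produces the final ordering directly rather than reproducing the in-place, buffer-based reordering of TPZ_TO_TR and TR_TO_TPZ, and it assumes the split point p = floor(m/2) and the block order leading triangle, rectangle (column major), trailing triangle described above.

```python
import numpy as np

def dense_to_rpf(A):
    """Pack the lower triangle of a dense symmetric matrix A into a 1-D array in
       Recursive Packed Format: RPF(A11) ++ A21 (column major) ++ RPF(A22),
       with p = n // 2.  Illustration only, not the paper's in-place routine."""
    n = A.shape[0]
    if n == 1:
        return np.array([A[0, 0]], dtype=float)
    p = n // 2
    return np.concatenate([
        dense_to_rpf(A[:p, :p]),              # leading triangle, recursively
        A[p:, :p].ravel(order='F'),           # rectangle, column-major full storage
        dense_to_rpf(A[p:, p:]),              # trailing triangle, recursively
    ])
```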
The examples shown here concerns the lower triangular matrix, but the upper
triangular transformation, and the transformation with transposition follows
the same pattern. The gure 7 illustrates the recursive division of small lower
and upper triangular matrices.
Figure
7: The lower and upper triangular matrices, in recursive packed storage
data format, for 20. The rectangular submatrices, shown in the gures,
are kept in full storage in column major order, in the array containing the whole
matrices.
Recursive formulation of the Cholesky algorithm
and its necessary BLAS
Two BLAS[6] operations, the triangular solver with multiple right hand sides,
TRSM 5 and the rank k update of a symmetric matrix, SYRK are needed for the
recursive Cholesky factorization and solution, RP PPTRF 6 and RP PPTRS[2].
In this section RP TRSM, RP SYRK, RP PPTRF and RP PPTRS are formulated
recursively and their use of recursive packed operands are explained.
TRSM, SYRK, PPTRF and PPTRS operate in various cases depending of the
operands and the order of the operands. In the following we only consider single
specic cases, but the deduction of the other cases follows the same guidelines.
All the computational work in the recursive BLAS routines RP TRSM and
RP SYRK (and also RP TRMM) is done by the non recursive matrix-matrix
multiply routine GEMM[19, 25]. This is a very attractive property, since GEMM
usually is or can be highly optimized on most current computer architectures.
The GEMM operation is very well documented and explained in [12, 13, 6].
The speed of our computation depends very much from the speed of a good
GEMM. Good GEMM implementations are usually developed by computer
manufacturers. The model implementation of GEMM can be obtained from
5 On naming of TRSM, SYRK, HERK and GEMM see footnote of POTRF on page 1.
6 The prex RP indicates that the subroutine belongs to the Recursive Packed library, for
example RP PPTRF is the Recursive Packed Cholesky factorization routine.
netlib [6]; it works correctly but slowly. The Innovative Computing Laboratory
at the University of Tennessee in Knoxville developed an automatic system
called which usually can produce a very fast GEMM subroutine. Another
automatic code generator scheme for GEMM was developed at Berkeley[4].
In ESSL, see [1], GEMM and all other BLAS are produced via blocking and
high performance kernel routines. For example, ESSL produces a single kernel
routine, DATB, which has the same function as the ATLAS on chip GEMM
kernel. The principles underlying the production of both kernels are similar.
The major dierence is that ESSL's GEMM code is written by hand whereas
ATLAS' GEMM code is parametrized and run over all parameter settings until
a best parameter setting is found for the particular machine.
3.1 Recursive TRSM based on non-recursive GEMM
Fig. 8 shows the splitting of the TRSM operands. The operation now consists
of the three suboperations,
Based on this splitting, the algorithm can be programmed as follows,
recursive subroutine RP TRSM (m; n;
if (n == 1) then
do
do
else
call RP TRSM (m;
call GEMM ( 0 N
call RP TRSM (m; n
3.2 Recursive SYRK based on non-recursive GEMM
Fig. 9 shows the splitting of the SYRK operands. The operation now consists
of the three suboperations,
A 21
A 11
A 22
Figure
8: The recursive splitting of the matrices in the RP TRSM operation for
the case where SIDE=Right, UPLO=Lower and TRANSA=Transpose.
Based on this splitting, the algorithm can be programmed as follows,
recursive subroutine RP SYRK (m; n;
if (m == 1) then
do
do
else
call RP SYRK (p; n;
call GEMM ( 0 N
call RP SYRK (m
3.3 Recursive PPTRF and PPTRS based on recursive
TRSM and recursive SYRK
Fig. 10 shows the splitting of the PPTRF operand. The operation now consists
of four suboperations,
M-P
C 22
M-P
bC 22
M-P
A 11
A 21
M-P
Figure
9: The recursive splitting of the matrices in the RP SYRK operation
for the case where UPLO=Lower and TRANS=No transpose.
22 RP PPTRF
Based on this splitting the algorithm can be programmed as follows,
recursive subroutine RP_PPTRF (m, AP)
   if (m == 1) then
      AP(1) = sqrt(AP(1))
   else
      p = m/2
      call RP_PPTRF (p, A11)
      call RP_TRSM (m-p, p, 1, A11, A21)
      call RP_SYRK (m-p, p, -1, A21, 1, A22)
      call RP_PPTRF (m-p, A22)
   end if
end subroutine
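To show the whole factorization in one self-contained piece, here is a C sketch of the same recursion written against standard CBLAS calls and full column-major storage. It is our own illustration, not the paper's RP PPTRF: the recursive packed routines replace cblas_dtrsm and cblas_dsyrk by RP TRSM and RP SYRK and address blocks inside the packed array instead.

    #include <stddef.h>
    #include <math.h>
    #include <cblas.h>

    /* Recursive Cholesky factorization A = L*L^T of the lower triangle of
     * an n x n column-major matrix A.  Returns 0 on success, -1 if a
     * non-positive pivot is encountered.                                  */
    static int rpotrf_lower(int n, double *A, int lda)
    {
        if (n == 1) {
            if (A[0] <= 0.0) return -1;
            A[0] = sqrt(A[0]);
            return 0;
        }
        int p = n / 2;
        double *A11 = A;
        double *A21 = A + p;
        double *A22 = A + p + (size_t)p * lda;

        if (rpotrf_lower(p, A11, lda)) return -1;             /* A11 = L11 L11^T   */
        cblas_dtrsm(CblasColMajor, CblasRight, CblasLower,    /* L21 L11^T = A21   */
                    CblasTrans, CblasNonUnit,
                    n - p, p, 1.0, A11, lda, A21, lda);
        cblas_dsyrk(CblasColMajor, CblasLower, CblasNoTrans,  /* A22 -= L21 L21^T  */
                    n - p, p, -1.0, A21, lda, 1.0, A22, lda);
        return rpotrf_lower(n - p, A22, lda);                 /* A22 = L22 L22^T   */
    }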
The solution subroutine RP PPTRS performs consecutive triangular solutions
to the transposed and the non-transposed Cholesky factor. This routine
is not explicitly recursive, as it just calls the recursive RP TRSM twice.
4 Stability of the Recursive Algorithm
The paper [24] shows that the recursive Cholesky factorization algorithm is equivalent to the traditional algorithms in the books [5, 11, 23]. The whole theory
Figure 10: The recursive splitting of the matrix in the RP PPTRF operation for the case where UPLO=Lower.
of the traditional Cholesky factorization and BLAS (TRSM and SYRK) algorithms carries over to the recursive Cholesky factorization and BLAS (TRSM and SYRK) algorithms described in Section 3. The error analysis and stability of these algorithms is very well described in the book of Nicholas J. Higham [15]. The difference between the LAPACK algorithms PO, PP and RP 7 is how inner products are accumulated. In each case a different order is used. They are all mathematically equivalent, and stability analysis shows that any summation order is stable.
5 Performance results
Table 1: Computer names
   IBM 4-way PowerPC 604e SMP @ 332 MHz
   IBM Power2 @ 160 MHz
   SUN UltraSparc II @ 400 MHz
   SGI R10000 @ 195 MHz
   COMPAQ Alpha EV6 @ 500 MHz
   HP PA-8500 @ 440 MHz
   INTEL Pentium III @ 500 MHz
The new recursive packed BLAS (RP TRSM and RP SYRK) and the new recursive packed Cholesky factorization and solution routines (RP PPTRF and RP PPTRS) were compared to the traditional LAPACK subroutines, both with respect to the results and to the performance. The comparisons were made on seven different architectures, listed in Table 1. The result graphs are attached in the appendix of this paper. Double precision arithmetic in Fortran90 [21] was used in all cases.
7 full, packed and recursive packed.
Table 2: Computer library versions
   IBM-PPC: ESSL 3.1.0.0 (-lesslsmp)
   IBM-PW2: ESSL 2.2.2.0 (-lesslp2)
   SUN: Sun Performance Library 2.0 (-lsunperf)
   SGI: Standard Execution Environment 7.3 (-lblas)
   COMPAQ: DXML V3.5 (-ldxmp ev6)
   HP: HP-UX PA2.0 BLAS Library 10.30 (-lblas)
   INTEL: ATLAS 3.0
The following procedure was used in carrying out our performance tests.
On each machine the recursive and the traditional routines were compiled with the same compiler and compiler flags, and they call the same vendor-optimized, or otherwise optimized, BLAS library. The BLAS library versions can be seen in Table 2.
The compared recursive and traditional routines received the same input and produced the same output for each time measurement. The time spent in reordering the matrix to and from recursive packed format 8 is included in the run time for both RP PPTRF and RP PPTRS. For the traditional routines there was no data transformation cost.
The CPU time is measured by the timing function ETIME, except on the PowerPC machine, which is a 4-way SMP. On this machine the run time was measured as wall clock time by means of a special IBM utility function called RTC. Except for the operating system, no other programs were running during these test runs.
For each machine the timings were made for a sequence of matrix sizes ranging from 300 to 3000 in equal steps. In the case of the HP and Intel machines the matrix size starts at 500; we start there because the resolution of the ETIME utility was too coarse for smaller matrices. The number of right hand sides was taken to be nrhs = n/10. Due to memory limitations on the actual HP machine, this test series could only range up to 2500. The operation counts for Cholesky factorization and solution are approximately n^3/3 and 2 n^2 nrhs flops, respectively, where n is the number of equations and nrhs the number of right hand sides. These counts were used to convert run times to Flop rates.
8 However, it is only necessary to perform the transformation to recursive packed format in RP PPTRF, and no transformation in RP PPTRS, to get the correct results.
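As a small illustration (our own, not part of the paper's test harness), converting a measured run time into a MFlop rate with these counts looks as follows; the timings used here are made-up numbers.

    #include <stdio.h>

    /* Convert run times to MFlop rates using the standard operation
     * counts: ~n^3/3 flops for the factorization and ~2*n^2*nrhs flops
     * for the solution with nrhs right hand sides.                     */
    int main(void)
    {
        double n = 3000.0, nrhs = n / 10.0;
        double t_factor = 9.3, t_solve = 5.6;   /* seconds, illustrative only */

        double mflops_factor = (n * n * n / 3.0) / t_factor / 1e6;
        double mflops_solve  = (2.0 * n * n * nrhs) / t_solve / 1e6;

        printf("factor: %.1f MFlops, solve: %.1f MFlops\n",
               mflops_factor, mflops_solve);
        return 0;
    }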
Ten figures (Figures 11 through 20) show performance graph comparisons between the new RPC algorithms and the traditional LAPACK algorithms. The RPC algorithms use the RPF data format in all comparisons. As mentioned, the cost of transforming from packed format to RPF and from RPF to packed format is included in both the recursive packed factor and solve routines. The subroutines DPPTRF, ZPPTRF, DPPTRS and ZPPTRS use packed data format, and DPOTRF and DPOTRS use full data format. Figure 20 compares all three algorithms: RPC, LAPACK full storage and LAPACK packed storage.
Every figure has two subfigures and one caption. The upper subfigure shows comparison curves for Cholesky factorization. The lower subfigure shows comparison curves of forward and backward substitutions. The captions describe details of the performance figures. The first seven figures (Figures 11 through 17) describe the same comparison of performance on several different computers.
5.1 The IBM SMP PowerPC
Figure 11 shows the performance on the IBM 4-way PowerPC 604e 332 MHz computer.
The LAPACK routine DPPTRF (the upper subfigure) performs at about 100 MFlops. Performance of the 'U' graph is a little better than that of the 'L' graph. Performance remains constant as the order of the matrix increases.
The performance of the RPC factorization routine increases as n increases. The 'U' graph increases from 50 MFlops to almost 600 MFlops and the 'L' graph from 200 MFlops to 650 MFlops. The 'U' graph performance is better than the 'L' graph performance. The relative ('U', 'L') RPC algorithm performance is (4.9, 7.2) times better than the DPPTRF algorithm for large matrix sizes.
The performance of the RPC solution routine (the lower subfigure) for the 'L' and 'U' graphs is almost equal. The DPPTRS routine performs at about 100 MFlops for all matrix sizes. The RPC algorithm curve increases from 250 MFlops to almost 800 MFlops. The relative ('U', 'L') performance of the RPC algorithm is (5.7, 5.5) times faster than the DPPTRS algorithm for large matrix sizes.
The matrix size varies from 300 to 3000 in these subfigures.
5.2 The IBM Power2
Figure 12 shows the performance on the IBM Power2 160 MHz computer.
The LAPACK routine DPPTRF (the upper subfigure) 'U' graph performs at about 200 MFlops; the 'L' graph performs at about 150 MFlops. There is no increase in either graph as the size of the matrix grows.
The performance graphs of the RPC factorization routine both increase, the 'U' graph from 300 to a little more than 400 MFlops, and the 'L' graph from 200 MFlops to 450 MFlops. The 'L' graph is better than the 'U' graph when the matrix sizes are between 750 and 3000. The 'U' graph is better than the 'L' graph when the matrix sizes are between 300 and 750. Both graphs grow very rapidly for matrix sizes between 300 and 500. The relative ('U', 'L') RPC algorithm performance is (1.9, 3.1) times faster than the DPPTRF algorithm for large matrix sizes.
The performance of the RPC solution routine (the lower subfigure) for the 'L' and 'U' graphs is almost equal. The performance of the DPPTRS algorithm stays constant at about 250 MFlops, decreasing slightly as n ranges from 300 to 3000. The performance of the RPC algorithm increases from 350 to more than 500 MFlops. The relative ('U', 'L') RPC algorithm performance is (2.3, 2.3) times faster than the DPPTRS algorithm for large matrix sizes.
The matrix size varies from 300 to 3000 in these subfigures.
5.3 The Compaq Alpha EV6
Figure 13 shows the performance on the COMPAQ Alpha EV6 500 MHz computer.
The LAPACK routine DPPTRF (the upper subfigure) 'U' graph performs better than the 'L' graph. The difference is about 50 MFlops. The performance starts at about 300 MFlops, increases to 400 MFlops and then drops down to about 200 MFlops.
The performance of the RPC factorization routine increases as n increases. Both graphs (the 'U' and 'L' graphs) are almost equal. The 'U' graph is a little higher for matrix sizes between 300 and 450. The relative ('U', 'L') RPC algorithm performance is (3.4, 5.0) times faster than the DPPTRF algorithm for large matrix sizes.
For the routine DPPTRS, the shapes of the solution performance curves (the lower subfigure) for the 'L' and 'U' graphs are almost equal. The performance of the DPPTRS routine decreases from 450 to 250 MFlops as n increases from 300 to 3000. The RPC performance curves increase from about 450 MFlops to more than 750 MFlops. The performance ('U', 'L') of the RPC algorithm is (3.3, 3.3) times faster than the DPPTRS algorithm for large matrix sizes.
The matrix size varies from 300 to 3000 in these subfigures.
5.4 The SGI R10000
Figure 14 shows the performance on the SGI R10000 195 MHz computer, on one processor only.
The LAPACK routine DPPTRF (the upper subfigure) 'U' graph performs better than the 'L' graph for matrix sizes from 300 to about 2000, after which both the 'U' and the 'L' graphs are the same. The DPPTRF performance slowly decreases.
The performance of the RPC factorization routine ('U' and 'L' graphs) increases from about 220 to 300 MFlops as n increases from 300 to about 1000, and stays constant as n increases to 3000. The relative ('U', 'L') RPC algorithm performance is (4.9, 4.9) times faster than the DPPTRF algorithm for large matrix sizes.
For the routine DPPTRS, the shapes of the solution performance curves (the lower subfigure) for the 'L' and 'U' graphs are almost equal. The performance of the DPPTRS routine decreases from 130 MFlops to 60 MFlops as n increases from 300 to 3000. The performance of the RPC solution routine increases in the beginning, and then stays constant at about 300 MFlops. The performance ('U', 'L') of the RPC algorithm is (5.1, 5.2) times faster than the DPPTRS algorithm for large matrix sizes.
The matrix size varies from 300 to 3000 in these subfigures.
5.5 The SUN UltraSparc II
Figure 15 shows the performance on the SUN UltraSparc II 400 MHz computer.
The LAPACK routine DPPTRF (the upper subfigure) 'U' and 'L' graphs show almost equal performance when n > 1500. These functions start between 200 and 225 MFlops and then decrease down to about 50 MFlops.
For the RPC factorization routine, the performance of the 'U' and 'L' graphs is also almost equal over the whole interval. Their function values quickly rise to 350 MFlops and then slowly increase to about 450 MFlops. The RPC factorization ('U', 'L') algorithm is (9.7, 10.2) times faster than the DPPTRF algorithm for large matrix sizes.
The performance of the RPC solution routine (the lower subfigure) for the 'L' and 'U' graphs is almost equal. The DPPTRS performance graphs decrease from 225 to 50 MFlops. The performance of the RPC solution graphs increases from 330 to almost 450 MFlops. The RPC solution ('U', 'L') algorithm is (10.0, ...) times faster than the DPPTRS algorithm for large matrix sizes.
The matrix size varies from 300 to 3000 in these subfigures.
5.6 The HP PA-8500
Figure 16 shows the performance on the HP PA-8500 440 MHz computer.
The LAPACK routine DPPTRF (the upper subfigure) 'U' and 'L' graphs are decreasing functions. The 'U' graph function values go from about 370 to 100 MFlops. The 'L' graph function goes from 280 to about 180 MFlops.
The RPC factorization performance graphs are increasing functions as the matrix size increases from 1000 to 3000. The performance varies for matrix sizes between 500 and 1500. The 'U' graph function values range from about 700 MFlops to almost 800 MFlops; the 'L' graph function values range from 600 MFlops to a little more than 700 MFlops. The RPC algorithm ('U', 'L') is (4.7, ...) times faster than the DPPTRF algorithm for large matrix sizes.
The performance of the RPC solution routine (the lower subfigure) for the 'L' and 'U' graphs is almost equal. The DPPTRS routine performance decreases from 300 MFlops to 200 MFlops. The RPC algorithm curve increases from 550 MFlops to almost 810 MFlops. The RPC algorithm ('U', 'L') is (5.2, 5.0) times faster than the DPPTRS algorithm for large matrices in the solution case.
The matrix size varies from 500 to 2500 in these subfigures.
5.7 The INTEL Pentium III
Figure 17 shows the performance on the INTEL Pentium III 500 MHz computer.
The LAPACK routine DPPTRF (the upper subfigure) 'U' and 'L' graphs are decreasing functions. The 'U' graph function ranges from about 100 to 80 MFlops. The 'L' graph function ranges from less than 50 to about 25 MFlops.
For the RPC factorization routine the 'U' and the 'L' graphs are almost equal. The graphs are increasing functions from about 200 to 310 MFlops. The RPC factorization algorithm ('U', 'L') is (4.2, 9.2) times faster than the DPPTRF algorithm for large matrices.
The performance of the RPC solution routine (the lower subfigure) for the 'L' and 'U' graphs is almost equal. The DPPTRS performance graphs decrease from about 80 to about 50 MFlops. The RPC algorithm curves increase from 240 to about 330 MFlops. The RPC algorithm ('U', 'L') is (5.9, 6.0) times faster than the DPPTRS algorithm for large matrices.
The matrix size varies from 500 to 3000 in these subfigures.
5.8 The IBM SMP PowerPC with OpenMP directives
Figure 18 shows the performance on the IBM 4-way PowerPC 604e 332 MHz computer.
These graphs demonstrate successful use of OpenMP [17, 18] parallelizing directives. The curves LAPACK(L), LAPACK(U), Recursive(L) and Recursive(U) are identical to the corresponding curves of Figure 11. We compare the curves Recursive(L), Recursive(U), Rec.Par(L) and Rec.Par(U). The Rec.Par(L) and Rec.Par(U) curves result from double parallelization: the RPC algorithms call a parallelized DGEMM and they are also parallelized themselves by the OpenMP directives.
The Rec.Par(L) curve is not much faster than Recursive(L); sometimes it is slower. Rec.Par(U) is the fastest, especially for large matrices. The doubly parallelized RPC algorithm (Rec.Par(U)) is about 100 MFlops faster than the ordinary RPC algorithm (Recursive(U)). The relative ('U', 'L') RPC factorization algorithm performance is (5.6, 7.6) times faster than the DPPTRF algorithm for large matrices.
The RPC double parallelization algorithm for the solution (lower subfigure) exceeds 800 MFlops. The relative ('U', 'L') RPC solution algorithm performance is (6.5, 6.6) times faster than the DPPTRS algorithm for large matrices.
The matrix size varies from 300 to 3000 in these subfigures.
5.9 The INTEL Pentium III running Complex Arithmetic
Figure 19 shows the performance on the INTEL Pentium III 500 MHz computer.
This figure demonstrates the successful use of the RPC algorithm for Hermitian positive definite matrices. The performance is measured in Complex MFlops. To compare with the usual real arithmetic MFlops, the Complex MFlops should be multiplied by 4.
The LAPACK routine ZPPTRF (the upper subfigure) 'U' graph performs a little better than the 'L' graph. These routines perform at about 80 MFlops.
The RPC Hermitian factorization routine 'U' graph performs better than the 'L' graph. The RPC performance graphs are increasing functions. They go from 240 up to 320 MFlops. The RPC Hermitian factorization algorithm ('U', 'L') is (3.8, 4.3) times faster than the ZPPTRF algorithm for large matrices.
The performance of the RPC solution routine (the lower subfigure) for the 'L' and 'U' graphs is almost equal. The ZPPTRS performance decreases from about 108 to 80 MFlops. The RPC solution algorithm increases from about 240 up to more than 320 MFlops. The RPC algorithm ('U', 'L') is (3.9, 3.7) times faster than the ZPPTRS algorithm for large Hermitian matrices.
The matrix size varies from 500 to 3000 in these subfigures.
5.10 The INTEL Pentium III with all three Cholesky Algorithms
Figure 20 shows the performance on the INTEL Pentium III 500 MHz computer.
The graphs in this figure depict all three Cholesky algorithms: the LAPACK full storage algorithms (DPOTRF and DPOTRS), the LAPACK packed storage algorithms (DPPTRF and DPPTRS) and the RPC (factorization and solution) algorithms.
The LAPACK packed storage algorithms (DPPTRF and DPPTRS) were previously explained for Figure 17.
The DPOTRF routine (the upper subfigure), for both the 'U' and 'L' cases, performs better than the RPC factorization routine for smaller matrices. For larger matrices the RPC factorization algorithm performs equally well or slightly better than the DPOTRF algorithm.
The performance of the DPOTRS algorithms ('U' and 'L' graphs) is better than the RPC performance on this computer.
However, the POTRF and POTRS storage requirement is almost twice the storage requirement of the RPC algorithms.
The matrix size varies from 500 to 3000 in these subfigures.
6 Conclusion
We summarize and emphasize the most important developments described in our paper.
- A recursive packed Cholesky factorization algorithm based on Level 3 BLAS operations has been developed.
- The RPC factorization algorithm works at almost the same speed as the traditional full storage algorithm but occupies the same data storage as the traditional packed storage algorithm. Also see the fourth bullet.
- The user interface of the new packed recursive subroutines (RP PPTRF and RP PPTRS) is exactly the same as that of the traditional LAPACK subroutines (PPTRF and PPTRS). The user will see identical data formats. However, the new routines run much faster.
- Two separate routines are described here: RP PPTRF and RP PPTRS. The data format is always converted from LAPACK packed data format to the recursive packed data format before each routine starts its operation and converted back to LAPACK data format afterwards. Our package also contains an RP PPSV subroutine, which is equivalent to the LAPACK PPSV routine. In the RP PPSV subroutine the data is not converted between the factorization and the solution.
- New recursive packed Level 3 BLAS, RP TRSM and RP SYRK, written in Fortran90 [21], were developed. They only call the GEMM routine. This GEMM subroutine can be provided either by the computer manufacturer or generated by the ATLAS system [25]. The ATLAS-generated GEMM subroutine is usually comparable to the manufacturer-developed routine.
Acknowledgements
This research was partially supported by the LAWRA project and the UNIC collaboration with the IBM T.J. Watson Research Center at Yorktown Heights. The last two authors were also supported by the Danish Natural Science Research Council through a grant for the EPOS project (Efficient Parallel Algorithms for Optimization and Simulation).
--R
Exploiting functional parallelism on power2 to design high-performance numerical algorithms
Applied Numerical Linear Algebra.
BLAS (Basic Linear Algebra Subprograms).
An extended set of FORTRAN basic linear algebra subroutines.
Matrix Computations.
Recursion leads to automatic variable blocking for dense linear-algebra algorithms
Accuracy and Stability of Numerical Algorithms.
Basic linear algebra subprograms for Fortran usage.
FORTRAN 90/95 Explained.
Locality of Reference in LU Decomposition with Partial Pivoting.
Numerical Linear Algebra.
Automatically Tuned Linear Algebra Software (ATLAS).
--TR
An extended set of FORTRAN basic linear algebra subprograms
A set of level 3 basic linear algebra subprograms
Exploiting functional parallelism of POWER2 to design high-performance numerical algorithms
Matrix computations (3rd ed.)
Optimizing matrix multiply using PHiPAC
Applied numerical linear algebra
Locality of Reference in LU Decomposition with Partial Pivoting
Recursion leads to automatic variable blocking for dense linear-algebra algorithms
GEMM-based level 3 BLAS
Fortran 90/95 explained (2nd ed.)
Basic Linear Algebra Subprograms for Fortran Usage
Accuracy and Stability of Numerical Algorithms
Numerical Linear Algebra for High Performance Computers
Recursive Formulation of Cholesky Algorithm in Fortran 90
Superscalar GEMM-based Level 3 BLAS - The On-going Evolution of a Portable and High-Performance Library
Recursive Blocked Data Formats and BLAS''s for Dense Linear Algebra Algorithms
--CTR
Chan , Enrique S. Quintana-Orti , Gregorio Quintana-Orti , Robert van de Geijn, Supermatrix out-of-order scheduling of matrix operations for SMP and multi-core architectures, Proceedings of the nineteenth annual ACM symposium on Parallel algorithms and architectures, June 09-11, 2007, San Diego, California, USA
Steven T. Gabriel , David S. Wise, The Opie compiler from row-major source to Morton-ordered matrices, Proceedings of the 3rd workshop on Memory performance issues: in conjunction with the 31st international symposium on computer architecture, p.136-144, June 20-20, 2004, Munich, Germany
Dror Irony , Gil Shklarski , Sivan Toledo, Parallel and fully recursive multifrontal sparse Cholesky, Future Generation Computer Systems, v.20 n.3, p.425-440, April 2004
Bjarne S. Andersen , John A. Gunnels , Fred G. Gustavson , John K. Reid , Jerzy Waniewski, A fully portable high performance minimal storage hybrid format Cholesky algorithm, ACM Transactions on Mathematical Software (TOMS), v.31 n.2, p.201-227, June 2005
F. G. Gustavson, High-performance linear algebra algorithms using new generalized data structures for matrices, IBM Journal of Research and Development, v.47 n.1, p.31-55, January | complex Hermitian matrices;cholesky factorization and solution;real symmetric matrices;recursive algorithms;novel packed matrix data structures;BLAS;positive definite matrices |
383755 | Flexible network support for mobile hosts. | Fueled by the large number of powerful light-weight portable computers, the expanding availability of wireless networks, and the popularity of the Internet, there is an increasing demand to connect portable computers to the Internet at any time and in any place. However, the dynamic nature of a mobile host's connectivity and its use of multiple network interfaces require more flexible network support than has typically been available for stationary workstations. This paper introduces two flow-oriented mechanisms, in the context of Mobile IP [25], to ensure a mobile host's robust and efficient communication with other hosts in a changing environment. One mechanism supports multiple packet delivery methods (such as regular IP or Mobile IP) and adaptively selects the most appropriate one to use according to the characteristics of each traffic flow. The other mechanism enables a mobile host to make use of multiple network interfaces simultaneously and to control the selection of the most desirable network interfaces for both outgoing and incoming packets for different traffic flows. We demonstrate the usefulness of these two network layer mechanisms and describe their implementation and performance. | Introduction
Light-weight portable computers, the spread of wireless networks and services, and the popularity of
the Internet combine to make mobile computing an attractive goal. With these technologies, users should
be able to connect to the Internet at any time and in any place, to read email, query databases, retrieve
information from the web, or entertain themselves.
To achieve the above goal, a mobile host requires a unique unchanging address, despite the fact that
as it switches from one communication medium to another, or from one network segment to another, the IP
address associated with its network interface must change accordingly. The IP address must change because
IP [25] assumes that a host's IP address uniquely identifies the segment of the network through which a host
is attached to the Internet. Unfortunately, changing the address will break ongoing network conversations
between a mobile host and other hosts, because connection-oriented protocols such as TCP [26] use the IP
addresses of both ends of a connection to identify the connection. Therefore, Mobile IP [23], or another
similar mechanism that allows a host to be addressed by a single address, is needed to accommodate host
mobility within the Internet.
However, due to the dynamic nature of a mobile host's connectivity and its use of multiple interfaces, providing network support for a mobile host can be a much more complex task than for its stationary counterparts. Mobile IP takes an important step towards supporting mobility and is the context for our work, but through day-to-day experience using the MosquitoNet [16] mobile network, we have found that it is desirable to have finer control over the network traffic of a mobile host on a per-flow basis than is provided by Mobile IP. The work presented in this paper is mainly motivated by the following observations on the unique characteristics of a mobile host:
Duality: The mobile host has two roles as both a host virtually connected to its home network and as a normal host on the network it is visiting. In this sense, we can consider the mobile host to be virtually multi-homed. This duality brings along increased complexity as well as opportunities for optimization.
Dynamically changing point of attachment: We connect a portable to various networks, such as our office network while at work, the network of a wireless Internet access service provider while on the road, or another network at home or elsewhere. Different networks may have different policies for dealing with packets from mobile hosts. Depending on where a mobile host currently operates and with whom it communicates, it may need to use different packet delivery methods that are either more robust or more efficient.
Multiple network interfaces: To achieve connectivity in any place at any time, mobile hosts will likely require more than one type of network device. For example, our mobile hosts use Ethernet or WaveLAN [28] when in a suitably equipped office or home, but they use a slower wireless packet radio network such as Metricom Ricochet [18] elsewhere.
There is no single network device that can provide the desired quality of service (QoS) all the time. There is always a trade-off among coverage, performance, and price. There are even times when multiple network devices need to be used at the same time (such as a satellite connection for downlink and a modem connection for uplink). In this case, the mobile host is physically multi-homed as well. Making use of these network interfaces simultaneously for different flows of traffic is a challenge.
Our goal is to enable a mobile host to communicate both robustly and efficiently with other hosts as it moves from place to place. While there are ways to satisfy one goal or the other, satisfying both at once is a challenge. For example, we can treat a mobile host as virtually connected to its home network by always tunneling packets between the mobile host and its home agent. Although robust, this is obviously not efficient. Achieving efficiency as well requires a mobile host to have more flexibility than has been provided by previously existing mechanisms.
This paper addresses the following two issues in providing the flexibility desirable for a mobile host. First is the need at the network routing layer to support multiple packet delivery methods (such as whether to use regular IP or Mobile IP). At the network layer, we have developed a general-purpose mechanism, the Mobile Policy Table, which supports multiple packet delivery methods simultaneously for different flows and adaptively selects the most appropriate method according to the characteristics of each traffic flow. Second is the need to make use of multiple network interfaces simultaneously and to control the interface selection of both outgoing and incoming packets for different traffic flows. This is achieved by extending the base Mobile IP protocol to control the choice of interfaces to use for incoming traffic to a mobile host. We also amend the routing table lookup to enable the use of multiple network interfaces for outgoing traffic flows from a mobile host. The result is a system that applies Mobile IP more flexibly by taking into consideration traffic characteristics on a flow basis.
The rest of the paper is organized as follows: In section 2, we give a brief description of Mobile IP, the context in which our work is done. In section 3, we illustrate scenarios for the use of multiple packet delivery methods for different flows of traffic. In section 4, we detail the general-purpose mechanism that supports this use of multiple packet delivery methods. In section 5, we describe the simultaneous use of multiple network interfaces. In section 6, we report the implementation status of the system and present the results of system performance measurements obtained from our experiments. In section 7, we list related work. In section 8, we consider the applicability and potential of our work with IPv6. Finally, we present conclusions together with some future and continuing work in section 9.
2. Background: Mobile IP
Mobile IP [23] is a mechanism for maintaining transparent network connectivity to mobile hosts. Mobile
IP allows a mobile host to be addressed by the IP address it uses in its home network (home IP address),
regardless of the network to which it is currently physically attached. Figure 1 illustrates the operation of
basic Mobile IP.
Figure 1. Basic Mobile IP Protocol. Packets from the correspondent host to the mobile host are always sent to the mobile host's home network first, and then forwarded (tunneled) by the home agent to the mobile host's current point of attachment (care-of address). Packets originating from the mobile host are sent directly to the correspondent host, thus forming a triangular route. The thick line indicates that the original packet is encapsulated in another IP packet when forwarded, and is therefore of a larger size.
The Mobile IP specification allows for two types of attachment for a mobile host visiting a "foreign" network (a network other than the mobile host's home network). For the first type of attachment, the mobile host can connect to the foreign network through a "foreign agent", an agent present in the foreign network, by registering the foreign agent's IP address with its home agent. The home agent then tunnels [22] packets to the foreign agent, which decapsulates them and sends them to the mobile host via link-level mechanisms.
Although our Mobile IP implementation supports the use of foreign agents, our work is more focused on the second type of attachment, which provides a mobile host with its own "co-located" care-of address in the foreign network. In this scenario, the mobile host receives an IP address to use while it visits the network, via DHCP [6] or some other protocol or policy. It registers this address with its home agent, which then tunnels packets directly to the mobile host at this address. The disadvantage of this scenario is that the mobile host has to decapsulate packets itself and more IP addresses are needed in the foreign network, one for each visiting mobile host. The advantage is that the mobile host also becomes more directly responsible for the addressing and routing decisions for the packets it sends out, and it therefore has more control over such operations.
While Mobile IP has laid the groundwork for Internet mobility, there are still many challenges to tackle, as seen from on-going efforts in this area. These efforts include route optimization [13], firewall traversal [20], and "bi-directional tunneling" (or "reverse tunneling") [19] to allow packets to cross security-conscious boundary routers. This last problem, as described in section 3.2, is one of our motivations for making it possible for mobile hosts to choose dynamically between different packet addressing and routing options. We believe that these efforts make evident the inherent need for mobile hosts to use different techniques under different circumstances.
3. Supporting Multiple Packet Delivery Methods
Supporting multiple packet delivery methods is the first area in which we have increased the flexibility of mobile hosts. By avoiding a single method of delivery, the mobile host only pays for the extra cost of mobility support or security perimeter traversal when it is truly needed.
We illustrate some of the situations for which we have found such flexibility to be beneficial in practice. Although Figure 2 only shows the examples we have implemented so far, the mechanism we propose here can be extended to support other delivery methods when other choices become desirable.
Figure 2. Simultaneous Use of Multiple Packet Delivery Methods. This figure summarizes the different packet delivery choices (in rectangular boxes) we have implemented so far: regular IP, the classical triangular routing of basic Mobile IP, and bi-directional (reverse) tunneling. Listed in the circles are example uses for each packet delivery choice (web traffic for regular IP; telnet, IP telephony, and traffic initiated outside a mobile host for Mobile IP; traffic that needs to traverse ingress-filtering routers for reverse tunneling). The policy decisions to be made (transparent mobility? reverse tunneling?) are in diamonds.
3.1. Providing Transparent Mobility Support Only When Necessary
The transparent mobility support of Mobile IP is important for long-lived connection-oriented traffic or for incoming traffic initiated by correspondent hosts. However, this transparent mobility support does not come without cost. In the absence of route optimization for Mobile IP, packets destined to a mobile host are delivered to its home network and then forwarded to the mobile host's current care-of address in the network it is visiting. If a mobile host is far away from home but relatively close to its correspondent host, the path traversed by these packets is significantly longer than the path traveled if the mobile host and the correspondent host talk to each other directly. The extra path length not only increases latency but also generates extra load on the Internet. It even increases load on the home agent, potentially contributing to a communication bottleneck if the home agent is serving many mobile hosts simultaneously.
It would be ideal to use Mobile IP route optimization [13]. However, since Mobile IP route optimization requires extra support on correspondent hosts in addition to support on the mobile host and its home agent, it requires widespread changes throughout the Internet, which is unrealistic for the near future.
Fortunately, there are certain types of traffic for which a mobile host may not require Mobile IP support. Examples are most web browsing traffic and communication with local services discovered by the mobile host. Web connections are usually short-lived, so it is unlikely that a mobile host will change its foreign network address in the midst of a connection (exceptions are web push technology and HTTP 1.1 [8], which can potentially use persistent transport connections). Even if it does, the user can simply press reload, and the web transfer will be retried. With such traffic, we can avoid the extra cost associated with mobility support.
3.2. Supporting Bi-directional Tunnels and Triangle Routes
When communication requires transparent mobility, there still remains a choice of packet delivery methods. An important example is communication that must traverse security-conscious boundary routers. As a result of IP address spoofing attacks and in accordance with a CERT advisory [4], more routers are filtering on the source address (ingress filtering) [7] and will drop a packet whose address is not "topologically correct" (whose originating network cannot be the one identified by the source address). In the presence of such routers, the triangle route as specified in the basic Mobile IP protocol will fail. Figure 3 illustrates an example of this problem.
Figure 3. The Problem with Source IP Address Filtering at a Security-conscious Boundary Router. When the mobile host sends packets directly to the correspondent host in its home domain with the source IP address of the packets set to the mobile host's home IP address, these packets will be dropped by the boundary router, because they arrive from outside of the institution and yet claim to originate from within.
Figure 4. Solution to the Ingress Filtering Problem. To address the problem caused by source IP address filtering on security-conscious boundary routers, the mobile host sends packets by tunneling along the reverse path as well. Since the encapsulated packets in the tunnel from the mobile host to its home agent use the mobile host's care-of address as their source IP address (which is topologically correct), these packets will no longer be dropped by the security-conscious boundary routers.
As another example, if the boundary router is in the domain visited by the mobile host, it may drop packets that are received from inside but claim to originate from outside; these packets look as if they are "transit traffic", and not all networks will carry transit traffic.
To address the above problems, we can tunnel packets sent by the mobile host through its home agent to its correspondent hosts, in much the same way we tunnel packets sent to the mobile host. This is called "bi-directional tunneling" [19]. Figure 4 illustrates the solution.
This bi-directional tunneling addresses the problem related to ingress filtering routers, but again, this comes with increased cost. If a mobile host visits a network far away from home and tries to talk to a correspondent host in a nearby network, packets originating from the mobile host will now have to travel all the way home and then back to the correspondent host, increasing the length of this reverse path.
However, not all the packets need to be sent this way. It is unnecessary to force all traffic through a bi-directional tunnel just because some ingress filtering routers would drop traffic sent to specific destinations. Such tunneling may be unnecessary for a large part of the traffic for which the topologically incorrect source IP address in packet headers is not a problem. Therefore, it is important to support the use of both triangular routing and bi-directional tunneling simultaneously, so that only those traffic flows that truly need to use bi-directional tunneling pay for this extra cost.
3.3. Joining Multicast Groups in Different Ways
The final packet delivery mechanism we have experimented with is to allow a mobile host to choose to join multicast groups either remotely (through its home network), using its home IP address, or locally, using its co-located care-of address in the foreign network. This flexibility is necessary because multicast traffic is often limited to a particular site by scoping [5]. Also, there are advantages and disadvantages to either choice [1] even if scoping is not an issue.
Joining locally: In this respect the mobile host is no different from any normal host on the same subnet. The advantage is that the delivery of multicast traffic to the mobile host is more efficient. The disadvantages are that it requires the existence of a multicast-capable router in the foreign network, and those mobile hosts actively participating in the multicast session will have to re-identify themselves within the group when they move to another network.
Joining through the home network: All multicast traffic has to be tunneled bi-directionally between the mobile host and its home agent. The advantages of this choice are that it does not require multicast support on the foreign network, and the mobile host will retain its membership as it moves around. The disadvantage is that the route is less efficient. Although there are optimizations that allow a single copy of multiple packets to be tunneled to a foreign agent serving multiple mobile hosts [9], the home agent still must tunnel a copy of a multicast packet to each foreign network that has one of its mobile hosts visiting.
We can choose either option for different multicast groups. To join a multicast group locally, we add an entry in the Mobile Policy Table instructing traffic destined to certain multicast addresses to use conventional IP support. To join a multicast group remotely, we add an entry in the Mobile Policy Table to use bi-directional tunneling for traffic destined to these multicast groups. The mobile host also needs to notify its home agent, in a registration packet, to forward multicast traffic to it.
4. A General-purpose Mechanism for Flexible Routing
In this section, we describe the mechanism used to achieve the flexibility features described in the previous section. This general-purpose mechanism is centered around the idea of introducing a Mobile Policy Table in the routing layer of the network software stack. The IP route lookup routine ip_rt_route is augmented to take the Mobile Policy Table into consideration together with the normal routing table in determining how a packet should be sent.
4.1. Support in the Network Layer
We choose to add our support for multiple packet delivery methods to the network layer due to its unique position in the network software stack. By modifying the layer through which packets converge and then diverge, we avoid a proliferation of modifications. Relatively fewer changes need to be made in the kernel network software, and both the upper layer protocol modules (such as TCP or UDP) and lower layer drivers for different network devices remain unchanged.
We further identify the route lookup routine as a natural place to add such support. Along with normal route selection, the enhanced route lookup also makes decisions for choosing among multiple ways to deliver packets, as necessitated by the changing environment of a mobile host.
4.2. The Mobile Policy Table
The Mobile Policy Table specifies how the packets should be sent and received for each traffic flow matching certain characteristics. The routing and addressing policy decisions currently supported in the Mobile Policy Table are:
whether to use transparent mobility support (Mobile IP) or regular IP;
whether to use triangular routing or bi-directional tunneling, if using Mobile IP.
Destination   Netmask   PortNum   Mobility   Tunneling
Table: A Sample Mobile Policy Table. This mobile policy table specifies that all traffic destined back to the mobile host's home domain should use bi-directional tunneling, to satisfy the boundary routers at its home institution (first row); all traffic to port 80 (web traffic) should avoid using transparent mobility support (second row); and the remaining traffic should by default use Mobile IP with a regular triangular route (third row). The second entry applies to all traffic with a destination port number of 80, even for destinations matching the first entry, since port number specification takes precedence.
These policies are specified through two types of entries: "per-socket" entries and "generic" entries, with per-socket entries taking precedence. While the Mobile Policy Table only contains generic entries, a per-socket entry kept within the socket data structure allows any application to override the general rules. Without a per-socket entry, traffic is subject only to generic entries in the policy table, which specify the delivery policy for all traffic matching the given characteristics.
For generic entries, the Mobile Policy Table lookup currently determines which policy entry to use based on two traffic characteristics: the correspondent address and port number (for TCP and UDP). The correspondent address is useful, because we often want to treat flows to different destinations differently. The port number is useful as well, because there are many reserved port numbers that indicate the nature of the traffic, such as TCP port 23 for telnet, or port 80 for HTTP traffic. While these are the characteristics currently taken into consideration, we can extend the technique to include other characteristics in the future.
The table above shows, as an example, the Mobile Policy Table currently used on our mobile hosts when visiting places outside of their home domain. The Mobile Policy Table lookup operation always chooses the most specifically matched entries (those with more restricted netmask and/or port number specifications) over more general ones.
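To make the table structure concrete, a minimal sketch of a generic policy entry and its lookup is shown below. The field and function names (struct mpt_entry, mpt_lookup, and so on) are ours, chosen for illustration; they are not taken from the MosquitoNet source code. Addresses and masks are kept in host byte order for simplicity.

    #include <stdint.h>
    #include <stddef.h>

    /* One generic entry of the Mobile Policy Table (illustrative only). */
    struct mpt_entry {
        uint32_t dest;        /* correspondent network/host address          */
        uint32_t netmask;     /* 0 matches every destination                 */
        uint16_t port;        /* correspondent TCP/UDP port, 0 = any port    */
        uint8_t  use_mip;     /* 1: Mobile IP, 0: regular IP                 */
        uint8_t  rev_tunnel;  /* 1: bi-directional tunnel, 0: triangle route */
    };

    /* Return the most specific matching entry: an exact port match wins
     * over a wildcard port, and a longer netmask wins over a shorter one. */
    const struct mpt_entry *
    mpt_lookup(const struct mpt_entry *tab, size_t n,
               uint32_t dst, uint16_t port)
    {
        const struct mpt_entry *best = NULL;

        for (size_t i = 0; i < n; i++) {
            const struct mpt_entry *e = &tab[i];
            if ((dst & e->netmask) != (e->dest & e->netmask))
                continue;
            if (e->port != 0 && e->port != port)
                continue;
            if (best == NULL ||
                (e->port != 0 && best->port == 0) ||   /* port beats wildcard */
                (e->port == best->port && e->netmask > best->netmask))
                best = e;
        }
        return best;
    }

A per-socket entry, when present, would simply be consulted before calling a lookup like this one.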
4.3. How it Works
Figure 5 illustrates the use of the Mobile Policy Table and routing table within the Linux kernel. The modification to the kernel is mainly limited to the route lookup function. For backward compatibility, the normal routing table remains intact. During a route lookup, extra arguments such as other characteristics of the traffic flow (currently only the TCP or UDP port number) and the source IP address chosen are used in addition to the correspondent host's address for deciding how the packet should be sent.
The new route lookup function uses the specified source IP address to determine if the packet is subject to policy decisions in the Mobile Policy Table. If the source IP address has already been set to the IP address associated with one of the physical network interfaces, this indicates that no mobility decision should be made for the packet. Packets may have their source address set either after they are looped back by the virtual interface (described below), since a mobility decision has already been made by that time, or by certain applications. An example of such an application is the mobile host daemon handling registration and deregistration with the home agent; this daemon needs to force a packet through a particular real interface using regular IP. In cases where the source IP address is already set, only the normal routing table is consulted based upon the destination address, and the resulting route entry is returned. For the rest of the packets, the Mobile Policy Table needs to be consulted to choose among multiple packet delivery methods.
The virtual interface ("vif") handles packets that need to be encapsulated and tunneled. It provides the illusion that the mobile host is still in its home network. Packets sent through vif are encapsulated and then looped back to the IP layer (as shown by the wide bi-directional arc in the figure) for delivery to the home agent. This time, however, the source IP address of the encapsulating packet has already been chosen,
Figure 5. This figure shows where the Mobile Policy Table (MPT) fits into the link, network, and transport layers of our protocol stack. The MPT resides in the middle (network) layer, consulted by the IP route lookup function in conjunction with the normal routing table to determine how packets should be sent. The bottom (link) layer shows the device interfaces, with vif being a virtual interface that handles encapsulation and tunneling of packets. The top (transport) layer shows TCP and UDP, along with an IP-within-IP processing module. The arrows depict the passing of data packets down the layers. The shaded boxes indicate that the IPIP and vif modules have overlapping functionalities. The bold bi-directional arc shows the need to pass packets between the two modules.
so it will now be sent through one of the physical interfaces. Accordingly, packets being tunneled to the mobile host by its home agent are also looped by the IPIP module to vif, so that they appear to have arrived at the mobile host as if it were connected to its home network.
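As a rough illustration of what the vif/IPIP pair does to an outgoing packet, the sketch below prepends an outer IPv4 header (protocol 4, IP-in-IP) to an already-formed inner packet. It is our simplified sketch of the encapsulation step only, with the function name encap_ipip chosen for illustration; real kernel code works on socket buffers and also fills in the header checksum.

    #include <stdint.h>
    #include <string.h>
    #include <netinet/in.h>
    #include <netinet/ip.h>     /* struct iphdr, IPPROTO_IPIP */
    #include <arpa/inet.h>      /* htons */

    /* Wrap an inner IP packet in an outer IPv4 header for delivery to the
     * home agent.  'out' must have room for inner_len + sizeof(struct iphdr).
     * Returns the total length of the encapsulated packet.                  */
    static size_t encap_ipip(const void *inner, size_t inner_len,
                             uint32_t careof_addr, uint32_t home_agent_addr,
                             void *out)
    {
        struct iphdr outer;

        memset(&outer, 0, sizeof(outer));
        outer.version  = 4;
        outer.ihl      = 5;                              /* 20-byte header   */
        outer.tot_len  = htons(sizeof(outer) + inner_len);
        outer.ttl      = 64;
        outer.protocol = IPPROTO_IPIP;                   /* 4 = IP-in-IP     */
        outer.saddr    = careof_addr;                    /* topologically ok */
        outer.daddr    = home_agent_addr;
        /* header checksum is left to the sending path in this sketch */

        memcpy(out, &outer, sizeof(outer));
        memcpy((char *)out + sizeof(outer), inner, inner_len);
        return sizeof(outer) + inner_len;
    }

Because the outer source address is the care-of address, the encapsulated packet passes ingress-filtering routers, which is exactly why reverse tunneling works.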
To maintain reasonable processing overhead, policy table entries are cached in a manner similar to routing table entries. If the characteristics of the traffic match a cached entry, the software uses the cached entry to speed up the process of policy lookup. Otherwise, a new policy table lookup will be carried out. Whenever the mobile policy table is modified, the cached entries are flushed.
We believe this is a general-purpose mechanism, because it can be easily extended to take other traffic characteristics into consideration and to add more policy decisions if it becomes desirable to do so. For instance, if particular correspondent hosts have the ability to decapsulate packets themselves, we could note this information in the Mobile Policy Table for those destinations, and the mobile host could send encapsulated packets directly to these hosts, bypassing the home agent yet still providing the robustness of a bi-directional tunnel.
4.4. Dynamic Adaptation of the Mobile Policy Table
The entries in the Mobile Policy Table can be changed at runtime to specify a different policy. We currently have the following two mechanisms to adjust entries in the Mobile Policy Table dynamically.
First, we can swap in a new set of entries for the Mobile Policy Table whenever a mobile host changes its care-of address. When a mobile host changes its point of attachment to the Internet, the best way to communicate with other hosts often changes accordingly. For example, when our mobile hosts move from within the Stanford domain to a foreign network outside, they need to switch to using reverse tunneling for traffic back to their home domain due to the ingress filtering routers present at the boundary of Stanford's domain. For foreign networks frequently visited by a mobile host, we use predefined configuration files that contain the set of entries to use in the Mobile Policy Table so that a suitable set of policies can be put in place quickly when a mobile host changes location. For previously unknown foreign networks, a default set of policies will be used.
In addition, we make it possible to change specific entries in the Mobile Policy Table dynamically. For instance, we support adaptively selecting between the most robust packet delivery method (i.e. bi-directional tunneling) and a more efficient packet delivery method (i.e. triangular routing). This mechanism helps when a current setting in the Mobile Policy Table fails or when it is simply impossible to specify the right policy beforehand. The adaptation applies only to cached entries, i.e. we only adjust those entries that are actually in use.
To implement this mechanism, we automatically dispatch a separate process that probes each destination address by sending ICMP echo requests using triangular routing, with the interval between probes being increased exponentially up to a preconfigured maximum value. For a flow using reverse tunneling, if a reply to any of the probes is successfully received, the Mobile Policy Table entry for this flow will be changed from bi-directional tunneling to the more efficient triangular routing. For a flow using triangular routing, the Mobile Policy Table entry is changed back to use bi-directional tunneling if a certain number of probes (currently, five) have failed in a row. After the initial stage, this separate process keeps probing in the background with the maximum probe interval. If a series of probes fails while a flow is using triangular routing, we switch to bi-directional tunneling. If a probe succeeds while a flow is using bi-directional tunneling, the Mobile Policy Table entry will be changed to use triangular routing immediately. In all other cases, no action is necessary. This way, a flow is able to select adaptively the most efficient packet delivery method.
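A compact way to view this adaptation is as a small per-destination state machine. The sketch below is our own rendering of the rules in the paragraph above (names such as probe_state and update_policy are illustrative); the exponential back-off of the probe timer, which the daemon also manages, is omitted here.

    enum route_mode { TRIANGULAR, BIDIR_TUNNEL };

    struct probe_state {
        enum route_mode mode;   /* current cached MPT setting for this destination */
        int consecutive_fails;  /* failed ICMP echo probes in a row                */
    };

    /* Apply one probe outcome (1 = echo reply received, 0 = timed out). */
    static void update_policy(struct probe_state *s, int probe_ok)
    {
        if (probe_ok) {
            s->consecutive_fails = 0;
            /* A reply over the triangle path means it works: prefer it. */
            if (s->mode == BIDIR_TUNNEL)
                s->mode = TRIANGULAR;
            return;
        }
        /* Probe failed: after five failures in a row, fall back to the
         * robust bi-directional tunnel.                                  */
        if (++s->consecutive_fails >= 5 && s->mode == TRIANGULAR)
            s->mode = BIDIR_TUNNEL;
    }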
5. Supporting the Use of Multiple Network Interfaces Simultaneously
The use of multiple network interfaces is different on mobile hosts than on typical stationary machines. Stationary hosts with multiple network interfaces are usually routers, forwarding packets with different destination addresses through different network interfaces. On a mobile host, these network devices instead represent different ways this single mobile host can communicate with the outside world. For example, we may want to use two different interfaces (one for telnet and another for file downloading) for communication with the same host.
Although some operating systems can direct broadcast or multicast traffic out through a particular network interface to its immediately connected subnets, all these operating systems (except for Linux) make routing decisions by destination addresses alone, without taking other traffic flow characteristics into consideration. The standard Linux distribution includes our contribution that enables each socket to choose among different simultaneously active network interfaces for outgoing traffic to destinations beyond the directly connected subnets. Our goal is to enable a mobile host to make use of multiple active network interfaces simultaneously for different flows of traffic in a more general way.
5.1. Motivation for Multiple Interfaces
We find the support for the use of multiple active interfaces useful for the following reasons:
1. Smoother hand-offs: With the ability to use multiple network interfaces simultaneously, a mobile host can probe the usability of other interfaces beyond its directly connected subnet without disturbing the interface currently in use. The current network interface can remain in use until the new care-of address on the new network can be successfully registered with the home agent. This eliminates unnecessary packet loss when switching care-of addresses.
2. Quality of service (QoS): The different physical networks to which a user has access may offer different QoS guarantees. For example, a mobile user may have simultaneous access to a GSM data network [27] that has low bandwidth but relatively low latency, as well as to a Metricom Ricochet network that offers higher bandwidth but has higher and more variable latency. The mobile host might decide to use the GSM network for its low-bandwidth interactive flows (such as its telnet traffic) which require low latency for user satisfaction, while using the Metricom network for its bulk data transfer flows (such as ftp traffic) which require high bandwidth but do not demand as low a latency. The combination of these different types of networks can deliver a larger range of QoS to mobile users.
3. Link asymmetry: Some networks only provide unidirectional connectivity. This is the case for many satellite systems, which usually provide downlinks only. In these systems, connectivity in the reverse direction is provided via a different means, such as a SLIP or PPP dialup line, a cellular modem, or a CDPD [2] device. Being able to specify the incoming and outgoing interfaces explicitly in a natural manner is a useful feature.
4. Cost and billing: The cost of accessing different networks may be a decisive factor in interface selection. Users might select different interfaces according to the identity of the bill payer. For example, they might choose a cheaper and lower quality access network for personal communications and a more expensive and better quality one for business communications.
5. Privacy and security: Privacy and security may also be of considerable importance in interface selection. Users may trust some networks more than others and prefer to use them for confidential and important communications. They might also want, for privacy reasons, to choose different networks for business and personal communications.
5.2. Supporting Multiple Interfaces in Transmission
For a mobile host to use multiple network interfaces simultaneously to send packets, we have devised
the following two techniques. The rst is to make use of the metric eld in the existing routing table entry
so that multiple routes through dierent interfaces can be associated with a certain destination specication.
Usually the normal default route has a metric of one. Routes through other interfaces with metrics greater
than one can coexist with the normal default route in the routing table. A route lookup that does not specify
a particular interface will thus nd the normal default route, maintaining backwards compatibility, since the
lookup always chooses the matching route with the smallest metric.
The second technique we provide is a \bind-to-device" socket option that applications can call to
associate a specic device with a certain socket. The route lookup routine has been modied so that when a
device selection is specied, only those route entries associated with the particular device will be considered.
Therefore, dierent applications running simultaneously can each choose dierent network interfaces to use
for sending packets.
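In current Linux kernels this per-socket choice is exposed through the SO_BINDTODEVICE socket option. A minimal example of selecting an interface for one socket's outgoing traffic is sketched below; the interface name "eth1" is just an example, and on most systems the call requires elevated privileges.

    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <net/if.h>         /* IFNAMSIZ */

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        char ifname[IFNAMSIZ] = "eth1";   /* example device name */

        /* Packets sent on this socket will go out through eth1,
         * independent of the default route used by other sockets. */
        if (setsockopt(fd, SOL_SOCKET, SO_BINDTODEVICE,
                       ifname, strlen(ifname) + 1) < 0) {
            perror("SO_BINDTODEVICE");
            return 1;
        }
        /* ... connect()/send() as usual ... */
        return 0;
    }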
5.3. Supporting Multiple Interfaces in Reception
5.3.1. Overview of the scheme
A mobile host with multiple interfaces running standard Mobile IP cannot receive different flows of
packets on different interfaces simultaneously. With Mobile IP, a mobile host sends location updates that
indicate its current ⟨home address, care-of address⟩ binding to its home agent and possibly to its correspondent
hosts. As a result, all packets addressed to a mobile host are sent over the same interface, or possibly
over multiple interfaces all at the same time if the "S" (simultaneous binding) flag is set in the registration.
We have extended Mobile IP to allow a mobile host to use different interfaces to receive different flows.
In our work, we define a flow as a triplet: ⟨the mobile host's home IP address, the correspondent host's IP
address, the port number on the correspondent host⟩. Our definition is different from typical flow definitions
in that we do not include certain fields such as the port number on the mobile host. This actually makes
certain flows from the same mobile host indistinguishable from each other. However, since our current
goal is to treat traffic flows differently based on whom a mobile host is talking to and the nature of the
communication (for example, whether it is interactive traffic or bulk transfer), we believe that this triplet is
sufficient in capturing the essential traffic flow characteristics to serve this purpose.
In our framework, a mobile host may choose to receive the packets belonging to a given flow on any
of its interfaces by sending a Flow-to-Interface binding to its home agent. This Flow-to-Interface binding
specifies the mobile host's care-of address that the home agent should use to forward packets belonging to
the flow.

Figure 6. Supporting Multiple Incoming Interfaces. Packets belonging to any flows addressed to the mobile host arrive on the
home network via standard IP routing. The home agent intercepts packets and tunnels them to the care-of address selected
based on the packets' destination addresses, source addresses and source ports. Packets of one flow are tunneled to Care-of
Address 1, while packets of another flow are tunneled to Care-of Address 2.
Upon reception of a Flow-to-Interface binding, a home agent updates its extended binding list, which
contains entries that associate a particular flow specification with a mobile host's care-of address. Thereafter,
when the home agent receives a packet addressed to a mobile host it is serving, it searches in its extended
binding list for an entry matching the corresponding fields of the packet and forwards the packet to the
associated care-of address. If no entry is found, the packet is forwarded to the mobile host's default care-of
address.
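A minimal sketch of this forwarding decision, reusing the hypothetical flow_id encoding above, could look like this (names and types are assumptions):

```c
/* Sketch of the home agent's care-of address selection (field names invented). */
#include <stdint.h>
#include <stddef.h>

struct flow_binding {
    struct flow_id flow;     /* triplet from the previous sketch */
    uint32_t       care_of;  /* care-of address registered for this flow */
};

/* Returns the care-of address to which the packet should be tunneled. */
static uint32_t select_care_of(const struct flow_binding *list, size_t n,
                               const struct flow_id *pkt_flow,
                               uint32_t default_care_of)
{
    for (size_t i = 0; i < n; i++)
        if (flow_match(&list[i].flow, pkt_flow))
            return list[i].care_of;   /* matching Flow-to-Interface binding */
    return default_care_of;           /* fall back to the default binding */
}
```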
Figure 6 illustrates the routing of datagrams to a mobile host away from home, once the mobile host
has registered several Flow-to-Interface bindings with its home agent. It is worth noting that since the
port number on a correspondent host is a factor in distinguishing between different flows, the home agent
must treat already fragmented packets to mobile hosts specially. Currently, we reassemble these fragmented
packets at the home agent before forwarding them to mobile hosts. Since all packets pass through the home
agent, we do not suffer from the problem normally associated with doing reassembly at intermediate routers
(wherein the routers are not guaranteed to see all the fragments). As we describe in Section 8, the use of
the IPv6 flow label will eliminate the need for de-fragmentation.
The Flow-to-Interface bindings are registered to the home agent via two new extensions of the Mobile
IP registration messages. The following sections review briefly the Mobile IP registration procedure and
detail the new extensions we have devised.
5.3.2. The Mobile IP Registration Procedure
A mobile host and its home agent exchange Mobile IP registration request and reply messages in
UDP packets. The format of the registration request packet is shown in Figure 7. The fixed portion of
the registration request is followed by extensions, a general mechanism to allow optional information and
protocol extensibility. We use this general mechanism to extend the Mobile IP protocol so that different
flows to a mobile host can be associated with different network interfaces on that host.
5.3.3. The Flow-to-Interface Binding Extension
A mobile host may ask a home agent to register a number of Flow-to-Interface bindings by appending
to a registration request a list of Flow-to-Interface extensions, each defining a Flow-to-Interface binding to
be registered. The Flow-to-Interface binding extension format is shown in Figure 8.
Figure 7. Mobile IP Registration Request. A registration request is used by a mobile host to create, on its home agent, a
mobility binding from the static IP address at its home network (i.e. its home address) to its current care-of address. A general
extension mechanism is defined for extending the protocol. (Fields shown: Type, flags S/B/D/M/G/V/T/R, Lifetime, Home
Address, Home Agent, Care-of Address, Extensions.)
Figure 8. Flow-to-Interface Binding Extension. The CH (correspondent host) Address and the CH Port fields in this extension
together with the mobile host's home address define the flow that should now be associated with the care-of address specified
in the Flow Care-of Address field. (Fields shown: Type, Length, CH Address, CH Port, Flow Care-of Address.)
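For illustration, the extension of Figure 8 could be declared as the packed structure below; the field widths and ordering are assumptions based on the figure and its caption, not the authors' exact wire format.

```c
/* One possible wire layout for the Flow-to-Interface Binding Extension
 * (field widths and packing are assumptions, not a specification). */
#include <stdint.h>

#pragma pack(push, 1)
struct flow_to_if_ext {
    uint8_t  type;           /* chosen in the 128-255 "skippable" range */
    uint8_t  length;         /* length of the data that follows */
    uint32_t ch_addr;        /* correspondent host address */
    uint16_t ch_port;        /* correspondent host port */
    uint32_t flow_care_of;   /* care-of address to use for this flow */
};
#pragma pack(pop)
```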
Figure 9. Flow-to-Interface Binding Update Extension. All flows previously bound to the old care-of address should now be
redirected to the new care-of address. (Fields shown: Type, Length, Reserved, New Care-of Address, Old Care-of Address.)
5.3.4. The Flow-to-Interface Binding Update Extension
A mobile host may ask a home agent to redirect certain existing flow bindings to a different care-of
address by using the Flow-to-Interface binding update extension. The binding update extension format is
detailed in Figure 9. This extension is useful when a mobile host changes the point of attachment of one of
its interfaces and obtains a new care-of address for this interface.
5.3.5. Some Compatibility Issues
We want to ensure that unmodified mobile hosts are able to use our enhanced home agent, and that
our enhanced mobile hosts work properly with unmodified home agents. The first scenario is not an issue,
since the enhanced home agent can process both regular registration messages as well as those with our new
extensions. However, the second scenario requires further consideration.
According to the Mobile IP specification [23], when an extension numbered within the range 0 through
127 is encountered but not recognized, the message containing that extension must be silently discarded.
When an extension numbered in the range 128 through 255 is encountered but not recognized, only that
particular extension is ignored, but the rest of the extensions and the message must still be processed. We
choose to number our new extensions within the range 128 to 255, since it is undesirable to have registration
packets silently discarded, causing the system to wait for timeouts instead.
When a mobile host sends a Flow-to-Interface binding registration to a home agent that does not
support Flow-to-Interface bindings, the packets will still be processed. This is safe, since the registration
message is a normal registration message excluding the Flow-to-Interface binding extension. However, to
distinguish these unsuspecting home agents from the enhanced home agents, we make the enhanced home
agents use different return codes in reply to registration requests with Flow-to-Interface extensions. If the
registration is accepted, the home agent returns the code 2 instead of 0. The return code 3 (instead of 1) will
be used if the registration is accepted and simultaneous mobility binding is granted. Therefore, if a mobile
host sends out registration requests with Flow-to-Interface binding extensions but gets a return code 0 or 1
in reply, it should stop sending Flow-to-Interface binding extensions to its home agent, since continuing to
send them will just waste bandwidth.
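A sketch of the mobile host's handling of these return codes is shown below; the constant names are invented, and only the code values 0-3 come from the text.

```c
/* Illustrative reaction to the registration reply codes described above. */
enum { REG_ACCEPTED = 0, REG_ACCEPTED_SIMULT = 1,
       REG_ACCEPTED_FLOWS = 2, REG_ACCEPTED_FLOWS_SIMULT = 3 };

static int ha_supports_flow_bindings = 1;   /* assume support until told otherwise */

static void handle_registration_reply(int code)
{
    if (code == REG_ACCEPTED || code == REG_ACCEPTED_SIMULT) {
        /* Legacy home agent: it ignored our 128-255 extensions, so stop
         * sending Flow-to-Interface extensions and save the bandwidth. */
        ha_supports_flow_bindings = 0;
    } else if (code == REG_ACCEPTED_FLOWS || code == REG_ACCEPTED_FLOWS_SIMULT) {
        ha_supports_flow_bindings = 1;       /* enhanced home agent */
    }
}
```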
6. Implementation Status and Experiments
6.1. Implementation Status
We have implemented the support for multiple packet delivery methods and simultaneous use of multiple
network interfaces on top of our Mobile IP implementation [3] under Linux (currently kernel version 2.0.36).
Some core functions of our Mobile IP implementation reside within the kernel, such as packet encapsulation
and forwarding. Other functions, such as sending and receiving registration messages, are implemented in a
user-level daemon.
A mobile host can choose the use of either Mobile IP or regular IP and the use of either bi-directional
tunneling or triangular routing for different flows of traffic all at the same time. We also provide mobility-aware
applications with the flexibility to choose incoming and outgoing interfaces. We use two new socket
options to bind flows to given interfaces (a usage sketch in C follows the list):
SO_BINDTODEVICE:(1) This option is used by an application to bind the outgoing flows of a socket to
an interface.
SO_BIND_FLOWTODEVICE: This option is used by an application to bind the incoming flows of a
socket to a given interface.
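A possible application-level usage of the two options is sketched below. SO_BINDTODEVICE is the standard Linux option; SO_BIND_FLOWTODEVICE is the authors' extension, so the numeric value and the interface-name argument format used here are assumptions.

```c
/* Example use of the two socket options from an application. */
#include <sys/socket.h>
#include <string.h>

#ifndef SO_BIND_FLOWTODEVICE
#define SO_BIND_FLOWTODEVICE 42      /* placeholder value, not standard */
#endif

int bind_flows(int sock)
{
    const char out_dev[] = "eth0";       /* outgoing flows over Ethernet */
    const char in_dev[]  = "metricom0";  /* incoming flow via the radio (name assumed) */

    if (setsockopt(sock, SOL_SOCKET, SO_BINDTODEVICE,
                   out_dev, sizeof(out_dev)) < 0)
        return -1;
    if (setsockopt(sock, SOL_SOCKET, SO_BIND_FLOWTODEVICE,
                   in_dev, sizeof(in_dev)) < 0)
        return -1;
    return 0;
}
```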
6.2. Experiments with Multiple Packet Delivery Methods
6.2.1. Benefits
The choice of delivery method has a performance impact on traffic flows. We look at a mobile host
in one particular real scenario to illustrate the potential benefits certain flows will be able to receive when
they can use the more efficient delivery methods made possible by our flexible mechanism. Note that other
scenarios could see very different results depending on the actual network latency between the mobile host,
the home agent and the correspondent host.
We evaluate the latency improvement resulting from using the most direct route possible under the
circumstances. For these experiments, the mobile host connects to a foreign network in one of our campus
residences, which is also on Ethernet, and registers its current care-of address with its home agent. The setup
of the test scenario is illustrated in Figure 10. The mobile host sends ping (ICMP echo request) packets
to the default gateway (acting as its correspondent host) of its local subnet and we measure the round-trip
latency using the following three delivery methods, switching between them by manipulating the Mobile
Policy Table:
Regular IP (no transparent mobility support);
Triangular delivery (the default behavior in Mobile IP);
Bi-directional tunneling (for security-conscious boundary routers).

(1) The name SO_BIND_OUTFLOW_TO_INTERFACE would be more appropriate, and the next socket option should also be
named accordingly. Unfortunately, our original name SO_BINDTODEVICE is already in the standard Linux distribution.

Figure 10. Setup of the Test Environment for Experiments on Multiple Packet Delivery Methods. The mobile host visits the
campus residence network as a foreign network. The mobile host is a Thinkpad 560, and the home agent is a 90MHz Pentium.
They are both running Linux.

Table 2. Latency Comparison When Using Small Packets. Using 64 bytes of ICMP data (i.e., the default ping
packet size), the test for each delivery method is repeated 100 times with an interval of 2 seconds.

Delivery Method            Min (ms)   Max (ms)   Avg (ms)   Standard Deviation
Bi-directional Tunneling   7.3        9.3        7.9        0.36
Triangular Route           4.5        6.2        5.0        0.35
Regular IP                 2.0        4.0        2.4        0.35

Table 3. Latency Comparison When Using Large Packets. Using 1440 bytes of ICMP data, the test for each
delivery method is repeated 100 times with an interval of 2 seconds.

Delivery Method            Min (ms)   Max (ms)   Avg (ms)   Standard Deviation
Bi-directional Tunneling   37.4       42.4       38.4       1.2
Triangular Route           24.6       30.9       25.4       1.0
Regular IP                 11.5       14.5       12.4       0.6
We collect the data by repeating the above test 100 times with an interval of two seconds for each
delivery method. The results are shown in Table 2 for small packets and Table 3 for large packets. For this
experimental setup, our flexible mechanism reduces the latency by one half for both small and large packets
when choosing regular IP over the default Mobile IP behavior. Even when mobility support is necessary,
this flexible mechanism still reduces the latency by about one third for small and large packets when we can
choose the triangular route over the robust but more costly bi-directional tunnel.
6.2.2. Cost
We measure both the time spent in doing regular route lookup and the time spent in doing route lookup
with a Mobile Policy Table lookup on a mobile host. We then examine the difference to determine the added
latency in making policy decisions according to the Mobile Policy Table.
The results are shown in Table 4. The overhead of consulting the Mobile Policy Table in route lookups
is small, less than 20 µs even when the entries are not currently cached. This is less than one percent of the
round-trip time between two hosts on the same Ethernet segment (typically around 2 ms).

Table 4. Cost to Support the Simultaneous Use of Multiple Packet Delivery Methods. This experiment measures
both the time needed for the regular route lookup and the time spent to do the modified route lookup
with Mobile Policy Table (MPT) consultation. Each measurement is repeated 10 times. We tested cases
for both a cache hit and a cache miss when looking up routes.

Experiment             Cached Time (µs)   Uncached Time (µs)
Regular route lookup   18.5 ± 0.8         100.1 ± 0.9
Route lookup w/ MPT    23.5 ± 0.7         116.5 ± 1.2

Figure 11. Setup of the Test Environment for Experiments on Flow Demultiplexing. The Mobile Host is a Gateway2000
(486DX2-40 processor) and the Home Agent is a Toshiba Tecra (Pentium 133MHz), both of which have an Ethernet interface
as well as a Metricom radio interface. The Correspondent Host is a 90MHz Pentium. All machines are running Linux.
6.3. Experiments with Multiple Active Interfaces
The main possible source of overhead for supporting multiple interfaces is the flow demultiplexing
processing on the home agent, which may affect both the latency and the throughput. The goal of the
experiments described in this section is to evaluate the latency cost of this demultiplexing, i.e. the extra time
it takes for a home agent to forward a packet addressed to a mobile host.
The setup of our test environment is illustrated in Figure 11. For these experiments, the mobile host
connects to a foreign network on one of our department's Ethernets and registers its current care-of address
with its home agent. The correspondent host sends ping packets to the mobile host. We measure the flow
binding demultiplexing time at the home agent by monitoring the code using the Linux do_gettimeofday
kernel function. We consider two cases: when the home agent has no Flow-to-Interface bindings, regular
Mobile IP is used; when the home agent contains Flow-to-Interface bindings, it must search the list of
bindings before forwarding a packet to the mobile host.
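The measurement pattern is the usual time-bracketing of the code under test. The user-space sketch below illustrates it with gettimeofday(); in the kernel the authors use the analogous do_gettimeofday() call around the flow-binding lookup.

```c
/* User-space illustration of bracketing a code section with timestamps. */
#include <stdio.h>
#include <sys/time.h>

static long elapsed_us(const struct timeval *a, const struct timeval *b)
{
    return (b->tv_sec - a->tv_sec) * 1000000L + (b->tv_usec - a->tv_usec);
}

static void work_under_test(void) { /* stands in for the flow-binding lookup */ }

int main(void)
{
    struct timeval t0, t1;
    gettimeofday(&t0, NULL);
    work_under_test();
    gettimeofday(&t1, NULL);
    printf("demultiplexing took %ld us\n", elapsed_us(&t0, &t1));
    return 0;
}
```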
We collect the data by repeating the above test 10 times each as the number of bindings in the home
agent's list increases from 0 to 60. Table 5 displays the results.
These results for the flow binding demultiplexing cost may seem large compared to the demultiplexing
time incurred with regular Mobile IP on the home agent (2.1 µs), but the extra cost is a small additional
factor when compared to the round-trip time between the mobile host and the correspondent host, which is
approximately 5400 µs in this experimental setup. The flow binding demultiplexing cost varies from 2.3 µs
for one binding to 9.2 µs for 60 bindings.
Note that the cost per flow binding decreases as the number of flow bindings in the list increases, with
the result converging to about 0.12 µs. This result is explained by the structure of the flow binding lookup
function. This function is composed of two parts. The first part takes a fixed amount of time to locate an
entry for the mobile host in the mobility binding table on the home agent by hashing on the mobile host's
home IP address. The second part searches into a list for a flow binding corresponding to the incoming
packet. When the number of bindings in the list is small, the cost of the first part dominates. On the other
hand, when the number of bindings in the list is large, the cost of the first part is amortized, and its effect
becomes negligible.

Table 5. Flow Binding Demultiplexing Cost. The Packet Processing column displays the total time spent by the
home agent in deciding how to forward a packet to the mobile host. The Demultiplexing by Flow column
displays the cost of flow demultiplexing which is part of the total packet processing time.

Flow Bindings   Packet Processing Time (µs)   Standard Deviation   Demultiplexing by Flow (µs)   Cost Per Binding (µs)
                5.3                           0.46                 3.2                           0.11
7. Related Work
There are several projects focusing on providing flexibility support for mobile hosts at different layers
of the network software stack.
Work at Oregon Graduate Institute and Portland State University [11] concentrates on lower network
layers than ours. It addresses the flexibility needs of a mobile host in the face of physical media changes,
mainly dealing with IP reconfiguration issues. The focus is on a model that determines the set of available
network devices and dynamically reconfigures a mobile system in response to changes in the link-layer
environment.
The CMU Monarch project [12] aims at enabling mobile hosts to communicate with each other and with
stationary or wired hosts transparently and adaptively, making the most efficient use of the best network
connectivity available to the mobile host at any time. It has an overall goal similar to ours, although the
project currently does not support the use of multiple packet delivery methods simultaneously. Monarch
also focuses on ad hoc networking, which we currently do not support.
The CMU Odyssey project [21] extends the Unix system call interface to support adaptation at the
application layer. Odyssey exposes resources to applications by allowing them to specify a bound of tolerance
for a resource. When the behavior of the resource moves outside the tolerance window, the application is
informed via an up-call.
Making simultaneous use of multiple interfaces is also an important goal of the work by Maltz and
Bhagwat [17]. Their approach provides transport layer mobility by splitting each TCP session between a
mobile host and a server into two separate TCP connections at the proxy, through which all packets between
the mobile host and the server will travel. It allows a mobile host to change its point of attachment to the
Internet by rebinding the mobile-proxy connection while keeping the proxy-server connection unchanged. It
can control which network interfaces are used for different kinds of data leaving from and arriving at the
mobile host. However, their work differs from ours in that it does not use Mobile IP and it selects the
interface by explicitly picking the corresponding IP addresses to use on a per session basis, while with our
enhancement to Mobile IP, we can select different interfaces to use for different flows even if the mobile host
always assumes its static home address.
The BARWAN project [14] at UC Berkeley also provides network layer mechanisms to support mobile
hosts, but with a different focus. It aims at building mobile information systems upon heterogeneous wireless
overlay networks to support services allowing mobile applications to operate across a wide range of networks.
They concentrate on providing low-latency hand-offs among multiple network devices.
The use of a Mobile Policy Table is similar to the Security Policy Database (SPD) in IPsec [15], which
needs to be consulted in processing all traffic. While the Mobile Policy Table is used to direct the addressing
and routing decisions in sending packets, the SPD deals with the security aspects of traffic control. Another
difference is that the Mobile Policy Table is only in effect for outgoing traffic. It affects the incoming traffic
through the selection of the source IP address for the outgoing traffic and/or through coordination with the
home agent. The SPD is in effect for both inbound and outbound traffic all the time.
In summary, our approach focuses on the network layer, providing increased support for a mobile host
to control how it sends and receives packets. Our work differs from other work in the combination of the
following features:
1. Mobile IP is an integral part of our system. We address the issue of how Mobile IP can be used most
efficiently and flexibly on mobile hosts.
2. Our system provides support for multiple packet delivery methods.
3. Our system enables the use of multiple active interfaces simultaneously.
8. IPv6 Considerations
With IPv6 [10] a mobile host will always be able to obtain a co-located care-of address in the network
it is visiting. However, the fact that a mobile host will always have both a home role (acting as a host
still connected to its home network) and a local role (acting as a normal host in the visited network)
simultaneously will still be an issue. An example is in multicast scoping. A mobile host needs to assume its
home role if it wants to join the multicast group scoped within its home domain, while it needs to assume
its local role to join the multicast group scoped within the visited domain. Therefore, because of the duality
of the roles a mobile host can assume, there is always the need for a mobile host to select different packet
delivery methods for different traffic flows.
With regard to multiple interface management under IPv6, we have the following observations:
1. Mobile IPv6 [24] provides route optimization. As a result, the correspondent hosts can send packets
directly to the mobile host without going through the home agent. Therefore, the Flow-to-Interface
binding extensions need to be sent to the correspondent hosts as well.
2. The IPv6 flow label field can be very useful for providing multiple interface support. The flow label is
a field of the IPv6 header that is set by the source of the packet and used by routers to identify flows.
This flow label field could be used by the Home Agent to demultiplex packets and forward them on the
appropriate interfaces without looking into the transport layer field for the port information to identify
a flow. This also simplifies the handling of fragmented packets on the home agent by eliminating the
need to reassemble them before forwarding them to the mobile host.
3. IPv6 provides a priority (class) field used by routers to provide different services to different types of
packets. This field can potentially be used in our proposal by the home agent in determining the most
appropriate interfaces through which to forward packets addressed to a mobile host in the absence of
Flow-to-Interface binding registrations.
9. Conclusions and Future Work
Because of a mobile host's use of multiple interfaces and the dynamic environment in which it may
operate, a mobile host using Mobile IP demands flexibility in the way it sends and receives packets. Fine
granularity control of network traffic on a per flow basis is needed. Our experiments show that with reasonable
overhead we can address the flexibility needs of a mobile host by maintaining a balance between the robustness
and efficiency needs of packet delivery.
It is desirable to choose different packet delivery methods according to the characteristics of different
traffic flows. We have devised a general-purpose mechanism at the network routing layer to make this possible
by coupling the consultation of a Mobile Policy Table with the regular routing table lookup.
It is also desirable for a mobile host to be able to make use of multiple network interfaces simultaneously
for different flows of traffic. For this, we have added two new socket options and have extended the Mobile
IP protocol with new registration extensions so that a mobile host has more control over which interface to
IP protocol with new registration extensions so that a mobile host has more control over which interface to
use to send and receive packets.
10. Acknowledgements
We thank Petros Maniatis, Kevin Lai, Mema Roussopoulos, and Diane Tang of MosquitoNet Group
for discussions and feedback on the ideas presented in this paper. Ajay Bakre and Charlie Perkins provided
thoughtful comments and suggestions on an earlier version of the paper. We are also grateful to the
anonymous reviewers for their extremely detailed comments.
This work was supported in part by a student fellowship from Xerox PARC, a Terman Fellowship, a
grant from NTT DoCoMo, and a grant from the Keio Research Institute at SFC, Keio University and the
Information-technology Promotion Agency, Japan.
--R
IP Multicast Extensions for Mobile Internetworking.
Cellular Digital Packet Data Standards and Technology.
Supporting Mobility in MosquitoNet.
Computer Emergency Response Team (CERT).
Multicast Routing in Datagram Internetworks and Extended LANs.
Dynamic Host Configuration Protocol.
Network Ingress Filtering: Defeating Denial of Service Attacks which Employ IP Source Address Spoofing.
Hypertext Transfer Protocol - HTTP/1.1
Mobile Multicast (MoM) Protocol: Multicast Support for Mobile Hosts.
The New Internet Protocol.
Dynamic Network Reconfiguration.
Protocols for Adaptive Wireless and Mobile Networking.
Route Optimization in Mobile IP.
The Case for Wireless Overlay Networks.
Security Architecture for the Internet Protocol.
Experiences with a Mobile Testbed.
MSOCKS: An Architecture for Transport Layer Mobility.
Reverse Tunneling for Mobile IP.
Sun's SKIP Firewall Traversal for Mobile IP.
Agile Application-Aware Adaptation for Mobility
IP Encapsulation within IP.
IP Mobility Support.
Mobility Support in IPv6.
Internet Protocol.
Transmission Control Protocol.
An Introduction to GSM.
WaveLAN Support.
--TR
--CTR
Jun-Zhao Sun , Jukka Riekki , Jaakko Sauvola , Marko Jurmu, Towards connectivity management adaptability: context awareness in policy representation and end-to-end evaluation algorithm, Proceedings of the 3rd international conference on Mobile and ubiquitous multimedia, p.85-92, October 27-29, 2004, College Park, Maryland
Trevor Pering , Yuvraj Agarwal , Rajesh Gupta , Roy Want, CoolSpots: reducing the power consumption of wireless mobile devices with multiple radio interfaces, Proceedings of the 4th international conference on Mobile systems, applications and services, June 19-22, 2006, Uppsala, Sweden | multiple packet delivery methods;multiple network interfaces;mobility support;mobile IP;mobile policy table |
383761 | A distributed mechanism for power saving in IEEE 802.11 wireless LANs. | The finite battery power of mobile computers represents one of the greatest limitations to the utility of portable computers. Furthermore, portable computers often need to perform power consuming activities, such as transmitting and receiving data by means of a random-access, wireless channel. The amount of power consumed to transfer the data on the wireless channel is negatively affected by the channel congestion level, and significantly depends on the MAC protocol adopted. This paper illustrates the design and the performance evaluation of a new mechanism that, by controlling the accesses to the shared transmission channel of a wireless LAN, leads each station to an optimal Power Consumption level. Specifically, we considered the Standard IEEE 802.11 Distributed Coordination Function (DCF) access scheme for WLANs. For this protocol we analytically derived the optimal average Power Consumption levels required for a frame transmission. By exploiting these analytical results, we define a Power Save, Distributed Contention Control (PS-DCC) mechanism that can be adopted to enhance the performance of the Standard IEEE 802.11 DCF protocol from a power saving standpoint. The performance of an IEEE 802.11 network enhanced with the PS-DCC mechanism has been investigated by simulation. Results show that the enhanced protocol closely approximates the optimal power consumption level, and provides a channel utilization close to the theoretical upper bound for the IEEE 802.11 protocol capacity. In addition, even in low load situations, the enhanced protocol does not introduce additional overheads with respect to the standard protocol. | Introduction
The finite battery power of mobile computers represents one of the greatest limitations to the
utility of portable computers [10, 17]. Projections on the expected progress in battery technology
show that only a 20% improvement in the battery capacity is likely to occur over the next 10 years
[14]. Despite this fact, in Ad-Hoc network scenarios, portable computers often need to transmit and
receive data by means of a random-access, wireless transmission channel. These activities are among
some of the most power consuming operations to perform [11, 15]. If the battery capacity cannot
be improved, it is vital that power utilization is managed efficiently by identifying any way to use
less power preferably with no impact on the applications, nor on the resources' utilization. Several
studies have been carried out in order to define mechanisms useful for Power Saving (PS) in wireless
LANs [1, 11]. Strategies for power saving have been proposed and investigated at several layers
including the physical-layer, the MAC layer, the transport and the application levels [1, 11, 17].
Transmitter Power Control strategies to minimize power consumption, mitigating interference and
increasing the cell capacity have been proposed, and the design aspects of power-sensitive wireless
network architectures have been investigated [1, 13, 18]. The impact of network technologies and
interfaces on power consumption has been investigated in depth in [15]. The power saving features
of the emerging standards for wireless LANs have been analyzed in [12, 17].
In this paper we investigate a power saving strategy at the MAC level. Specifically, we focus
on the wireless ad-hoc network paradigm, and on the Carrier Sense Multiple Access with Collision
Avoidance (CSMA/CA) access mechanism adopted, for example, in the IEEE 802.11 WLANs
[9]. The IEEE 802.11 access scheme is based on two access methods: Distributed Coordination
Function (DCF) for asynchronous, contention-based, distributed accesses to the channel, and Point
Coordination Function (PCF) for centralized, contention-free accesses. We will concentrate our
analysis over DCF, which may be significantly affected by the congestion problem, due to distributed
random access characteristics.
For these CSMA/CA protocols the amount of power consumed by transmissions is negatively
affected by the congestion level of the network. Indeed, by increasing the congestion level, a
considerable power is wasted due to the collisions. To reduce the collision probability, the stations
perform a variable time-spreading of accesses, which results in additional power consumption, due
to Carrier Sensing 1 . Hence, these protocols suffer a power wastage caused both from transmissions
resulting in a collision and from the amount of Carrier Sensing (active reception time) introduced
by the time-spreading of the accesses. Furthermore, the transmission phase can be considered at
least twice as power-consuming as the reception (or idle) phase. This fact suggests a transmission
policy optimization, based on evaluating the risks (i.e. the costs/benefits) of
transmission attempts to be performed, given the current congestion conditions, and the system's
Power Consumption parameters. As the reduction of the time-spreading periods generally produces
an increase in the number of collisions, to minimize the power wastage the access protocol should
balance these two costs. Therefore the power saving criterion we adopt throughout this paper is
based on balancing the power consumed by the network interface in the transmission and reception
(e.g. Physical Carrier Sensing) phases. Since these costs change dynamically, depending on the
network load, a power-saving protocol must be adaptive to the congestion variations in the system.
In addition, a Power Saving strategy should balance the need for high battery lifetime with the
need to maximize the channel utilization and the QoS perceived by the network users [11].
In this paper, we propose a Power-Save, Distributed Contention Control (PS-DCC) mechanism
which can be used on top of the IEEE 802.11 DCF access method to maximize the power saving
while accessing the wireless channel. The proposed mechanism dynamically adapts the behavior of
each station to the congestion level of the system and, beyond providing power saving, it contributes
to obtain an optimal channel utilization, see [5].
The work is organized as follows: in Section 2 we present the actual Standard 802.11 DCF
access scheme, and critical aspects connected to the contention level of the system; in Section 3 the
DCC mechanism proposed in [2] is summarized; in Section 4 the analytical model of the system is
presented; in Section 5 the new PS-DCC mechanism is defined and analyzed; in Section 6 simulation
results are presented and discussed; in Section 7 conclusions and future research directions are given.
2 The IEEE 802.11 DCF access scheme
For the detailed explanation of the IEEE 802.11 Standard we address interested readers to [9].
Throughout we denote as active station a station with at least a frame to transmit. The Basic
Access method in the IEEE 802.11 MAC protocol is the Distributed Coordination Function (DCF)
which is a CSMA/CA MAC protocol. Every active station is required to perform a Carrier Sensing
activity to determine the current state of the channel (idle or busy). If the medium is found busy, the
station defers transmission.
(1) The latter point has been faced in some cases; for example, IEEE 802.11 networks try to reduce the amount of
Physical Carrier Sensing (CS), by exploiting Virtual CS based on Network Allocation Vectors (NAV) updating [9].
Whenever the channel becomes idle for at least a Distributed Interframe
Space time interval (DIFS), the station (re)starts its Basic Access mechanism. To avoid collisions
as soon as an idle DIFS is sensed on the channel, the Collision Avoidance mechanism adopted in the
Standard 802.11 DCF is based on a Binary Exponential Backoff scheme [6, 7, 9]. When the channel
is idle, time is measured in units of constant length indicated as slots in the following. Throughout
Slot Time will be used to denote the length of a slot. The Binary Exponential Backoff scheme is
implemented by each station by means of the Backoff Counter parameter, which maintains the
number of empty slots the tagged station must observe on the channel, before performing its own
transmission attempt. At the time a tagged station needs to schedule a new transmission, it selects
a particular slot among those of its own Contention Window, whose size is maintained in the local
parameter CW Size. Specifically, the Backoff value is defined by the following expression [9]:
Backoff_Counter(CW_Size) = INT( Rnd() * CW_Size ),        (1)

where Rnd() is a function returning pseudo-random numbers uniformly distributed in [0,1]. The
Backoff Counter is decreased as long as a slot is sensed idle, it is frozen when a transmission is
detected, and reactivated after the channel is sensed idle for at least a further DIFS. As soon as the
Backoff Counter reaches the value Zero the station transmits its own frame. Positive acknowledgements
are employed to ascertain a successful transmission. This is accomplished by the receiver
(immediately following the reception of the data frame) which initiates the transmission of an acknowledgement
frame (ACK) after a time interval Short InterFrame Space (SIFS), which is less
than DIFS. If the transmission generates a collision(2), the CW Size parameter is doubled for the
scheduling of the retransmission attempt, thus obtaining a further reduction of the contention. The
Binary Exponential Backoff is then characterized by the expression which gives the dependency of
the CW Size parameter on the number of unsuccessful transmission attempts (Num_Att)
performed for a given frame. In [9] it is defined that CW_Size is set equal to CW_Size_min
(assuming low contention) when a new frame is transmitted, and doubled each time a collision
occurs, up to a maximum value CW_Size_MAX, as follows:

CW_Size(Num_Att) = min( CW_Size_MAX, CW_Size_min * 2^(Num_Att - 1) ).        (2)
The increase of the CW Size parameter value after a collision is the reaction that the 802.11
Standard provides to make the access mechanism adaptive to channel conditions. This mechanism
could introduce a low utilization of the channel bandwidth, due to high collision rates in congested
scenarios [2]. The choice to initially assume a low level of congestion in the system avoids a great
access delay when the load is light. Unfortunately, this also represents a problem in bursty arrival
scenarios and in congested systems, because it introduces a concentration of accesses in a reduced
time window, and hence it causes a high collision probability. Moreover, Collision Detection (CD)
is not practicable in wireless channels, and this makes the cost of each collision proportional
to the maximum size among the colliding frames.
(2) A collision is assumed whenever the ACK from the receiver is missing [9].
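For illustration, expressions (1)-(2) translate into the following sketch; the contention-window limits and the random source are placeholders, not the Standard's PHY-specific parameters.

```c
/* Sketch of the Binary Exponential Backoff of expressions (1)-(2). */
#include <stdlib.h>

#define CW_SIZE_MIN 32       /* illustrative values */
#define CW_SIZE_MAX 1024

static double Rnd(void)                    /* uniform in [0,1) */
{
    return (double)rand() / ((double)RAND_MAX + 1.0);
}

static int cw_size(int num_att)            /* expression (2) */
{
    int cw = CW_SIZE_MIN;
    for (int i = 1; i < num_att && cw < CW_SIZE_MAX; i++)
        cw *= 2;
    return cw < CW_SIZE_MAX ? cw : CW_SIZE_MAX;
}

static int backoff_counter(int num_att)    /* expression (1) */
{
    return (int)(Rnd() * cw_size(num_att));
}
```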
3 The DCC mechanism
The reasons explained in the previous section, motivated the studies to improve the performance
of a random access scheme, by exploiting early and meaningful information concerning the actual
state of the channel congestion. The idea presented in [2] involves an estimate of the resource's
congestion level, which can be given by the utilization rate of the slots (Slot Utilization) observed
on the channel by each station. The estimate of the Slot Utilization has to be maintained and
updated by each active station, during the defer phase that precedes a transmission attempt, also
indicated as Backoff interval. Specifically, during its own Backoff interval, every active station
counts the number of transmission attempts it observes on the channel (Num Busy Slots), and
then divides this number by the total number of slots available for transmission observed on the
channel. Hence, a simple and intuitive definition of the Slot Utilization estimate is given by:
Slot_Utilization = Num_Busy_Slots / Available_Slots.        (3)
The Slot Utilization provides a normalized lower bound for the actual contention level of the
channel. In fact, as some stations may transmit in the same slot, it provides a lower bound to the
effective number of stations trying to access the channel during the last observed Backoff interval.
By assuming a time-locality of contention, such an information is significant about the state of the
channel in immediately successive slots.
A station which knows there are few possibilities for a successful transmission, should defer its
transmission attempt. Such a behavior could be achieved in an IEEE 802.11 network by exploiting
the DCC mechanism proposed in [2]. According to DCC, each station controls its transmission
attempts via a new parameter named Probability of Transmission (P_T(...)), which is dependent on
the contention level (Slot_Utilization) of the channel [2]:

P_T(Slot_Utilization, Num_Att) = 1 - (Slot_Utilization)^Num_Att,        (4)
where Num Att is the number of transmission attempts already performed for a given frame (in-
cluding the current one). Each station, after the Slot Utilization estimate, computes the Probability
of Transmission value, and with the same probability performs a real access to the shared channel.
If the station decides to defer the transmission, it reschedules a new attempt, as if
a collision had occurred. The Slot Utilization is interpreted here as the inhibition power of the DCC
mechanism on the accesses, depending on the contention level it represents. Num Att is used in
the DCC mechanism as an indicator of the dynamic level of privilege achieved by a station. The
lowest privilege is given to stations performing, for a given frame, the first transmission attempt,
while the privilege level linearly increases with the number of collisions experienced. The aim is to
privilege old transmission requests, obtaining a queue-emptying behavior for the system. To better
understand what has been obtained, we can observe Figure 1.

Figure 1: Probability of Transmission P_T(Slot_Utilization, Num_Att)

The proposed DCC mechanism
has the effect to asymptotically reduce the Slot Utilization level induced on the system, when the
congestion level grows indefinitely (Figure 1). If the Slot Utilization asymptotically approximates
the maximum value (One), then the mechanism's effect reduces the Probability of transmission to
Zero, for every station in the system. This means no accesses in the next slot. Hence, a Slot Utilization
equal to One can only be asymptotically obtained over a CW size MAX backoff interval.
This fact can be exploited by opportunely modifying the Probability of Transmission expression,
in order to induce an arbitrary upper limit (SU limit) to the Slot Utilization of the system:
P_T(Slot_Utilization, Num_Att) = 1 - (Slot_Utilization / SU_limit)^Num_Att.        (5)
By adopting the expression (5), the DCC mechanism would perform asymptotically by reducing
all the probabilities of transmission in correspondence of Slot Utilization values equal to or greater
than the SU_limit value. All remaining characteristics of the DCC mechanism are maintained.

Table 1: System parameters

Number of stations       from 2 to 200
Max channel bit rate     2 Mb/s
Average Payload size     2.5 and 100 Slot Units
ACK size                 4 Slot Units
Header size              2.72 Slot Units
Slot Time                50 µs
SIFS                     0.56 Slot Units (28 µs)
DIFS                     2.56 Slot Units (128 µs)
Propagation Time (τ)     < 1 µs
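The DCC access rule of expressions (3)-(5) can be sketched as follows; this is illustrative code, with SU_limit = 1 reproducing the basic rule (4).

```c
/* Sketch of the DCC transmission decision: a station draws a uniform random
 * number and transmits only if it falls below P_T; otherwise it defers. */
#include <math.h>
#include <stdbool.h>
#include <stdlib.h>

static double slot_utilization(long num_busy_slots, long available_slots)
{
    return available_slots > 0 ?
           (double)num_busy_slots / (double)available_slots : 0.0;   /* (3) */
}

static double p_transmit(double slot_util, int num_att, double su_limit)
{
    double x = slot_util / su_limit;
    if (x > 1.0) x = 1.0;
    return 1.0 - pow(x, (double)num_att);                            /* (4)-(5) */
}

static bool dcc_attempt(double slot_util, int num_att, double su_limit)
{
    double u = (double)rand() / ((double)RAND_MAX + 1.0);
    return u < p_transmit(slot_util, num_att, su_limit);
}
```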
4 Power Consumption Analysis
In this section the IEEE 802.11 DCF MAC power consumption is analytically estimated by developing
an approximated model with a finite number, M , of active stations operating in asymptotic
conditions (i.e. stations always have a packet ready for transmission). Once the power consumption
expression has been obtained, we minimize it to identify the optimal power consumption level for the
given network and load configurations. In the following we assume the physical and protocol pa-
rameters' values reported in Table 1.
Among the considered system's parameters, we also define PTX , indicating the power consumed
(in mW) by the Network Interface (NI) during the transmission phases, and PRX indicating the
power consumed (in mW) by the NI during the reception (or carrier sensing) phases.
Our model is based on the assumption that, for each transmission attempt, a station uses a
backoff interval sampled from a geometric distribution with parameter p, where

p = 1 / (E[B] + 1)

and E[B] is the average backoff time (in slot units). In the real IEEE 802.11 backoff algorithm, the
protocol behavior depends on the past history, however, results presented in [5] have shown that
a model of the protocol behavior based on the geometric assumption provides accurate estimates
(at least from a capacity analysis standpoint) of the IEEE 802.11 DCF protocol behavior. In our
model, we also assume that for all stations the payload length of transmitted frames is sampled
from a geometric distribution with parameter q.
The general scenario would require assuming a general distribution of the payload length, with
the estimation of the mean and the variance (using techniques similar to the RTT estimation in [16]).
The assumption of a constant payload distribution implies a constant collision length distribution
characterized by the same parameter q. In order to obtain a more realistic approximated model
we consider the geometric distribution which allows to consider the additional length of a collision
with respect to the average payload. This introduces in the model a characterization of the collision
overhead, which could be easily substituted by the corresponding analytical definition related to
any other considered traffic distribution. The obtained model is robust for each distribution
type of the payload size, and it could be defined for a deterministic distribution, obtaining a closed-form
definition [3].
To perform our analysis we focus on a tagged station and we observe the system at the end of
each successful transmission of the tagged station. From the geometric backoff assumption, all the
processes which define the occupancy pattern of the channel (i.e. empty slots, collisions, successful
transmissions) are regenerative with respect to the sequence of time instants corresponding to the
completion of tagged-station successful transmissions. By using the regenerative behavior of the
system, we derive a closed formula for the IEEE 802.11 DCF energy consumption. Specifically, by
defining as the j-th renewal period the time interval between the j-th and the (j + 1)-th successful
transmissions (i.e., the j-th virtual transmission time) of the tagged station, from renewal theoretical
arguments [8] it follows that:

Energy = lim_{n -> infinity} (1/n) * SUM_{j=1..n} Energy_j = E[Energy_j],        (6)

where Energy is the energy required by a station to perform a successful transmission, and Energy_j
is the tagged station energy consumption during the j-th renewal period (i.e., the j-th successful
transmission). By exploiting (6), the power consumption analysis can thus be performed by studying
the system behavior in a generic renewal period also referred to as virtual transmission time.
When more than one station is active, during a virtual transmission time the tagged station is
involved in a successful transmission and in some collisions (see Figure 2). Specifically, Figure 2
shows that before the tagged station performs a successful transmission, it may experience a number
of collisions and a number of not used slots. A not used slot is either a time interval in which
the transmission medium remains idle due to the backoff algorithm, or a time interval in which
the tagged station does not transmit but the transmission medium is busy due to the transmission
attempts of some of the other stations. During a not used slot the tagged station is in
the receiving state; hence, to determine the tagged-station power consumption during this period,
we need to estimate its length.

Figure 2: Structure of a virtual transmission time

By denoting with N_C the number of collisions experienced by a tagged station in a virtual
transmission time, we can subdivide a virtual transmission time into N_C + 1 subintervals (see
Figure 2). N_C subintervals terminate with a transmission attempt of the
tagged station which results in a collision, while the last subinterval (i.e., the (N_C + 1)-th subinterval)
terminates with a successful transmission of the tagged station. In each subinterval, there are a
number, N not used slot , of not used slots (from the tagged station standpoint) before the transmission
attempt of the tagged station. The number of not used slots in successive subintervals are
i.i.d. random variables sampled from a geometric distribution. Specifically, by considering that the
tagged station transmits in a slot with probability p, it can be shown that
E[N_not_used_slot] = (1 - p) / p.        (7)

To simplify the presentation it is useful to introduce the following random variable definitions,
which make reference to a generic j-th virtual transmission time (we will omit the index j). We
denote with:
- N_nus_k: number of consecutive not used slots during the k-th subinterval, in which the tagged station is in the receive state;
- Energy_nus_k: power consumption during the N_nus_k slots of the k-th subinterval, in which the tagged station is in the receive state;
- Energy_tagged_collision_k: power consumption experienced by the tagged station while it experiences the k-th collision during the j-th virtual transmission time;
- Energy_tagged_success: power consumption experienced by the tagged station when it obtains the j-th successful transmission.
By exploiting the above definitions, the tagged-station's power consumption (6) can be expressed
as:
Energy = E[ SUM_{n=1..N_C} (Energy_nus_n + Energy_tagged_collision_n) + Energy_nus_{N_C+1} + Energy_tagged_success ].        (8)

The above formula (8) can be re-written as:

Energy = E[ SUM_{n=1..N_C+1} Energy_nus_n ] + E[ SUM_{n=1..N_C} Energy_tagged_collision_n ] + E[Energy_tagged_success].        (9)

By exploiting the definition of conditioned averages:

E[ SUM_{n=1..N_C+1} Energy_nus_n ] = SUM_{x>=0} P{N_C = x} * E[ SUM_{n=1..x+1} Energy_nus_n | N_C = x ].        (10)

Furthermore, by noting that the random variables Energy_nus_i have the same distribution,
which does not depend on the N_C value, and by denoting with E[Energy_nus] their common average
value, it follows that:

E[ SUM_{n=1..N_C+1} Energy_nus_n ] = (E[N_C] + 1) * E[Energy_nus],        (11)

where E[Energy_nus] = E[N_not_used_slot] * E[Energy_not_used_slot], and E[Energy_not_used_slot] is the
average power consumption of the tagged station while it senses a not used slot.
In a similar way we can show that:

E[ SUM_{n=1..N_C} Energy_tagged_collision_n ] = E[N_C] * E[Energy_tagged_collision],        (12)

where E[Energy_tagged_collision] is the average power consumption of the tagged station while it
experiences a collision during a transmission.
Hence, by exploiting (11) and (12) the formula (9) can be written as:

Energy = (E[N_C] + 1) * E[N_not_used_slot] * E[Energy_not_used_slot] + E[N_C] * E[Energy_tagged_collision] + E[Energy_tagged_success],        (13)

where E[N_C], E[Energy_not_used_slot], E[Energy_tagged_collision] and E[Energy_tagged_success] are derived in
the following lemmas.(3)
(3) In this work we do not consider the energy consumed in on/off, Rx/Tx and Tx/Rx transitions of the network
interface. However, our analytical model can be easily extended by including these costs in Formula (13).
Lemma 4.1 Assuming that for each station the backoff interval is sampled from a geometric
distribution with parameter p:

E[N_C] = 1 / (1 - p)^(M-1) - 1.        (14)
It must be noted that some protocol overheads follow both a collision and a successful trans-
mission: i) the maximum propagation delay τ which is required (in the worst case) to sense the
channel idle after a transmission, and ii) the DIFS idle interval to be sensed after a transmission.
Let us denote with Collision and with S the average length of a collision (not involving the tagged
station) and of a successful transmission, including their overheads. Hence, by denoting with τ the
maximum propagation delay between two WLAN stations,

Collision = E[Coll_not_tagged] + τ + DIFS,

where E[Coll_not_tagged] (i.e. the average length of a collision not involving the tagged station) has
been derived in [5]. A successful transmission in the IEEE 802.11 WLAN includes the time interval
between the start of a transmission which does not experience a collision and the end of the first
DIFS which follows the successful transmission; the corresponding expression for S is given in [5],
where, by denoting with q the parameter of the geometric distribution for the payload size, the
average length of the transmission (payload) in slot units is also expressed as a function of q.
As τ is a bound on the propagation delay, it appears in both the Collision and S formulas.
Lemma 4.2 By denoting with p the parameter of the geometric distribution which defines the
backoff interval, the tagged station average power consumption during a not used slot,
E[Energy_not_used_slot], admits a closed-form expression (see [3] for the derivation).
The analytical study of the average energy required for the frame transmission of a tagged station
does not consider the energy consumption required for the transmission of acknowledgements.
This is acceptable given the fact that acknowledgements are not subject to contention because they
are transmitted immediately after the SIFS following the end of the received frame, see Section 2.
Lemma 4.3 By denoting with p the parameter of the geometric distribution which defines the
backoff interval, the tagged station average power consumption when it performs a successful
transmission in a slot, E[Energy_tagged_success], admits a closed-form expression (see [3] for the derivation).
Lemma 4.4 By denoting with p the parameter of the geometric distribution which defines the
backoff interval, and with q the parameter of the geometric distribution which defines the payload
size, the tagged station average power consumption when it experiences a collision while transmitting,
E[Energy_tagged_collision], admits a closed-form expression (see [3] for the derivation).
By assuming that all the times are expressed in SlotTime units, and that the PTX and PRX values
are expressed in mW/SlotTime units, the average energy requirement for a frame transmission
is given (in mJ units) by substituting the expressions of Lemmas 4.1-4.4 into Formula (13).
The proofs of the previous lemmas can be found in [3]. We define p_opt as the value of p (i.e. of the average
backoff value) which minimizes the Energy consumption, for fixed M, q (i.e. the average payload size)
and PTX, PRX system parameters. The p_opt values for various combinations of the proposed
parameters have been calculated. In Section 5 we will exploit the results of the analytical model,
to define the PS-DCC mechanism.
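As an illustration of how p_opt can be computed, the sketch below performs a simple one-dimensional search of the energy expression (13); the evaluation of Energy(p) itself is assumed to be supplied by the caller, since its closed form is not reproduced here, and the step size is arbitrary.

```c
/* Minimal sketch of the numerical minimization yielding p_opt,
 * assuming the closed-form Energy(p) of formula (13) is provided. */
typedef double (*energy_fn)(double p, void *ctx);   /* Energy per success, given p */

static double find_p_opt(energy_fn energy_per_success, void *ctx)
{
    double best_p = 1e-4;
    double best_e = energy_per_success(best_p, ctx);
    for (double p = 2e-4; p < 1.0; p += 1e-4) {      /* coarse grid search */
        double e = energy_per_success(p, ctx);
        if (e < best_e) { best_e = e; best_p = p; }
    }
    return best_p;
}
```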
5 The PS-DCC Mechanism
In Section 3 we presented a mechanism, named DCC, which could be used to control the contention
level in an IEEE 802.11-like network. Furthermore, we have shown that the DCC approach can
guarantee, given a threshold value SU limit, that the Slot Utilization in the network is bounded
by the threshold value. In Section 4 we defined an approximated analytical model for an IEEE
802.11-like protocol, to study the tuning of the protocol parameters, in order to minimize the power
consumption required to successfully transmit a message. In this section, by putting together the
above results, we define a mechanism named Power Save Distributed Contention Control (PS-DCC),
that can be used to enhance an IEEE 802.11 network from the Power Consumption standpoint.
Specifically, we first show that the optimal parameter setting for power consumption can be mapped
in a boundary value for the network Slot Utilization, named Asymptotical Contention Limit (ACL).
Then, we define the PS-DCC mechanism by adequately inserting the ACL value into the DCC
mechanism.
In Section 5.1 we show that a strong relationship exists between the Slot_Utilization and the M · p_opt
values. Such a relationship is useful in the definition of the PS-DCC mechanism.
5.1 The relationship between Slot_Utilization and M · p_opt
In the following we investigate the meaning of the (M · p_opt) values, and their relationship with the
Slot_Utilization, in a considered system. As seen in Section 4, we assume M active stations
scheduling their respective transmission attempts in a slot selected following a geometric distribution
with parameter p. Under the optimality assumption, the expression which defines the optimal value
p_opt of the p parameter has been obtained. Let us now assume that each station uses the
optimal backoff value p_opt. The (M · p_opt) value is an upper bound of the Slot_Utilization in the
system; in fact,

Slot_Utilization = 1 - (1 - p_opt)^M ≈ M · p_opt

when p_opt is small. Moreover, the negative second-order term of the binomial expansion of (1 - p_opt)^M
guarantees that M · p_opt is a tight upper bound of the Slot_Utilization in a system operating with the optimal
p value from the power consumption viewpoint.

Figure 3: The M · p_opt values and the steady-state Slot_Utilization (M · p_opt versus the number of active
stations M, for payloads of 2.5 and 100 slots and PTX/PRX = 2 and 100, compared with the Standard 802.11
and DCC Slot_Utilization)
Given the system's parameters of Table 1, in Figure 3 we compare the optimal Slot_Utilization
values (represented by the (M · p_opt) values) with the steady-state Slot_Utilization level estimated
(via simulation) in an IEEE 802.11 network with or without the DCC mechanism. The number
of active stations M is variable, representing the variable contention level of the channel. The
considered systems are characterized by the average payload parameter (considered values for the
average payload size: 2.5 and 100 SlotTimes) and the PTX/PRX ratio (considered values: 2
and 100). By adopting the IEEE 802.11 protocol the Slot_Utilization level does not depend on
the payload parameter value, while it is connected only to the number of active stations, M.
Results reported in Figure 3 show that in the standard protocol the Slot_Utilization values are
generally greater than the optimal values. This result is marked when the mean frame size is large
because the standard protocol produces a Slot_Utilization level which
does not depend on the frame size. On the other hand, in the optimal case, the increase of the
frame size (which means an increase in the collision-cost) is balanced by a decrease of the collision
probability, achieved by decreasing the Slot_Utilization value. Even if the DCC mechanism [2]
correctly reduces, with respect to Standard 802.11, the Slot_Utilization (i.e. the contention level)
under high-load conditions, the results presented in Figure 3 indicate that DCC does not produce
the optimal contention level. Furthermore, these results indicate that an algorithm which wishes
to drive the system to the optimal power consumption must take into consideration the average
value of the payload parameter.
5.2 Theoretical Power Saving limits: the ACL invariant figures
The (M · p_opt) values allow us to define the optimal contention limits that guarantee power-saving optimality in the considered systems. Looking at Figure 3, for a fixed value of the payload parameter, it can be observed that: i) if the PTX/PRX ratio is low (e.g., 2), then the (M · p_opt) product is quasi-constant for M greater than 2, and ii) if the PTX/PRX ratio is high (e.g., 100), then the (M · p_opt) values are significantly affected by the value of M. Observation i) (PTX/PRX=2) means that, given the system's parameters of Table 1, it is possible to define a single quasi-optimal value for the product (M · p_opt) as a function of the payload parameter. Hence, for the value 2 of the PTX/PRX ratio, it is possible to define an Asymptotic Contention Limit function ACL_PTX/PRX(payload), with payload in [2..MPDU] (we assume that the payload size of transmitted frames is at least 2 SlotTimes and at most the maximum protocol data unit, MPDU, expressed in slot units), such that ACL_2(payload) ≈ (M · p_opt) for the considered system. This function can be defined off-line depending on the system's parameters, and it represents the optimal level of Slot Utilization the system should reach, under the given access scheme, to guarantee power-consumption optimality. This definition does not depend on M. In the scenario where PTX/PRX is high (e.g., 100), the definition of a single optimal function ACL_100(payload) ≈ (M · p_opt) is not possible, given the significant influence of M. Nevertheless, it is possible to define a function ACL_100(payload) by considering only high values of M (e.g., 100), and then to evaluate the overhead introduced by using such a definition in a scenario with few stations. This analysis has been done in [3], and the results have shown that the overhead introduced (with respect to the optimal power consumption level) is low. This is justified because such a definition introduces a Slot Utilization limit higher than the optimal one in a scenario with few stations, but in that case the collision probability is low. This encourages us to assume that such an analytical definition of the function ACL_PTX/PRX(payload) gives results near optimality in terms of power consumption, for each considered system.
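Since ACL_PTX/PRX(payload) is defined off-line from the analytical (M · p_opt) values, one convenient realization is a small lookup table indexed by payload size. The sketch below only illustrates that idea; the sample values are placeholders and the function name is our own, not part of the mechanism's specification — the real numbers must come from the analytical model of the considered system.

    # Sketch: off-line construction of an ACL(payload) function from sampled
    # (M * p_opt) values. The sample points below are placeholders, not the
    # values produced by the analytical model.
    from bisect import bisect_left

    # (payload size in SlotTimes, M * p_opt) pairs, assumed precomputed off-line.
    ACL_SAMPLES_PTXPRX_2 = [(2.5, 0.40), (10, 0.30), (50, 0.20), (100, 0.15)]

    def make_acl_function(samples):
        """Return ACL(payload) as a piecewise-linear interpolation of the samples."""
        xs = [p for p, _ in samples]
        ys = [v for _, v in samples]
        def acl(payload):
            if payload <= xs[0]:
                return ys[0]
            if payload >= xs[-1]:
                return ys[-1]
            i = bisect_left(xs, payload)
            # Linear interpolation between the two surrounding sample points.
            frac = (payload - xs[i - 1]) / (xs[i] - xs[i - 1])
            return ys[i - 1] + frac * (ys[i] - ys[i - 1])
        return acl

    acl_2 = make_acl_function(ACL_SAMPLES_PTXPRX_2)
    print(acl_2(25))  # optimal Slot Utilization target for a 25-slot average payload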
5.3 The PS-DCC Mechanism
The previous considerations allow us to set the limit for the Slot Utilization in the considered system which guarantees a reduced influence of collisions on the power consumption. Specifically, it is possible to incorporate this optimality in the reaction performed by the DCC mechanism, by limiting the Slot Utilization to its optimal upper bound, the Asymptotic Contention Limit (ACL). Assuming the ACL_PTX/PRX(payload) function to be known, in Section 3 we defined the new Probability of Transmission (P_T), Equation (24), which realizes the expected effect as a function of the estimated Slot Utilization and of the ACL_PTX/PRX(payload) limit.
The PS-DCC mechanism requires the payload and the Slot Utilization estimates to determine the value of the transmission probability (24). Details regarding the implementation of the estimation algorithms can be found in [3, 4]. Figure 4 shows an example of the ACL_PTX/PRX(payload) functions obtained for the considered system, for PTX/PRX equal to 2 and to 100.
Figure 4: The analytical ACL_PTX/PRX(payload) functions, plotted against the payload size (slot units).
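Equation (24) itself is not reproduced in this excerpt; the sketch below only illustrates the control structure that the PS-DCC rule implies: each station estimates the Slot Utilization and the average payload size, looks up the ACL bound, and scales down its transmission probability when the estimated contention exceeds that bound. The scaling formula used here is a deliberately simple stand-in chosen for the example, not the P_T definition of the paper; all function names are assumptions.

    # Sketch of the PS-DCC per-slot transmission decision. 'p_transmit' is an
    # illustrative stand-in for Equation (24), not the actual definition.
    import random

    def p_transmit(slot_utilization, acl_bound, num_att):
        """Illustrative throttle: back off when utilization exceeds the ACL bound."""
        if slot_utilization <= 0:
            return 1.0
        ratio = min(1.0, acl_bound / slot_utilization)
        # Stations with many failed attempts (num_att) are throttled less,
        # mimicking the priority effect described for PS-DCC.
        return min(1.0, ratio ** (1.0 / max(1, num_att)))

    def should_transmit(est_slot_utilization, est_payload, acl_fn, num_att):
        acl = acl_fn(est_payload)          # off-line ACL lookup (see previous sketch)
        return random.random() < p_transmit(est_slot_utilization, acl, num_att)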
6 Simulation results
The behavior of the proposed PS-DCC mechanism has been studied from the power consumption viewpoint by means of discrete-event simulation. Our interest was to investigate the advantages of the new PS-DCC mechanism with respect to the actual Standard 802.11 DCF access scheme (STND) and to the optimal values (OPT). Specifically, we focused our investigation on the average power consumption for a frame transmission, and on the channel utilization level, when varying the contention level on the transmission channel. The contention level was varied in each simulation by means of a variable number of active stations with continuous transmission requirements. The number of stations was varied from 2 to 200, in order to i) stress the system, ii) show the system's scalability, and iii) point out the absence of introduced overheads. The physical characteristics and parameters of the investigated system are reported in Table 1. Simulations consider only two extreme power consumption scenarios, with PTX/PRX equal to 2 and to 100.
According to the studies of TCP traffic [16], we consider either systems with "long messages" (average payload length of 100 SlotTimes, 1250 Bytes) or systems with "short messages" (average payload length of 2.5 SlotTimes, 32 Bytes). The RTS/CTS option of the IEEE 802.11 protocol [9] has not been explicitly considered in our analysis and simulations, because with that option the contention phase is restricted to the RTS transmission. After the RTS/CTS exchange all stations not involved in the transmission are frozen, and the payload transmission is already optimized from the power saving point of view because the contention has been resolved. Given these considerations, the RTS/CTS option is optimized when the transmission of short messages is optimized. Simulations will show that PS-DCC gives advantages with short messages (32 Bytes) that are comparable in size with RTS messages; since the RTS threshold is usually greater than 32 Bytes, the PS-DCC mechanism is effective also when the RTS/CTS option is implemented. The effect of Hidden Terminals [9] or of any other spatial inequities has not been considered here. We assume that the only cause of collision between two or more transmissions is the selection of the same slot. This allowed us to study the problem of the optimal power consumption in random access schemes with respect to the influence of the contention level. If spatial inequities or Hidden Terminals are present in the system, all the enhancements obtained with the proposed PS-DCC mechanism would in any case be lower bounded by the Standard 802.11 DCF results. Simulations performed with a deterministic payload distribution have shown a reduction of the energy parameters of 8-10% with long messages and no significant modifications with short messages. All the investigated parameters are presented with their confidence intervals, at a confidence level of 95%. To derive these statistics we adopted the independent replications technique.
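As a reminder of how the independent replications technique yields the reported 95% confidence intervals, the sketch below computes a Student-t interval from per-replication means; the replication values shown are placeholders, not simulation output.

    # Sketch: 95% confidence interval from independent replications.
    import math

    def confidence_interval_95(replication_means):
        """Student-t confidence interval on the mean of independent replications."""
        n = len(replication_means)
        mean = sum(replication_means) / n
        var = sum((x - mean) ** 2 for x in replication_means) / (n - 1)
        t_quantile = 2.262      # t quantile for 95% confidence with n = 10 (9 d.o.f.)
        half_width = t_quantile * math.sqrt(var / n)
        return mean - half_width, mean + half_width

    # Example with placeholder per-replication averages of the energy required.
    print(confidence_interval_95([5210, 4980, 5105, 5330, 5045,
                                  5190, 5075, 5260, 4930, 5150]))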
6.1 Energy required
Figures 5-8 show the average energy required for a successful frame transmission, in a system characterized by the payload parameter values 2.5 and 100 slots and by two (extreme) values of the network interface power consumption ratio PTX/PRX (equal to 2 in Figures 5-6, and equal to 100 in Figures 7-8). The energy required has been investigated as a function of the variable number of active stations M, i.e., of the congestion level. The analytically estimated optimal values for the energy consumption are also reported. These optimal values are compared with the steady-state values obtained by the simulations of the Standard 802.11 DCF access scheme, and by the simulations of the same access scheme enhanced with the PS-DCC mechanism.
Figure 5: Energy required vs. number of active stations (OPT, PS-DCC and STND): payload=2.5, PTX/PRX=2.
Figure 6: Energy required vs. number of active stations (OPT, PS-DCC and STND): payload=100, PTX/PRX=2.
Results show that: i) the power consumption in the Standard 802.11 DCF access scheme is negatively affected by the congestion level, and ii) the PS-DCC mechanism counterbalances the congestion growth by maintaining the optimality in the power consumption. The energy saving achieved by PS-DCC is significant and increases with the average frame size. No overheads are introduced from the power consumption viewpoint in the considered systems, and the PS-DCC results closely approximate the analytically defined optimum values (the corresponding lines reported in the figures can hardly be distinguished).
Figure 7: Energy required vs. number of active stations (OPT, PS-DCC and STND): payload=2.5, PTX/PRX=100.
Figure 8: Energy required vs. number of active stations (OPT, PS-DCC and STND): payload=100, PTX/PRX=100.
6.2 99-th percentile of Energy required
Figures 9-10 show the 99-th percentile of the energy considered above. This index characterizes the power consumption obtained in the "worst case" for a frame transmission in the considered system. The PS-DCC mechanism allows a significant reduction of this performance index with respect to the Standard. This also indicates the enhanced queue-emptying behavior obtained in the considered system, which is related to the priority given to old transmission requirements and connected to the Num_Att parameter.
Figure 9: 99-th percentile of the energy required vs. number of active stations (PS-DCC and STND, payload=2.5 and 100): PTX/PRX=2.
Figure 10: 99-th percentile of the energy required vs. number of active stations (PS-DCC and STND, payload=2.5 and 100): PTX/PRX=100.
6.3 Channel Utilization level
We are interested in investigating the effect of the PS-DCC mechanism on the channel utilization of the considered system. Specifically, we want to compare the channel utilization level obtained when PS-DCC is adopted in the system with the optimal channel utilization level defined in [5]. Figure 11 shows the utilization level of the Standard 802.11 DCF with respect to the optimal values analytically calculated in [5]. It is immediate to verify that the 802.11 Standard DCF access scheme suffers in the high contention situations that can arise in the system, leading to a low channel utilization with respect to the channel capacity. The use of the PS-DCC mechanism shows its effectiveness: the utilization curves obtained with PS-DCC for the different values of PTX/PRX are almost indistinguishable, so only one is reported. It is clear that the proposed mechanism leads to a quasi-optimal behavior of the system also from the utilization viewpoint. This makes the adopted power consumption mechanism a conservative method, leading to quasi-optimality both for the power saving and for the channel utilization. Only a 2% utilization reduction (overhead) is introduced in the system with few stations and short frames, with respect to the actual Standard, as can be seen in the direct comparison.
Figure 11: Channel utilization vs. number of active stations: optimal, Standard 802.11 and PS-DCC values.
6.4 The Average Access Delay
Regarding the user-level performance indexes, we report the average MAC access delay, i.e., the time elapsed between the first transmission scheduling of a given frame and the completion of its successful transmission. Figure 12 reports the simulation data for the mean access delay as a function of the variable contention level (M) and of the average payload size of transmitted frames. The results show that the new PS-DCC mechanism leads to a reduction of the mean access delay with respect to the actual Standard access scheme. No overheads are introduced, confirming the adaptive behavior of the proposed mechanism. The PS-DCC mechanism, with the implemented priority effect connected to the Num_Att value, makes stations with high Num_Att values transmit with a high probability of success, enhancing the fairness and the queue-emptying behavior of the system. Moreover, PS-DCC leads to a linear growth of the access delay with respect to the number of active stations in the system, which outperforms the Standard access scheme.
Figure 12: Average MAC access delay (slot units) vs. number of active stations: PS-DCC vs. Standard values.
6.5 The dynamic Power Consumption
It is interesting to observe how the system behaves in transient scenarios. Figure 13 reports the simulation traces of the average energy required for a frame transmission with and without the PS-DCC mechanism. Simulation data are obtained by considering a system in steady state with 100 stations initially active. After some time, a burst of 100 additional stations activates, causing the contention level to grow, and then disappears. The additional burst occurs twice during the simulation.
Figure 13: Dynamic energy requirements vs. contention level: average energy consumed over time (1024-slot units) for PS-DCC and STND with 100+100 stations, together with the optimal energy required with 100 and with 200 stations.
The figure shows how the systems react to the contention variations from the power consumption viewpoint. The Standard system enhanced with the PS-DCC mechanism obtains a lower energy requirement for the frame transmissions than the Standard system, and the PS-DCC mechanism adapts quickly to new contention scenarios. Moreover, as expected from the previous results, the PS-DCC mechanism maintains an average energy requirement which is close to the optimal value in all the contention conditions analyzed.
7 Conclusions and future research
We propose and evaluate a Power Save, Distributed Contention Control (PS-DCC) mechanism which is effective in implementing a distributed and adaptive contention control that guarantees the optimal power consumption of a random-access MAC protocol for a wireless network. Specifically, we considered the IEEE 802.11 DCF access scheme [9] as a testbed for our analysis. The PS-DCC mechanism requires no additional hardware with respect to the Standard but, given the limited flexibility of current Standard implementations, it can be deployed only as a new protocol, in a new release of Standard devices or in devices based on software-radio technologies. The proposed mechanism is completely distributed, and it can be tuned for power-consumption optimality on the basis of an analytical model. The mechanism dynamically adapts the system to the optimal contention level by means of the Slot Utilization and average payload size estimates; each estimate is obtainable at low cost and with no overheads. The system maintains a stable behavior, both during steady-state congested phases and during bursty arrivals of transmission requirements. The priority effect implemented by the PS-DCC mechanism, connected to the Num_Att parameter, allows a fair reduction of contention and enhances the queue-emptying behavior of the system. Simulations show that the proposed mechanism achieves quasi-optimal channel utilization and power consumption in the considered systems, independently of the contention level. Other system parameters, such as the mean access delay, the 99-th percentile of the power consumed, the risk of starvation and the transient behavior, have been analyzed and are enhanced by our mechanism, without introducing overheads. Future research involves the study of further optimizations connected to different classes of frame sizes. It is also possible to obtain an external priority mechanism [2], which would allow each station to self-determine a greater priority in the right of access to the channel, without any kind of negotiation; the external priority mechanism could be useful if adopted with RTS/CTS messages.
Acknowledgements
The authors would like to thank the anonymous referees whose comments contributed to improving the quality of the paper.
References
"Toward Power-Sensitive Network Architectures in Wireless Communications: Concepts, Issues and Design Aspects"
"Design and Performance Evaluation of a Distributed Contention Control (DCC) Mechanism for IEEE 802.11 Wireless Local Area Networks"
"A Distributed Mechanism for Power Saving in IEEE 802.11 Wireless LANs"
"Design and Performance Evaluation of an Asymptotically Optimal Backoff Algorithm for IEEE 802.11 Wireless LANs"
"IEEE 802.11 Wireless LAN: Capacity Analysis and Protocol Enhancement"
"Stability of Binary Exponential Backoff "
"Analysis of Backoff Protocols for Multiple Access Chan- nels"
"Stochastic models in operations research"
Wireless MAC and Physical Layer Working Group
"Wireless Computing"
"Software Strategies for Portable Computer Energy Management"
"A Short Look on Power Saving Mechanisms in the wireless LAN Standard Draft IEEE 802.11"
"Mobile Power Management for Wireless Communication Networks"
"A Portable Multimedia Terminal"
"Measuring and Reducing Energy Consumption of Network Interfaces in Hand-Held Devices"
"Power-Saving Mechanisms in Emerging Standards for Wireless LANs: The MAC Level Perspective"
"Energy Management in Wireless Communication"
383762 | Transmission-efficient routing in wireless networks using link-state information. | The efficiency with which the routing protocol of a multihop packet-radio network uses transmission bandwidth is critical to the ability of the network nodes to conserve energy. We present and verify the source-tree adaptive routing (STAR) protocol, which we show through simulation experiments to be far more efficient than both table-driven and on-demand routing protocols proposed for wireless networks in the recent past. A router in STAR communicates to its neighbors the parameters of its source routing tree, which consists of each link that the router needs to reach every destination. To conserve transmission bandwidth and energy, a router transmits changes to its source routing tree only when the router detects new destinations, the possibility of looping, ot the possibility of node failures or network partitions. Simulation results show that STAR is an order of magnitude more efficient than any topology-broadcast protocol proposed to date and depending on the scenario is up to six times more efficient than the Dynamic Source Routing (DSR) protocol, which has been shown to be one of the best performing on-demand routing protocols. | Introduction
Multi-hop packet-radio networks, or ad-hoc networks, consist of mobile hosts interconnected by routers
that can also move. The deployment of such routers is ad-hoc and the topology of the network is very
dynamic, because of host and router mobility, signal loss and interference, and power outages. In addition,
the channel bandwidth available in ad-hoc networks is relatively limited compared to wired networks,
and untethered routers may need to operate with battery-life constraints.
Routing algorithms for ad-hoc networks can be categorized according to the way in which routers
obtain routing information, and according to the type of information they use to compute preferred
paths. In terms of the way in which routers obtain information, routing protocols have been classified
as table-driven and on-demand. In terms of the type of information used by routing protocols, routing
protocols can be classified into link-state protocols and distance-vector protocols. Routers running a
link-state protocol use topology information to make routing decisions; routers running a distance-vector
protocol use distances and, in some cases, path information, to destinations to make routing decisions.
In an on-demand routing protocol, routers maintain path information for only those destinations that
they need to contact as a source or relay of information. The basic approach consists of allowing a
router that does not know how to reach a destination to send a flood-search message to obtain the path
information it needs. The first routing protocol of this type was proposed to establish virtual circuits in
the MSE network [17], and there are several more recent examples of this approach (e.g., AODV [21],
ABR [28], DSR [13], TORA [19], SSA [6], ZRP [11]). The Dynamic Source Tree (DSR) protocol has
been shown to outperform many other on-demand routing protocols [4]. All of the on-demand routing
protocols reported to date are based on distances to destinations, and there have been no on-demand
link-state proposals to date. On-demand routing protocols differ on the specific mechanisms used to
disseminate flood-search packets and their responses, cache the information heard from other nodes'
searches, determine the cost of a link, and determine the existence of a neighbor.
In a table-driven algorithm, each router maintains path information for each known destination in the
network and updates its routing-table entries as needed. Examples of table-driven algorithms based on
distance vectors are the routing protocol of the DARPA packet-radio network [14], DSDV [20], WRP [25],
WIRP [9], and least-resistance routing protocols [22]. Prior table-driven approaches to link-state routing
in packet-radio networks are based on topology broadcast. However, disseminating complete link-state
information to all routers incurs excessive communication overhead in an ad-hoc network because of the
dynamics of the network and the small bandwidth available. Accordingly, all link-state routing approaches
for packet-radio networks have been based on hierarchical routing schemes [23, 24, 27].
A key issue in deciding which type of routing protocol is best for ad-hoc networks is the communication
overhead incurred by the protocol. Because data and control traffic share the same communication
bandwidth in the network, and because untethered routers use the same energy source to transmit
data and control packets, computing minimum-cost (e.g., least interference) paths to all destinations at
the expense of considerable routing-update traffic is not practical in ad-hoc networks with untethered
nodes and dynamic topologies. The routing protocol used in an ad-hoc network should incur as little
communication overhead as possible to preserve battery life at untethered routers and to leave as much
bandwidth as possible to data traffic.
To date, the debate on whether a table-driven or an on-demand routing approach is best for wireless
networks has assumed that table-driven routing necessarily has to provide optimum (e.g., shortest-path)
routing, when in fact on-demand routing protocols cannot ensure optimum paths. In this paper, we
introduce and analyze the source-tree adaptive routing (STAR) protocol, which is the first example of a
table-driven routing protocol that is more efficient than any on-demand routing protocol by exploiting
link-state information and allowing paths taken to destinations to deviate from the optimum in order to
save bandwidth.
The intuition behind the approach used in STAR can be stated as follows: In an on-demand routing
protocol, every source polls all the destinations to find paths to a given destination; conversely, in a
table-driven routing protocol, every destination polls all the sources in the sense that they obtain paths
to a destination resulting from updates originated by the destination. Therefore, given that some form
of flooding occurs in either approach, it should be possible to obtain a table-driven protocol that needs
to poll as infrequently as on-demand routing protocols do to limit the overhead of the routing protocol.
In STAR, a router sends updates to its neighbors regarding the links in its preferred paths to destina-
tions. The links along the preferred paths from a source to each desired destination constitute a source
tree that implicitly specifies the complete paths from the source to each destination. Each router computes
its source tree based on information about adjacent links and the source trees reported by its neighbors,
and reports changes to its source tree to all its neighbors incrementally or atomically. The aggregation of
adjacent links and source trees reported by neighbors constitutes the partial topology known by a router.
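As an illustration of this aggregation, the sketch below (a simplified model, not the protocol specification) merges a router's adjacent links with the source trees reported by its neighbors into a partial topology graph; the data representation and function name are assumptions made for the example.

    # Sketch: building the partial topology graph of a STAR router by
    # aggregating its adjacent links and the source trees of its neighbors.
    # A source tree is modeled as a map of directed links (u, v) -> cost.

    def partial_topology(adjacent_links, neighbor_source_trees):
        """adjacent_links: {(u, v): cost}; neighbor_source_trees: {nbr: {(u, v): cost}}."""
        topo = dict(adjacent_links)
        for tree in neighbor_source_trees.values():
            for link, cost in tree.items():
                # Keep the link; conflicting reports are resolved later by
                # the sequence numbers carried in link-state updates.
                topo.setdefault(link, cost)
        return topo

    # Example for router 'a' with neighbors 'b' and 'c'.
    adjacent = {('a', 'b'): 1, ('a', 'c'): 1}
    reported = {'b': {('b', 'd'): 1, ('d', 'e'): 1}, 'c': {('c', 'f'): 2}}
    print(partial_topology(adjacent, reported))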
Unlike any of the hierarchical link-state routing schemes proposed to date for packet-radio networks [27],
STAR does not require backbones, the dissemination of complete cluster topology within a cluster, or the
dissemination of the complete inter-cluster connectivity among clusters. Furthermore, STAR can be used
with distributed hierarchical routing schemes proposed in the past for both distance-vector or link-state
routing [16, 27, 26, 2].
Prior proposals for link-state routing using partial link-state data without clusters [8, 10] require routers
to explicitly inform their neighbors which links they use and which links they stop using. In contrast,
because STAR sends only changes to the structure of source trees, and because each destination has a
single predecessor in a source tree, a router needs to send only updates for those links that are part of
the tree and a single update entry for the root of any subtree of the source tree that becomes unreachable
due to failures. Routers receiving a STAR update can infer correctly all the links that the sender has
stopped using, without the need for explicit delete updates.
Section 2 describes two different approaches that can be used to update routing information in wireless
networks: the optimum routing approach (ORA) and the least-overhead routing approach (LORA), and
elicit the reasons why STAR is the first table-driven routing protocol that can adopt LORA. Section 3
describes STAR and how it supports ORA and LORA. Section 4 demonstrates that routers executing
STAR stop disseminating link-state updates and obtain shortest paths to destinations within a finite
time after the cost of one or more links changes. Section 5 compares STAR's performance against
the performance of other table-driven and on-demand routing protocols using simulation experiments.
These experiments use the same methodology described by Broch et al. to compare on-demand routing
protocols [4], and our simulation code is the same code that runs in embedded wireless routers; the code we
used for DSR was ported from the ns2 code for DSR available from CMU [30]. The simulation results show
that STAR is four times more bandwidth-efficient than the best-performing link-state routing protocol
previously proposed, an order of magnitude more bandwidth-efficient than topology broadcasting, and
far more bandwidth-efficient than DSR, which has been shown to incur the least overhead among several
on-demand routing protocols [4]. To our knowledge, this is the first time that any table-driven routing
protocol has been shown to be more efficient than on-demand routing protocols in wireless networks.
2 Updating Routes in Wireless Networks
We can distinguish between two main approaches to updating routing information in the routing protocols
that have been designed for wireless networks: the optimum routing approach (ORA) and the least-
overhead routing approach (LORA). With ORA, the routing protocol attempts to update routing tables
as quickly as possible to provide paths that are optimum with respect to a defined metric. In contrast,
with LORA, the routing protocol attempts to provide viable paths according to a given performance
metric, which need not be optimum, to incur the least amount of control traffic.
For the case of ORA, the routing protocol can provide paths that are optimum with respect to different
types of service (TOS), such as minimum delay, maximum bandwidth, least amount of interference,
maximum available battery life, or combinations of metrics. Multiple TOS can be supported in a routing
protocol; however, this paper focuses on a single TOS to address the performance of routing protocols
providing ORA, and uses shortest-path routing as the single TOS supported for ORA. We assume that
a single metric, which can be a combination of parameters, is used to assign costs to links.
On-demand routing protocols such as DSR follow LORA, in that these protocols attempt to minimize
control overhead by: (a) maintaining path information for only those destinations with which the router
needs to communicate, and (b) using the paths found after a flood search as long as the paths are valid,
even if the paths are not optimum. On-demand routing protocols can be applied to support multiple
TOS; an obvious approach is to obtain paths of different TOS using separate flood searches. However, we
assume that a single TOS is used in the network. ORA is not an attractive or even feasible approach in
on-demand routing protocols, because flooding the network frequently while trying to optimize existing
paths with respect to a cost metric of choice consumes the available bandwidth and can make the paths
worse while trying to optimize them.
We can view the flood search messages used in on-demand routing protocols as a form of polling of
destinations by the sources. In contrast, in a table-driven routing protocol, it is the destinations who poll
the sources, meaning that the sources obtain their paths to destinations as a result of update messages
that first originate at the destinations. What is apparent is that some form of information flooding occurs
in both approaches.
Interestingly, all the table-driven routing protocols reported to date for ad-hoc networks adhere to
ORA, and admittedly have been adaptations of routing protocols developed for wired networks. A
consequence of adopting ORA in table-driven routing within a wireless network is that, if the topology of
the network changes very frequently, the rate of update messages increases dramatically, consuming the
bandwidth needed for user data. The two methods used to reduce the update rate in table-driven routing
protocols are clustering and sending updates periodically. Clustering is attractive to reduce overhead due
to network size; however, if the affiliations of nodes with clusters change too often, then clustering itself
introduces unwanted overhead. Sending periodic updates after long timeouts reduces overhead, and it
is a technique that has been used since the DARPA packet-radio network was designed [14]; however,
control traffic still has to flow periodically to update routing tables.
A nice feature of such routing protocols as DSR [13] and WIRP [9] is that these protocols remain quiet
when no new update information has to be exchanged; they have no need for periodic updates. Both
protocols take advantage of promiscuous listening of any packets sent by router's neighbors to determine
the neighborhood of the router. A key difference between DSR and WIRP is that DSR follows LORA
while WIRP follows ORA, which means that WIRP may incur unnecessary overhead when the network
topology is unstable.
Given that both on-demand and table-driven routing protocols incur flooding of information in one
way or another, a table-driven routing protocol could be designed that incurs similar or less overhead
than on-demand routing protocols by limiting the polling done by the destinations to be the same or less
than the polling done by the sources in on-demand routing protocols. However, there has been no prior
description of a table-driven routing protocol that can truly adhere to LORA, i.e., one that has no need
for periodic updates, uses no clustering, and remains quiet as long as the paths available at the routers are
valid, even if they are not optimum. The reason why no prior table-driven routing protocols have been
reported based on LORA is that, with the exception of WIRP and WRP, prior protocols have used either
distances to destinations, topology maps, or subsets of the topology, to obtain paths to destinations, and
none of these types of information permits a router to discern whether the paths it uses are in conflict
with the paths used by its neighbors. Accordingly, routers must send updates after they change their
routing tables in order to avoid long-term routing loops, and the best that can be done is to reduce the
control traffic by sending such updates periodically. In the next section, we describe STAR, which is the
first table-driven routing protocol that implements LORA.
3 STAR Description
3.1 Network Model
In STAR, routers maintain a partial topology map of their network. In this paper we focus on flat
topologies only, i.e., there is no aggregation of topology information into areas or clusters.
To describe STAR, the topology of a network is modeled as a directed graph G = (V, E), where V is the set of nodes and E is the set of edges connecting the nodes. Each node has a unique identifier and
represents a router with input and output queues of unlimited capacity updated according to a FIFO
policy. In a wireless network, a node can have connectivity with multiple nodes in a single physical radio
link. For the purpose of routing-table updating, a node A can consider another node B to be adjacent
(we call such a node a "neighbor") if there is link-level connectivity between A and B and A receives
update messages from B reliably. Accordingly, we map a physical broadcast link connecting multiple
nodes into multiple point-to-point bidirectional links defined for these nodes. A functional bidirectional
link between two nodes is represented by a pair of edges, one in each direction and with a cost associated
that can vary in time but is always positive.
An underlying protocol, which we call the neighbor protocol, assures that a router detects within a
finite time the existence of a new neighbor and the loss of connectivity with a neighbor. All messages,
changes in the cost of a link, link failures, and new-neighbor notifications are processed one at a time
within a finite time and in the order in which they are detected. Routers are assumed to operate correctly,
and information is assumed to be stored without errors.
3.2 Overview
In STAR, each router reports to its neighbors the characteristics of every link it uses to reach a destination.
The set of links used by a router in its preferred path to destinations is called the source tree of the router.
A router knows its adjacent links and the source trees reported by its neighbors; the aggregation of a
router's adjacent links and the source trees reported by its neighbors constitute a partial topology graph.
The links in the source tree and topology graph must be adjacent links or links reported by at least one
neighbor. The router uses the topology graph to generate its own source tree. Each router derives a
routing table specifying the successor to each destination by running a local route-selection algorithm on
its source tree.
Under LORA, a router running STAR sends updates on its source tree to its neighbors only when it
loses all paths to one or more destinations, when it detects a new destination, or when it determines
that local changes to its source tree can potentially create long term routing loops. Because each router
communicates its source tree to its neighbors, the deletion of a link no longer used to reach a destination
is implicit with the addition of the new link used to reach the destination and need not be sent explicitly
as an update; a router makes explicit reference to a failed link only when the deletion of a link causes the
router to have no paths to one or more destinations, in which case the router cannot provide new links
to make the deletion of the failed link implicit.
The basic update unit used in STAR to communicate changes to source trees is the link-state update
(LSU). An LSU reports the characteristics of a link; an update message contains one or more LSUs.
For a link between router u and router or destination v, router u is called the head node of the link in
the direction from u to v. The head node of a link is the only router that can report changes in the
parameters of that link. LSUs are validated using sequence numbers, and each router erases a link from
its topology graph if the link is not present in the source trees of any of its neighbors. The head of a link
does not periodically send LSUs for the link, because link-state information never ages out.
3.3 Information Stored and Exchanged
We assume in the rest of the paper that a single parameter is used to characterize a link in one of its
directions, which we will call the cost of the directed link. Furthermore, although any type of local route
selection algorithm can be used in STAR, we describe STAR assuming that Dijkstra's shortest-path first
(SPF) algorithm is used at each router to compute preferred paths.
An LSU for a link (u; v) in an update message is a tuple (u; v; l; sn) reporting the characteristics of the
link, where l represents the cost of the link and sn is the sequence number assigned to the LSU.
A router i maintains a topology graph TG^i, a routing table, the set of neighbors N^i, the source trees ST^i_x reported by each neighbor x ∈ N^i, and the topology graphs TG^i_x reported by each neighbor x ∈ N^i. The record entry for a link (u, v) in the topology graph of router i is denoted TG^i(u, v) and is defined by the tuple (u, v, l, sn, del); an attribute p in the tuple is denoted by TG^i(u, v).p. The same notation applies to a link (u, v) in ST^i, ST^i_x, and TG^i_x. TG^i(u, v).del is set to TRUE if the link is not in the source tree of any neighbor.
A vertex v in TG^i is denoted TG^i(v). It contains a tuple (d, pred, suc, d', suc', nbr) whose values are used in the computation of the source tree. TG^i(v).d reports the distance of the path from i to v, TG^i(v).pred is v's predecessor in the path from i to v, TG^i(v).suc is the next hop along the path towards v, suc' holds the address of the previously chosen successor towards v, d' corresponds to the previous distance to v reported by suc', and nbr is a flag used to determine if an update message must be generated when the distance reported by the new successor towards v increases. The same notation applies to a vertex v in ST^i, ST^i_x, and TG^i_x.
The source tree ST i is a subset of TG i . The routing table contains record entries for destinations
in ST i , each entry consists of the destination address, the cost of the path to the destination, and the
address of the next-hop towards the destination.
The topology graph TG^i_x contains the links in ST^i_x and the links reported by neighbor x in a message being processed by router i; after the message has been processed, TG^i_x reduces to ST^i_x. A router i running LORA also maintains the last source tree it reported to its neighbors.
The cost of a failed link is considered to be infinity. The way in which costs are assigned to links is beyond the scope of this specification. As an example, the cost of a link could simply be the number of hops, or the latency over the link plus some constant bias.
We refer to an LSU that has a cost of infinity as a RESET.
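A compact way to picture the state kept by a STAR router is sketched below; the field and class names are our own and the structure is only a mnemonic for the notation above (TG^i, ST^i, ST^i_x, N^i), not an implementation of the protocol.

    # Sketch of the per-router state described in Section 3.3.
    from dataclasses import dataclass, field
    from typing import Dict, Set, Tuple

    NodeId = str
    Link = Tuple[NodeId, NodeId]            # (head u, tail v)

    @dataclass
    class LinkEntry:                        # entry TG(u, v) = (u, v, l, sn, del)
        cost: float                         # l
        seq: int                            # sn, assigned by the head node of the link
        deleted: bool = False               # del: no neighbor uses the link anymore

    @dataclass
    class RouterState:
        node: NodeId
        seq_counter: int = 0                # SN^i for all links headed at this router
        neighbors: Set[NodeId] = field(default_factory=set)                    # N^i
        topology: Dict[Link, LinkEntry] = field(default_factory=dict)          # TG^i
        source_tree: Dict[Link, LinkEntry] = field(default_factory=dict)       # ST^i
        nbr_trees: Dict[NodeId, Dict[Link, LinkEntry]] = field(default_factory=dict)  # ST^i_x
        last_reported_tree: Dict[Link, LinkEntry] = field(default_factory=dict)  # used by LORA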
3.4 Validating Updates
Because of delays in the routers and links of an internetwork, update messages sent by a router may
propagate at different speeds along different paths. Therefore, a given router may receive an LSU from
a neighbor with stale link-state information, and a distributed termination-detection mechanism is necessary
for a router to ascertain when a given LSU is valid and avoid the possibility of LSUs circulating
forever. STAR uses sequence numbers to validate LSUs. A sequence number associated with a link consists
of a counter that can be incremented only by the head node of the link. For convenience, a router i
needs to keep only a counter SN i
for all the links for which it is the head node, which simply means that
the sequence number a router gives to a link for which it is the head node can be incremented by more
than one each time the link parameters change value.
A router receiving an LSU accepts the LSU as valid if the received LSU has a larger sequence number
than the sequence number of the LSU stored from the same source, or if there is no entry for the link in
the topology graph and the LSU is not reporting an infinite cost. Link-state information for failed links
are the only LSUs erased from the topology graph due to aging (which is in the order of an hour after
having processed the LSU). LSUs for operational links are erased from the topology graph when the links
are erased from the source tree of all the neighbors.
We note that, because LSUs for operational links never age out, there is no need for the head node
of a link to send periodic LSUs to update the sequence number of the link. This is very important,
because it means that STAR does not need periodic update messages to validate link-state information
like OSPF [18] and every single routing protocol based on sequence numbers or time stamps does!
To simplify our description, the specification in the rest of this paper describes STAR as if unbounded
counters were available to keep track of sequence numbers.
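The acceptance rule just described can be summarized in a few lines of code; this is only a sketch of the validation logic (assumed helper names, unbounded counters as in the specification, reusing the RouterState sketch above), not the full Update procedure.

    # Sketch: validation of a received LSU (u, v, l, sn) against TG^i.
    def lsu_is_valid(topology, lsu):
        (u, v), cost, sn = lsu              # lsu modeled as ((u, v), l, sn)
        entry = topology.get((u, v))
        if entry is None:
            # Unknown link: accept only if it is not a RESET (infinite cost).
            return cost != float('inf')
        # Known link: accept only newer information from the head node u.
        return sn > entry.seq

    def bump_sequence(state):
        """Head-node rule: one counter SN^i covers all links headed at router i."""
        state.seq_counter += 1
        return state.seq_counter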
3.5 Exchanging Update Messages
How update messages are exchanged depends on the routing approach used (ORA or LORA) and the
services provided by the link layer. The rest of this section describes how LORA and ORA can be
supported in STAR and describes the impact of the link layer on the way in which update messages are
exchanged.
3.5.1 Supporting LORA and ORA in STAR
In an on-demand routing protocol, a router can keep using a path found as long as the path leads to
the destination, even if the path does not have optimum cost. A similar approach can be used in STAR,
because each router has a complete path to every destination as part of its source tree. To support LORA,
router i running STAR should send update messages according to the following three rules, which inform
routers of unreachable destinations, new destinations, and update topology information to prevent permanent
routing loops. Router i implements these rules by comparing its source tree against the source
trees it has received from its neighbors.
LORA-1: Router i finds a new destination, or any of its neighbors reports a new destination.
Whenever a router hears from a new neighbor that is also a new destination, it sends an update
message that includes the new LSUs in its source tree. Obviously, when a router is first initialized or
after a reboot, the router itself is a new destination and should send an update message to its neighbors.
Link-level support should be used for the router to know its neighbors within a short time, and then report
its links to those neighbors with LSUs sent in an update message. Else, a simple way to implement an
initialization action consists of requiring the router to listen for some time for neighbor traffic, so that it
can detect the existence of links to neighbors.
LORA-2: At least one destination becomes unreachable to router i or any of its neighbors.
When a router processes an input event (e.g., a link fails, an update message is received) that causes
all its paths through all its neighbors to one or more destination to be severed, the router sends an
update message that includes an LSU specifying an infinite cost for the link connecting to the head of
each subtree of the source tree that becomes unreachable. The update message does not have to include
an LSU for each node in an unreachable subtree, because a neighbor receiving the update message has
the sending node's source tree and can therefore infer that all nodes below the root of the subtree are
also unreachable, unless LSUs are sent for new links used to reach some of the nodes in the subtree.
LORA-3: This rule has three parts:
1. A path implied in the source tree of router i leads to a loop.
2. The new successor chosen to a given destination has an address larger than the address of
router i.
3. The reported distance from the new chosen successor n to a destination j is longer than the reported distance from the previous successor to the same destination. However, if the head node of the new link to j is a neighbor of i, no update message is needed regarding j or any destination whose path from i involves j.
Each time a router processes an update message from a neighbor, it updates that neighbor's source
tree and traverses that tree to determine for which destinations its neighbor uses the router as a relay in
its preferred paths. The router then determines if it is using the same neighbor as a relay for any of the
same destinations. A routing loop is detected if the router and neighbor use each other as relay to any
destination, in which case the loop must be broken and the router must send an update message with
the corresponding changes.
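The mutual-relay check described in this paragraph can be expressed compactly, as sketched below under a simplified source-tree representation (a set of directed links); the helper names are assumptions for the example, and the sketch assumes well-formed trees rooted at their owners.

    # Sketch: detect a potential routing loop between router i and neighbor k.
    def next_hops(source_tree_links, root):
        """Map destination -> first hop on the tree path from root.
        Assumes a well-formed source tree rooted at 'root'."""
        parent = {v: u for (u, v) in source_tree_links}   # one predecessor per node
        hops = {}
        for dest in parent:
            node = dest
            while parent[node] != root:
                node = parent[node]
            hops[dest] = node
        return hops

    def mutual_relay(my_tree, nbr_tree, me, nbr):
        mine, theirs = next_hops(my_tree, me), next_hops(nbr_tree, nbr)
        # A loop threatens if, for some destination, i relays through k while
        # k relays through i; LORA-3 then forces an update message.
        return any(mine.get(d) == nbr and theirs.get(d) == me for d in mine)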
To explain the need for the second part of LORA-3, we observe that, in any routing loop among routers
with unique addresses, one of the routers must have the smallest address in the loop; therefore, if a router
is forced to send an update message when it chooses a successor whose address is larger than its own,
then it is not possible for all routers in a routing loop to remain quiet after choosing one another, because
at least one of them is forced to send an update message, which causes the loop to break when routers
update their source trees.
The last part of LORA-3 is needed when link costs can assume different values in different directions,
in which case the second part of LORA-3 may not suffice to break loops because the node with the
smallest address in the loop may not have to change successors when the loop is formed. The following
example illustrates this scenario. Consider the six-node wireless network shown in Figure 1 and assume
that the third part of LORA-3 is not in effect at the routers running STAR. In this example, nodes are
given identifiers that are lexicographically ordered, i.e., a is the smallest identifier and f is the largest identifier in the graph.
Figure 1: An example of a six-node wireless network with routers running STAR without the third part of LORA-3 being in effect: (a) the topology; (b)-(f) the source trees discussed in the text.
All links and nodes are assumed to have the same propagation delays, and all the
links but link (b; c) have unit cost. Figures 1(b) through 1(d) show the source trees according to STAR
at the routers indicated with filled circles for the network topology depicted in Figure 1(a). Arrowheads
on solid lines indicate the direction of the links stored in the router's source tree. Figure 1(e) shows c's
new source tree after processing the failure of link (c; d); we note that c does not generate an update
message, because c > b by assumption. Suppose link (b, e) fails immediately after the failure of (c, d):
node b computes its new source tree shown in Figure 1(f) without reporting changes to it because a is its
new successor to destinations d, e, and f, and a < b. A permanent loop forms among nodes a, b, and c.
Figure 2 depicts the sequence of events triggered by the execution of the third part of LORA-3 in the same example introduced in Figure 1, after the failures of links (c, d) and (b, e). The figure shows
the LSUs generated by the node with filled circle transmitted in an update message to the neighbors,
and shows such LSUs in parentheses. The third element in an LSU corresponds to the cost of the link
(a RESET has cost infinity). Unlike in the previous example, node c transmits an update message
after processing the failure of link (c; d) because of the third part of LORA-3; the distance from the new
successor b to d and f is longer than from the previous successor d. When link (b; e) fails, node b realizes
that the destinations d, e, and f are unreachable and generates an update message reporting the failure
of the link connecting to the head of the subtree of the source tree that becomes unreachable. The update
message from b triggers the update messages that allow nodes a, b, and c to realize that there are no
paths to d, e, and f . A similar sequence of events takes place at the other side of the network partition.
Figure 2: Example of a six-node wireless network with routers running STAR with the third part of LORA-3 being in effect; panels (b)-(e) show the LSUs generated (in parentheses) after the failures of links (c, d) and (b, e).
Figure 3: The third part of LORA-3 does not always trigger the generation of an update message: (a) network topology, and (b) the new source tree of node c after processing the failure of link (c, b).
The example shown in Figure 3 illustrates the scenario in which a router that chooses a new successor
to a destination with a larger distance to it does not need to send an update message. For this example,
the source trees of nodes a, b, and c are depicted in Figures 2(c), 1(c), and 2(b), respectively. Figure 3(b)
shows the new source tree of node c after the failure of link (c; b). In this case, c does not need to send
an update message because the parent node of the subtree headed by b is a neighbor of c and therefore
no permanent loop can be formed.
To ensure that the above rules work with incremental updates specifying only changes to a source tree, a router must remember the source tree that was last notified to its neighbors. If any of LORA-1 to LORA-3 is satisfied, the router must do one of two things (see the sketch after this list):
• If the new source tree includes neighbors that were not present in the source tree that was last reported, then the router must send its entire source tree in its update, so that new neighbors learn about all the destinations the router knows.
• If the two source trees imply the same neighbors, the router sends only the updates needed to obtain the new tree from the old one.
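A sketch of that decision is given below, reusing the link-set representation of a source tree from the earlier examples; the helper names are assumptions, and an LSU is simplified here to a link and its cost.

    # Sketch: decide what to send when one of LORA-1..LORA-3 is satisfied.
    # Source trees are {(u, v): cost} maps; 'me' is the router's own identifier.
    def build_update(me, last_reported, new_tree):
        old_nbrs = {v for (u, v) in last_reported if u == me}
        new_nbrs = {v for (u, v) in new_tree if u == me}
        if not new_nbrs <= old_nbrs:
            # New neighbors appear in the tree: report the entire source tree.
            return [(link, cost) for link, cost in new_tree.items()]
        # Same neighbors: report only links that are new or whose cost changed.
        return [(link, cost) for link, cost in new_tree.items()
                if last_reported.get(link) != cost]

Deletions need not be sent explicitly: a receiver replaces its copy of the sender's tree, so a link that disappears from the tree is removed implicitly, as explained in Section 3.2.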
To ensure that STAR stops sending update messages, a simple rule can be used to determine which router must stop using its neighbor as a relay; such a rule can be, for example, "the router with the smaller address must change its path."
The above rules are sufficient to ensure that every router obtains loopless paths to all known destina-
tions, without the routers having to send updates periodically. In addition to the ability for a router to
detect loops in STAR, the two key features that enable STAR to adopt LORA are: (a) validating LSUs
without the need of periodic updates, and (b) the ability to either listen to neighbors' packets or use a
neighbor protocol at the link layer to determine who the neighbors of a router are.
If ORA is to be supported in STAR, the only rule needed for sending update messages consists of a
router sending an update message every time its source tree changes.
The rules for update-message exchange stated above assume that an update message is sent reliably
to all the neighbors of a router. As the performance analysis of Section 5 shows, this is a very realistic
assumption, because STAR working under LORA generates far fewer update messages than the topology
changes that occur in the network. However, if preserving bandwidth is of utmost importance and the
underlying link protocol is contention-based, additional provisions must be taken, which we describe next.
3.5.2 Impact of The Link Layer
If the link layer provides efficient reliable broadcast of network-level packets, then STAR can rely on
sending an update message only once to all neighbors, with the update message specifying only incremental
changes to the router's source tree. The link layer will retransmit the packet as needed to reach
all neighbors, so that it can guarantee that a neighbor receives the packet unless the link is broken.
A reliable broadcast service at the link layer can be implemented very efficiently if the MAC protocol
being used guarantees collision-free transmissions of broadcast packets. A typical example of MAC
protocols that can support collision-free broadcasts is TDMA, and there are several recent proposals that
need not rely on static assignments of resources (e.g., FPRP [29]).
Unfortunately, reliable broadcasting from a node to all its neighbors is not supported in the collision-avoidance
MAC protocols that have been proposed and implemented for ad-hoc networks [1, 7, 12, 15],
and IEEE 802.11 [12] appears to be the only commercial alternative at the MAC layer in ISM bands
today. Furthermore, any link-level or network-level strategy for reliable exchange of broadcast update
messages over a contention-based MAC protocol will require substantial retransmissions under high-load
conditions and rapid changes to the connectivity of nodes.
Therefore, if the underlying MAC protocol does not provide collision-free broadcasts over which efficient
reliable broadcasting can be built, then STAR, and any table-driven routing protocol for that matter, is
better off relying on the approach adopted in the past in the DARPA packet-radio network. For STAR
this means that a router broadcasts unreliably its update messages to its neighbors, and each update
message contains the entire source tree. For STAR to operate correctly with this approach under LORA,
routers must prevent the case in which permanent loops are created because an update message is not
received by a neighbor. A simple example is a two-node loop between two neighbor routers, A and B, in
which the neighbor with the smaller address A sends an update to its neighbor B specifying that A is
using B to get to at least one destination D, but the message does not reach B, which then starts using
A to reach D.
An additional simple rule to send an update message can be used to eliminate permanent looping due
to lost packets using unreliable broadcasting:
LORA-4: A data packet is received from a neighbor who, according to its source tree, is in the path
to the destination specified in the data packet. This rule is needed to eliminate permanent looping
under unreliable broadcasting.
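One reading of LORA-4, consistent with the two-node example above, is sketched below: on receiving a data packet from a neighbor for destination d, the router checks whether that neighbor is also the router's own next hop towards d; if so, the neighbor must have missed the router's last update, and the update is retransmitted. This interpretation and the function names are our assumptions for illustration.

    # Sketch of the LORA-4 trigger for unreliable broadcast links.
    def lora4_triggers_update(me, sender, dest, my_tree_links):
        """my_tree_links: links of the router's own source tree ST^i."""
        hops = next_hops(my_tree_links, me)    # reuse the helper from the earlier sketch
        # The packet came from the very neighbor this router would use to reach
        # 'dest': the sender must have missed our last update, so resend it.
        return hops.get(dest) == sender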
3.6 Details on The Processing of Input Events
Figures 4 and 5 specify the main procedures of STAR (for both LORA and ORA approaches) used to update the routing table and the link-state database at a router i. Procedure NodeUp is executed when
a router i starts up. The neighbor set of the router is empty initially, and the sequence number counter
is set to zero.
If the neighbor protocol reports a new link to a neighbor k (procedure NeighborUp), the router then runs
Update with the appropriate message as input; the LSU in the message gets a current sequence number.
The same approach is used for link failures (NeighborDown) and changes in link cost (LinkCostChange).
When a router establishes connectivity to a new neighbor, the router sends its complete source tree to the
neighbor (much like a distance vector protocol sends its complete routing table). The LSUs that must
be broadcast to all neighbors are inserted into MSG^i.
The procedure Update is executed when router i receives an update message from neighbor k or when the parameters of an outgoing link have changed. First, the topology graphs TG^i and TG^i_k are updated, then the source trees ST^i and ST^i_k are updated, which may cause the router to update its routing table and to send its own update message.
The state of a link in the topology graph TG i is updated with the new parameters for the link if the
link-state update in the received message is valid, i.e., if the LSU has a larger sequence number than the
sequence number of the link stored in TG i .
The parameters of a link in TG^i_k are always updated when processing an LSU sent by neighbor k, even if the link-state information is outdated, because they report changes to the source tree of the neighbor. A node in a source tree ST^i_k can have only one link incident to it. Hence, when i receives an LSU for link (u, v) from k, the current incident link (u', v), with u ≠ u', is deleted from ST^i_k.
The information of an LSU reporting the failure of a link is discarded if the link is not in the topology
graph of the router.
A shortest-path algorithm (SPF) based on Dijkstra's SPF (procedure BuildShortestPathTree) is run on the updated topology graph TG^i_k to construct a new source tree for neighbor k, and then run on the topology graph TG^i to construct the router's new source tree.
The incident link to a node v in router i's new source tree is different from the link in the current source tree ST^i only if the cost of the path to v has decreased or if the incident link in ST^i was deleted from the source trees of all neighbors.
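For completeness, the sketch below shows a plain Dijkstra computation over a partial topology graph given as {(u, v): cost}, returning the distances and the predecessor links that form the new source tree; it is a textbook version under the simplified data model of the earlier sketches, not the BuildShortestPathTree procedure itself.

    # Sketch: Dijkstra's SPF over the partial topology graph {(u, v): cost}.
    import heapq

    def build_source_tree(topology, root):
        adj = {}
        for (u, v), cost in topology.items():
            adj.setdefault(u, []).append((v, cost))
        dist, pred, heap = {root: 0.0}, {}, [(0.0, root)]
        while heap:
            d, u = heapq.heappop(heap)
            if d > dist.get(u, float('inf')):
                continue                        # stale heap entry
            for v, cost in adj.get(u, []):
                nd = d + cost
                if nd < dist.get(v, float('inf')):
                    dist[v], pred[v] = nd, u
                    heapq.heappush(heap, (nd, v))
        # The source tree is the set of predecessor links with their costs.
        return {(pred[v], v): topology[(pred[v], v)] for v in pred}, dist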
A new source tree newST for a neighbor k, including the router's new source tree, is then compared to the current source tree ST_i^k (procedure UpdateNeighborTree), and the links that are in ST_i^k but not in newST are deleted from TG_i^k. After deleting a link (u, v) from TG_i^k, the router sets (u, v).del to TRUE if the link is not present in the topology graphs TG_i^x of the other neighbors x.
If a destination v becomes unreachable, i.e., there is no path to v in the new source tree newST, then LSUs reporting a link cost of infinity are broadcast to the neighbors for each link in the topology graph TG_i that has v as the tail node of the link.
This specification assumes that the Link Layer provides reliable broadcast of network-level packets and
consequently update messages specify only incremental changes to the router's source tree instead of the
complete source tree.
The router's new source tree newST is compared to the last reported source tree (procedure ReportChanges), and an update message that will be broadcast to the neighbors is constructed from the differences of the two trees. An LSU is generated if the link is in the new source tree but not in the current source tree, or if the parameters of the link have changed. For a router running LORA, the source trees are only compared with each other if at least one of the three conditions (LORA-1, LORA-2, and LORA-3) described in Section 3.5.1 is met, i.e., M_i = TRUE.
If the router's new source tree was compared against the last reported source tree, then the router
removes from the topology graph all the links that are no longer used by any neighbor in their source
trees.
Finally, the current shortest-path tree ST i
is discarded and the new one becomes the current source
tree. The router's source tree is then used to compute the new routing table, using for example a
depth-first search in the shortest-path tree.
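As an illustration of that last step, the hedged Python sketch below derives a routing table (destination to next hop) by a depth-first traversal of the source tree; the data layout is an assumption made only for this example.

```python
def routing_table_from_source_tree(source_tree, root):
    """Illustrative sketch: source_tree maps node -> predecessor (None for root).
    Returns {destination: next_hop}, where next_hop is the neighbor of root
    on the tree path toward that destination."""
    children = {}
    for node, pred in source_tree.items():
        if pred is not None:
            children.setdefault(pred, []).append(node)

    table = {}
    # Depth-first walk: every node in a neighbor's subtree inherits that
    # neighbor as its next hop.
    for neighbor in children.get(root, []):
        stack = [neighbor]
        while stack:
            node = stack.pop()
            table[node] = neighbor
            stack.extend(children.get(node, []))
    return table

# Example tree rooted at "a": b and c are neighbors of a, d hangs off b.
tree = {"a": None, "b": "a", "c": "a", "d": "b"}
print(routing_table_from_source_tree(tree, "a"))  # {'b': 'b', 'd': 'b', 'c': 'c'}
```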
3.7 Example
The following example illustrates the working of STAR based on LORA. Consider the seven node wireless
network shown in Figure 6. All links and nodes are assumed to have the same propagation delays, and
all the links have unit cost. The figure shows only the LSUs with new information transmitted in update
messages to the neighbors, and shows such LSUs in parentheses located next to the node that generates
them. The third element in an LSU corresponds to the cost of the link (a RESET has cost infinity).
Figures
6(b) through 6(d) show the source trees according to STAR at the routers indicated with filled
circles for the network topology depicted in Figure 6(a). Arrowheads on solid lines indicate the direction
of the links stored in the router's source tree. When the link (f, g) fails (Figure 6(e)), the neighbor protocol at node f triggers the execution of procedure NeighborDown; the link (d, g) is inserted into f's source tree and no update message is generated, because f's new successor towards g has an address smaller than f and destination g is a neighbor of the new successor. Figure 6(f) shows the new source tree of node d after the failure of link (d, g); since d has an address smaller than the new successor towards g, it is forced to send an update message reporting the new link added to the source tree. Nodes c, e, and f do not generate any update message after processing d's message because there exists a path to all
destinations in the network and no routing loop was formed.
Figure 4: STAR Specification (pseudocode for the procedures NodeUp, NeighborUp, NeighborDown, LinkCostChange, Update, and the routines that update the topology graphs and broadcast message MSG_i).
Figure 5: STAR Specification, continued (pseudocode for BuildShortestPathTree, UpdateNeighborTree, ReportChanges, and the routines that generate LSUs for new links and delete or report failed links).
Figure 6: An example topology with links having unit cost. Solid lines represent the links that are part of the source tree of the node with the filled circle, and dashed lines represent links that are part of the network topology: (a) network topology, (b) node f's source tree, (c) node e's source tree, (d) node d's source tree, (e) node f does not generate any update message after adding link (d, g) to its source tree due to the failure of link (f, g), (f) node d generates the update message with LSUs for links (e, g) and (d, g) after detecting the failure of link (d, g).
This example illustrates how link failures may not cause the generation of update messages by nodes
that have the failed link in their source trees as long as the nodes have a path to all destinations.
4 Correctness
This section addresses STAR correctness for the case in which ORA is applied. The proof of correctness
under LORA is similar and is omitted for brevity. For simplicity, we assume that all links are point-to-
point and that shortest-path routing is implemented. Let t_0 be the time when the last of a finite number of link-cost changes occurs, after which no more such changes occur. The network G = (V, E) in which STAR is executed has a finite number of nodes (|V|) and links (|E|), and every message exchanged
between any two routers is received correctly within a finite time. According to STAR's operation, for
each direction of a link in G, there is a router that detects any change in the cost of the link within a
finite time.
The following theorems assume the use of arbitrarily large sequence numbers in order to avoid having
to include cases in the proofs for which links are deleted due to aging.
Theorem 1: The dissemination of LSUs in STAR stops a finite time after t_0.
Proof: A router that detects a change in the cost of any outgoing link must update its topology graph,
update its source tree as needed, and send an LSU if the link is added to or is updated in the source tree.
Let l be the link that last experiences a cost change up to t 0
, and let t l
be the time when the head of
the link l originates the last LSU of the sequence of LSUs originated as a result of the link-cost change
occurring up to t 0
. Any router that receives the LSU for link l originated at t l
must process the LSU
within a finite time, and decides whether or not to forward the LSU based on its updates to its source tree. A router can accept and propagate an LSU only once because each LSU has a sequence number; accordingly, given that G is finite, there can only be a finite chain of routers that propagate the LSU for link l originated at t_l, and the same applies to any LSU originated from the finite number of link-cost changes that occur up to t_0. Therefore, STAR stops the dissemination of LSUs a finite time after t_0. ∎
Because of Theorem 1, there must be a time t_s > t_0 when no more LSUs are being sent.
Theorem 2: In a connected network, and in the absence of link failures, all routers have the most
up-to-date link-states they need to compute shortest paths to all destinations within a finite time after t s .
Proof: The proof is by induction on the number of hops of a shortest path to a destination (the origin of an LSU).
Consider the shortest path from router s_0 to a destination j at time t_s, and let h be the number of hops along such a path. For h = 1, the path from s_0 to j consists of one of the router's outgoing links. By assumption, an underlying protocol informs the router of the correct parameter values of adjacent links within a finite time; therefore, the Theorem is true for h = 1, because s_0 must have link (s_0, j) in its source tree and must have sent its neighbors an LSU for that link. Assume that the Theorem is true for h = n hops, i.e., that any router with a path of n or fewer hops to j has the correct link-state information about all the links in the path and has sent LSUs to its neighbors with the most up-to-date state of each such link, and consider the case in which the path from s_0 to j at time t_s has h = n + 1 hops. Let s_1 be the next hop along the shortest path selected by s_0 to j at t_s. The sub-path from s_1 to j has n hops and, by the inductive assumption, such a path must be in the source tree of s_1 at time t_s, which implies that all the links in the path from s_1 to j are in its source tree, and that s_0 received and processed an LSU for each link in the path from s_1 to j with the most recent link-state information. Because it is also true that s_0 has the most recent link-state information about link (s_0, s_1), s_0 has the most recent information about all the links in its chosen path to j. The Theorem is therefore true, because the same argument applies to any chosen destination and router. ∎
Theorem 3: All the routers of a connected network have the most up-to-date link-state information
needed to compute shortest paths to all destinations.
Proof: The result is immediate from Theorem 2 in the absence of link failures. Consider the case in which the only link that fails in the network by time t_0 is link (s, d). Call this time t_f ≤ t_0. According to STAR's operation, router s sends an LSU reporting an infinite cost for (s, d) within a finite time after t_f; furthermore, every router receiving the LSU reporting the infinite cost of (s, d) must forward the LSU if the link exists in its topology graph, i.e., the LSU gets flooded to all routers in the network that had heard about the link, and this occurs within a finite time after t_0; moreover, links with infinite cost are not erased from the topology graph. It then follows that no router in the network can use link (s, d) for any shortest path within a finite time after t_0. Accordingly, within a finite time after t_0, all routers must use only links of finite cost in their source trees; together with Theorem 2, this implies that the Theorem is true. ∎
Theorem 4: If destination j becomes unreachable from a network component C at t_0, then within a finite time the topology graph of every router in C includes no finite-length path to j.
Proof: STAR's operation is such that, when a link fails, its head node reports an LSU with an infinite
cost to its neighbors, and the state of a failed link is flooded through a connected component of the
network to all those routers that knew about the link. Because a node failure equals the failure of all
its adjacent links, it is true that no router in C can compute a finite-length path to j from its topology
graph after a finite time after t 0
. 2
Note that, if a connected component remains disconnected from a destination j, all link-state information
corresponding to links for which j is the head node is erased from the topology graph of the routers
in the network component.
The previous theorems show that STAR obtains correct routing tables within a finite time after link
costs change, without the need to replicate topology information at every router (like OSPF does) or use
explicit delete updates to delete obsolete information (like LVA and ALP do).
5 Performance Evaluation
5.1 Simulation Experiments
STAR has the same communication, storage, and time complexity as ALP [10] and efficient table-driven
distance-vector routing protocols proposed to date (e.g., WRP [25]). However, worst-case performance is
not truly indicative of STAR's performance; accordingly, we ran a number of simulation experiments to
compare STAR's average performance against the performance of table-driven and on-demand routing
protocols.
The protocol stack implementation in our simulator runs the very same code used in a real embedded
wireless router and IP (Internet Protocol) is used as the network protocol.
The link layer implements a medium access control (MAC) protocol similar to the IEEE 802.11 standard
and the physical layer is based on a direct sequence spread spectrum radio with a link bandwidth of 1
Mbit/sec. The neighbor protocol is configured to report loss of connectivity to a neighbor if probing of the link fails over a period of about 10 seconds.
The simulation experiments use 20 nodes forming an ad-hoc network, moving over a flat space (5000m
x 7000m), and initially randomly distributed at a density of one node per square kilometer. Nodes move
in the simulation according to the "random waypoint" model [4]. Each node begins the simulation by
remaining stationary for pause time seconds. It then selects a random destination and moves to that
destination at a speed of 20 meters per second for a period of time uniformly distributed between 5 and
seconds. Upon reaching the destination, the node pauses again for pause time seconds, selects another
destination, and proceeds there as previously described, repeating this behavior for the duration of the
simulation.
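To make the mobility model concrete, here is a small, hypothetical Python sketch of the random waypoint behavior described above; the upper bound of the movement period is left as an explicit parameter because its value is not stated in the extracted text, and the other defaults simply mirror the numbers given in this section.

```python
import math
import random

def random_waypoint(sim_time, area=(5000.0, 7000.0), pause_time=30.0,
                    speed=20.0, min_move=5.0, max_move=60.0):
    """Generate (time, x, y) samples for one node following the random
    waypoint model.  max_move is an assumed upper bound for the uniformly
    distributed movement period."""
    t = 0.0
    x, y = random.uniform(0, area[0]), random.uniform(0, area[1])
    trace = [(t, x, y)]
    while t < sim_time:
        t += pause_time                                   # remain stationary
        dest = (random.uniform(0, area[0]), random.uniform(0, area[1]))
        duration = random.uniform(min_move, max_move)     # movement period
        dist = math.hypot(dest[0] - x, dest[1] - y)
        travel = min(speed * duration, dist)              # stop at destination
        frac = travel / dist if dist > 0 else 0.0
        x, y = x + frac * (dest[0] - x), y + frac * (dest[1] - y)
        t += travel / speed if speed > 0 else 0.0
        trace.append((t, x, y))
    return trace

print(random_waypoint(300.0)[:3])
```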
The simulation study was conducted in the C++ Protocol Toolkit (CPT) simulator environment. Two
sets of simulations were run. First, STAR based on ORA is compared against two other table-driven
routing protocols. Secondly, STAR based on LORA is compared with DSR, which has been shown to be
one of the best-performing on-demand routing protocols.
5.2 Comparison with Table-Driven Protocols
We chose to compare STAR against the traditional link-state approach and ALP [10]. The traditional
link-state approach (denoted by TOB for topology broadcast) corresponds to the flooding of link states
in a network, or within clusters coupled with flooding of inter-cluster connectivity among clusters. ALP
is a routing protocol based on partial link-state information that we have previously shown to outperform
prior table-driven distance-vector and link-state protocols. For these simulations STAR uses ORA since
both ALP and TOB attempt to provide paths that are optimum with respect to a defined metric. The
three protocols rely on the reliable delivery of broadcast packets by the link layer.
We ran our simulations with movement patterns generated for 5 different pause times: 0, 30, 45, 60,
and 90 seconds. A pause time of 0 seconds corresponds to continuous motion. The simulation time in all the simulation scenarios is 900 seconds.
As the pause time increases, we expect the number of update packets sent to decrease because the
number of link connectivity changes decreases. Because STAR and ALP generate LSUs for only those
links along paths used to reach destinations, we expect STAR and ALP to outperform any topology
broadcast protocol.
A router running ALP does not report to its neighbors the deletion of a link from its preferred paths
if the cost of the link has not increased (the state of the link transitions from 1 to 2). Consequently, all
the routers that have a link in state 2 in their topology graphs have to forward to their neighbors an LSU
that announces the failure of the link. Unlike ALP, routers running STAR only have in their topology
graphs link-state information for those links that are in the preferred paths of their neighbors, i.e., the
failure of a link will only make a router send an update message reporting the failure if the link is
in the preferred paths of the router. Because of the dynamics in link connectivity change in a wireless
mobile network we expect STAR to outperform ALP.
Figures
7 and 8 depict the performance of STAR, ALP, and TOB in terms of the number of update
packets generated as a function of simulation time for four different pause times, with simulation time on the horizontal axis. Table 1 summarizes the behavior of the three protocols according to the pause time of the nodes. The table shows the number of link connectivity changes and the total number of update packets generated by the routing protocols; ALP generates on average more than 4 times as many update packets as STAR, and topology broadcast generates more than 10 times as many packets as STAR.
Pause Time   Connectivity Changes   Packets Generated (STAR / ALP / TOB)
90           50                     138 / 623 / 1811
Table 1: Average performance of STAR, ALP, and the topology broadcast routing protocol for different pause times.
Figure
9 shows the small number of update packets transmitted by STAR when routers are in continuous
motion; for 1090 changes in link connectivity the routers generated 2542 packets. The performance
of ALP and TOB for pause time 0 could not be assessed because the amount of update packets generated by the routers led to congestion at the link layer.
Because STAR can be used in combination with any clustering scheme proposed in the past for packet-radio
networks, it is clear from this study that STAR should be used instead of ALP and topology
broadcast for the provision of QoS routing in packet radio networks, given that any overhead traffic
associated with clustering would be equivalent for STAR, ALP, and topology broadcast.
5.3 Comparison with On-Demand Routing Protocols
We compare STAR using LORA with DSR, because DSR has been shown to be one of the best-performing
on-demand routing protocols [4].
As we have stated, our simulation experiments use the same methodology used recently to evaluate
DSR and other on-demand routing protocols [4]. To run DSR in our simulation environment, we ported
the ns2 code available from [30] into the CPT simulator. There are only two differences in our DSR
implementation with respect to that used in [4]: (1) in the embedded wireless routers and simulated protocol stack we used, there is no access to the MAC layer, so packets already scheduled for transmission over a link cannot be rescheduled (however, this is the case for all the protocols we simulate), and (2)
routers cannot operate their network interfaces in promiscuous mode because the MAC protocol operates
over multiple channels and a router does not know on which channels its neighbors are transmitting,
unless the packets are meant for the router. Both STAR and DSR can buffer 20 packets that are awaiting
discovery of a route through the network.
The overall goal of the simulation experiments was to measure the ability of the routing protocols to
react to changes in the network topology while delivering data packets to their destinations. To measure
this ability we applied to the simulated network three different communication patterns corresponding to
8, 14, and 20 data flows. The total workload in the three scenarios was the same and consisted of 32 data packets/sec: in the scenario with 8 flows each continuous bit rate (CBR) source generated 4 packets/sec, in the scenario with 20 sources each CBR source generated 1.6 packets/sec, and in the scenario with 14 flows there were 7 flows from distinct CBR sources to the same destination D generating an aggregate of 4 packets/sec and 7 flows having D as the CBR source and the other 7 sources of data as destinations.
In all the scenarios the number of unique destinations was 8 and the packet size was 64 bytes. The data
flows were started at times uniformly distributed between 20 and 120 seconds (we chose to start the flows
after 20 seconds of simulated time to give some time to the Link Layer for determining the set of nodes
that are neighbors of the routers).
The protocol evaluations are based on the simulation of 20 wireless nodes in continuous motion (pause time of 0 seconds) during 900 and 1800 seconds of simulated time.
Tables
2 and 3 summarize the behavior of STAR and DSR according to the simulated time. The tables
show the total number of update packets transmitted by the nodes and the total number of data packets
delivered to the applications for the three simulated workloads. The total number of update packets
transmitted by routers running STAR varies with the number of changes in link connectivity while DSR
generates control packets based on both variation of changes in connectivity and the type of workload
inserted in the network. Routers running STAR generated fewer update packets than DSR in all simulated scenarios; the difference increased significantly when data traffic was inserted in the network for 1800
seconds: routers running DSR sent from 100% to 600% more update packets than STAR when nodes
were moving during 1800 seconds of simulated time, and from 35% to 400% more update packets when
nodes were moving during 900 seconds. Both STAR and DSR were able to deliver about the same number
of data packets to the applications in the simulated scenarios with 8 and 14 flows. When we increased
the number of sources of data from 8 to 20 nodes, while inserting the same number of data packets in the
network (32 packets/sec), we observed that STAR was able to deliver as much as three times more data
packets than DSR during 1800 seconds of simulated time, and almost twice the amount of data packets
delivered by DSR during 900 seconds of simulated time.
Table 2: Average performance of STAR and DSR (update packets sent, data packets delivered, and data packets generated, per number of flows) for nodes moving during 900 seconds of simulated time; the total number of changes in link connectivity was 1464.
Table 3: Average performance of STAR and DSR (update packets sent, data packets delivered, and data packets generated, per number of flows) for nodes moving during 1800 seconds of simulated time; the total number of changes in link connectivity was 2714.
Number of Flows   Protocol   Percentage of data packets delivered after 1 / 2 / 3 / 4 / 5 hops
                  DSR        64.9 / 31.2 / 2.6 / 1.3
14                STAR       82.0 / 16.0 / 1.7 / 0.3
                  DSR        64.1 / 26.9 / 4.0 / 4.5 / 0.5
                  DSR        61.9 / 32.4 / 5.1 / 0.3 / 0.3
Table 4: Distribution of DATA packets delivered according to the number of hops traversed from the source to the destination for nodes moving during 900 seconds of simulated time.
Number of Flows   Protocol   Percentage of data packets delivered after 1 / 2 / 3 / 4 / 5 hops
                  DSR        64.3 / 29.2 / 5.9 / 0.6
                  DSR        71.4 / 21.8 / 3.0 / 3.6 / 0.2
Table 5: Distribution of DATA packets delivered according to the number of hops traversed from the source to the destination for nodes moving during 1800 seconds of simulated time.
The MAC layer discards all packets scheduled for transmission to a neighbor when the link to the
neighbor fails, which contributes to the high loss of data packets seen by nodes. In DSR, each packet
header carries the complete ordered list of routers through which the packet must pass and may be
updated by nodes along the path towards the destination. The low throughput achieved by DSR for
the case of 20 sources of data is due to the poor choice of source routes the routers make, leading to a
significant increase in the number of ROUTE ERROR packets generated. Data packets are also discarded
due to lack of routes to the destinations because the network may become temporarily partitioned or
because the routing tables have not converged in the highly dynamic topology we simulate.
Figures
10(a) through 10(c) show the cumulative distribution of packet delay experienced by data packets
during 900 seconds of simulated time, for a workload of 8, 14, and 20 flows respectively. Figures 11(a)
through 11(c) show the cumulative distribution of packet delay during 1800 seconds of simulated time.
The higher delay introduced by DSR when relaying data packets is not directly related to the number of hops traversed by the packets (as shown in Tables 4 and 5) but to the poor choice of source routes when the number of flows increases from 8 to 20.
In all the simulation scenarios the number of destinations was set to just 40% of the number of nodes in
the network in order to be fair with DSR. For the cases in which all the nodes in the network receive data,
STAR would introduce no extra overhead while DSR could be severely penalized. It is also important
to note the low ratio of update messages generated by STAR compared to the number of changes in link
connectivity (Tables 2 and 3).
We note that in cases where routers fail or the network becomes partitioned for extended time periods
the bandwidth consumed by STAR is much the same as in scenarios in which no router fails, because all
that must happen is for updates about the failed links to unreachable destinations to propagate across
the network. In contrast, DSR and several other on-demand routing protocols would continue to send
flood-search messages trying to reach the failed destination, which would cause a worst-case bandwidth
utilization for DSR. To illustrate the impact that the failure of a single destination has on DSR, we re-ran the simulation scenario with 8 flows present in the network for 1800 seconds, making one of the destinations fail after 900 seconds of simulated time; routers running STAR sent 942 update packets while routers running DSR sent 3043 update packets. The existence of a single flow of data to a destination that was unreachable for 900 seconds made DSR generate 55% more update packets, while STAR generated about the same number of updates (see Table 3).
6 Conclusions
We have presented STAR, a link-state protocol that incurs less communication overhead than any prior table-driven routing protocol, and that also incurs less overhead than on-demand routing protocols.
STAR accomplishes its bandwidth efficiency by: (a) disseminating only that link-state data needed for
routers to reach destinations; (b) exploiting that information to ascertain when update messages must be
transmitted to detect new destinations, unreachable destinations, and loops; and (c) allowing paths to
deviate from the ideal optimum while not creating permanent loops. The bandwidth efficiency achieved
in STAR is critical for ad-hoc networks with energy constraints, because it permits routers to preserve
battery life for the transmission of user data while avoiding long-term routing loops or the transmission
of data packets to unreachable destinations.
Our simulation experiments show that STAR is an order of magnitude more efficient than the traditional
link-state approach, and more than four times more efficient than ALP (which has been shown
to outperform prior topology-driven protocols), in terms of the number of update packets sent. The
results of our experiments also show that STAR is more bandwidth efficient than DSR, which has been
shown to be one of the most bandwidth-efficient on-demand routing protocols. Because STAR can be
used with any clustering mechanism proposed to date, these results clearly indicate that STAR is a very
attractive approach for routing in packet-radio networks. Perhaps more importantly, the approach we
have introduced in STAR for least-overhead routing opens up many research avenues, such as developing
similar protocols based on distance vectors and determining how route aggregation works under LORA.
--R
"Hierarchical Routing Using Link Vectors,"
Data Networks
"A Performance Comparison of Multi-Hop Wireless Ad Hoc Network Routing Proto- cols,"
"The Dynamic Source Routing Protocol for Mobile Ad Hoc Networks,"
"Signal Stability-Based Adaptive Routing (SSA) for Ad-Hoc Mobile Networks,"
"Solutions to Hidden Terminal Problems in Wireless Networks,"
"Distributed, scalable routing based on vectors of link states,"
"Wireless Internet Gateways (WINGS),"
"Scalable Link-State Internet Routing,"
"The Zone Routing Protocol for Highly Reconfigurable Ad-Hoc Net- works,"
"Protocols for Adaptive Wireless and Mobile Networking,"
"The DARPA Packet Radio Network Protocols,"
"MACA - a new channel access method for packet radio,"
"Hierarchical Routing for Large Networks: Performance Evaluation and Optimization,"
"Proposed routing algorithms for the US Army mobile subscriber equipment (MSE) network,"
"OSPF Version 2,"
"A Highly Adaptive Distributed Routing Algorithm for Mobile Wireless Networks,"
"Highly Dynamic Destination-Sequenced Distance-Vector Routing (DSDV) for Mobile Computers,"
"Ad-Hoc On Demand Distance Vector (AODV) Routing,"
"Routing in Frequency-Hop Packet Radio Networks with Partial-Band Jamming,"
"Hierarchically-organized, Multihop Mobile Wireless Networks for Quality-of-Service Support,"
"An Adaptive Hierarchical Routing Algorithm,"
"An Efficient Routing Protocol for Wireless Networks,"
"Loop-Free Internet Routing Using Hierarchical Routing Trees,"
Routing in Communication Networks
"Wireless ATM & Ad-Hoc Networks,"
"A Five Phase Reservation Protocol (FPRP) for Mobile Ad-Hoc Networks,"
"Wireless and Mobility Extensions to ns-2 - Snapshot 1.0.0-beta,"
--TR
Data networks (2nd ed.)
Highly dynamic Destination-Sequenced Distance-Vector routing (DSDV) for mobile computers
Routing in communications networks
Solutions to hidden terminal problems in wireless networks
An efficient routing protocol for wireless networks
Hierarchically-organized, multihop mobile wireless networks for quality-of-service support
Location-aided routing (LAR) in mobile ad hoc networks
A distance routing effect algorithm for mobility (DREAM)
A performance comparison of multi-hop wireless ad hoc network routing protocols
Wireless ATM and Ad-Hoc Networks
Ad-hoc On-Demand Distance Vector Routing
A Highly Adaptive Distributed Routing Algorithm for Mobile Wireless Networks
Loop-Free Internet Routing Using Hierarchical Routing Trees
Source-Tree Routing in Wireless Networks
--CTR
Zhenjiang Li , J. J. Garcia-Luna-Aceves, Finding multi-constrained feasible paths by using depth-first search, Wireless Networks, v.13 n.3, p.323-334, June 2007 | mobile wireless networks;source routing tree;link-state routing protocol |
383773 | ad hoc multicast routing with mobility prediction. | An ad hoc wireless network is an infrastructureless network composed of mobile hosts. The primary concerns in ad hoc networks are bandwidth limitations and unpredictable topology changes. Thus, efficient utilization of routing packets and immediate recovery of route breaks are critical in routing and multicasting protocols. A multicast scheme, On-Demand Multicast Routing Protocol (ODMRP), has been recently proposed for mobile ad hoc networks. ODMRP is a reactive (on-demand) protocol that delivers packets to destination(s) on a mesh topology using scoped flooding of data. We can apply a number of enhancements to improve the performance of ODMRP. In this paper, we propose a mobility prediction scheme to help select stable routes and to perform rerouting in anticipation of topology changes. We also introduce techniques to improve transmission reliability and eliminate route acquisition latency. The impact of our improvements is evaluated via simulation. | Introduction
An ad hoc network is a dynamically reconfigurable
wireless network with no fixed infrastructures. Each
host acts as a router and moves in an arbitrary man-
ner. Ad hoc networks are deployed in applications such
as disaster recovery and distributed collaborative com-
puting, where routes are mostly multihop and network
hosts communicate via packet radios. In a typical ad
hoc environment, network hosts work in groups to carry
out the given task. Hence, multicast plays an important
role in ad hoc networks. Multicast routing protocols
used in static networks such as Distance Vector Multicast
Routing Protocol (DVMRP) [7], Multicast Open
Shortest Path First (MOSPF) [19], Core Based Trees
(CBT) [3], and Protocol Independent Multicast (PIM)
[8], however, do not perform well in ad hoc networks.
Multicast tree structures are fragile and must be readjusted
continuously as connectivity changes. Further-
more, multicast trees usually require a global routing
substructure such as link state or distance vector. The
frequent exchange of routing vectors or link state tables,
triggered by continuous topology changes, yields exces-
This work was funded in part by the Defense Advanced Re-search
Projects Agency (DARPA) under contract DAAB07-97-
C-D321, as a part of the Global Mobile Information Systems
(GloMo) program.
Now with eWings Technologies, Plano, TX.
sive channel and processing overhead. Limited band-
width, constrained power, and mobility of network hosts
make the multicast protocol design particularly challenging
To overcome these limitations, several multicast protocols
have been proposed [4,9,16,13,22,23,27]. In
this study, we will use On-Demand Multicast Routing
Protocol (ODMRP) [16,17] as the starting scheme.
ODMRP applies on-demand routing techniques to
avoid channel overhead and improve scalability. It uses
the concept of forwarding group [5], a set of nodes which
is responsible for forwarding multicast data on shortest
paths between any member pairs, to build a forwarding
mesh for each multicast group. By maintaining and
using a mesh, ODMRP avoids drawbacks of multicast
trees in mobile wireless networks (for example, intermittent
connectivity, traffic concentration, frequent tree
reconfiguration, non-shortest path in a shared tree).
ODMRP takes a soft-state approach to maintain multi-cast
members. No explicit control message transmission
is required to leave the group.
The major strengths of ODMRP are its simplicity
and scalability. We can further improve its performance
by several enhancements. In this paper, we propose new
techniques to enhance the effectiveness and efficiency of
ODMRP. Our primary goals are the following:
Figure 1. On-demand procedure for membership setup and maintenance.
ffl Improve adaptivity to node movement patterns
ffl Transmit control packets only when necessary
ffl Reconstruct routes in anticipation of topology changes
ffl Improve hop-by-hop transmission reliability
ffl Eliminate route acquisition latency
ffl Select stable routes
The remainder of the paper is organized as fol-
lows. Section 2 overviews the basic mechanism of
ODMRP. Section 3 describes new enhancements applied
to ODMRP. Section 4 follows with the simulation
results and concluding remarks are made in Section 5.
2. ODMRP Overview
ODMRP establishes and maintains group membership
and multicast routes by the source on demand.
Similar to on-demand unicast routing protocols, a query
phase and a reply phase comprise the protocol (see Figure
1). While a multicast source has packets to send, it
periodically broadcasts to the entire network a member
advertising packet, called Join Request. This periodic
transmission refreshes the membership information
and updates the routes as follows. When a node receives
a non-duplicate Join Request, it stores the upstream
node address in its route table (i.e., backward learning)
and rebroadcasts the packet. When the Join Request
packet reaches a multicast receiver, the receiver creates
or updates the source entry in its Member Table. While
valid entries exist in the Member Table, Join Tables
are broadcasted periodically to the neighbors. When a
node receives a Join Table, it checks if the next node address of one of the entries matches its own address.
Figure 2. The forwarding group concept (multicast member nodes and forwarding group nodes).
If matched, the node realizes that it is on the path to
the source and thus is a part of the forwarding group.
It then sets the FG FLAG and broadcasts its own Join
Table built upon matched entries. Each forwarding
group member hence propagates the Join Table until
the packet reaches the multicast source via the shortest
path. This process constructs (or updates) the routes
from sources to receivers and builds a mesh of nodes,
the forwarding group.
We visualize the forwarding group concept in Figure
2. The forwarding group is a set of nodes which
is in charge of forwarding multicast packets. It supports
shortest paths between any member pairs. All
nodes inside the "bubble" (multicast members and forwarding
group nodes) forward multicast data packets.
Note that a multicast receiver also can be a forwarding
group node if it is on the path between a multicast
source and another receiver. The mesh provides richer
connectivity among multicast members compared with
trees. Flooding redundancy among forwarding group
helps overcome node displacements and channel fading.
Hence, unlike trees, meshes do not require frequent reconfigurations.
An example in Figure 3 illustrates the robustness of a mesh configuration. Three sources (S1, S2, and S3) send multicast data packets to three receivers (R1, R2, and R3) via three forwarding group nodes (A, B, and C). Suppose the route from S1 to R2 is (S1-A-B-R2). In a tree configuration, if the link between nodes A and B breaks or fails, R2 cannot receive any packets from S1 until the tree is reconfigured. ODMRP, on the other hand, already has a redundant route (S1-A-C-R2) to deliver packets without going through the disconnected link between nodes A and B.
Figure 3. Why a mesh? (Sources S1, S2, S3; receivers R1, R2, R3; forwarding nodes A, B, C.)
Let us consider Figure 4 as an example of a Join
Table forwarding process. Nodes S 1 and S 2 are multicast
sources, and nodes R 1 , R 2 , and R 3 are multicast
receivers. Nodes R 2
and R 3
send their Join Tables to
both S 1 and S 2 via I 2 . R 1 sends its packet to S 1 via I 1
and to S 2 via I 2 . When receivers send their Join Tables
to next hop nodes, an intermediate node I 1 sets
the FG FLAG and builds its own Join Table since there
is a next node address entry in the Join Table received
from R 1 that matches its own address. Note that the
Join Table built by I 1 has an entry for sender S 1 but
not for S 2 because the next node for S 2 in the received
Join Table is not I 1 . In the meantime, node I 2 sets
the FG FLAG, constructs its own Join Table and sends
the packet to its neighbors. Even though I 2 receives
three Join Tables from the receivers, it broadcasts the
Join Table only once because the second and third table
arrivals carry no new source information. Channel
overhead is thus reduced dramatically in cases where
numerous multicast receivers share the same links to
the source.
After this group establishment and route construction
process, a multicast source can transmit packets to
receivers via selected routes and forwarding groups. Periodic
control packets are sent only when outgoing data
packets are still present. When receiving a multicast
data packet, a node forwards the packet only if it is not
a duplicate and the setting of the FG FLAG for the multicast
group has not expired. This procedure minimizes
traffic overhead and prevents sending packets through
stale routes.
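To make the forwarding-group bookkeeping concrete, the hypothetical Python fragment below sketches how a node might process a received Join Table; the field names and timeout value are assumptions for illustration, not part of the ODMRP specification.

```python
import time

FG_TIMEOUT = 3.0  # assumed forwarding-group lifetime in seconds

def process_join_table(my_addr, join_table, route_table, fg_state):
    """join_table: list of (source, next_node) entries received from a neighbor.
    route_table: {source: next_hop_toward_source} learned in the request phase.
    fg_state:    mutable dict in which the FG_FLAG expiration time is stored."""
    matched = [(src, route_table[src]) for src, nxt in join_table
               if nxt == my_addr and src in route_table]
    if not matched:
        return None                                  # not on a path to any source
    fg_state["expires"] = time.time() + FG_TIMEOUT   # set FG_FLAG (soft state)
    # Build this node's own Join Table from the matched entries; the caller
    # would broadcast it toward the listed sources.
    return matched

# Example: node I1 receives R1's Join Table listing (S1, I1) and (S2, I2).
print(process_join_table("I1", [("S1", "I1"), ("S2", "I2")],
                         {"S1": "S1", "S2": "I2"}, {}))
```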
Figure 4. An example of Join Table forwarding: the Join Tables of node R1 and of node I1, each listing (sender, next node) entries.
In ODMRP, nodes do not need to send explicit control packets to leave the group. If a multicast source
wants to leave the group, it simply stops sending Join
Request packets since it does not have any multicast
data to send to the group. If a receiver no longer wants
to receive from a particular multicast group, it removes
the corresponding entries from its Member Table and
does not send the Join Table for that group. Nodes
in the forwarding group are demoted to non-forwarding
nodes if not refreshed (no Join Tables received) before
they timeout.
Unicast routing capability is one of the major
strengths of ODMRP. Not only can ODMRP coexist
with any unicast routing protocol, it can function as
both multicast and unicast. Thus, ODMRP can run
without any underlying unicast protocol. Other ad hoc
multicast protocols such as Adhoc Multicast Routing
Protocol (AMRoute) [4], Core Assisted Mesh Protocol
(CAMP) [9], Reservation-Based Multicast (RBM) [6],
and Lightweight Adaptive Multicast (LAM) [13] must
be run on top of a unicast routing protocol. CAMP,
RBM, and LAM in particular, only work with certain
underlying unicast protocols.
3. Enhancements
3.1. Adapting the Refresh Interval via Mobility
Prediction
ODMRP requires periodic flooding of Join Requests
to build and refresh routes. Excessive flooding,
however, is not desirable in ad hoc networks because
of bandwidth constraints. Furthermore, flooding often
causes congestion, contention, and collisions. Finding
the optimal refresh interval is critical in ODMRP per-
formance. Here we propose a scheme that adapts the
route refresh interval to mobility patterns and speeds
[24]. By utilizing the location and mobility information
provided by GPS (Global Positioning System) [15], we
predict the duration of time routes will remain valid. 1
With the predicted time of route disconnection, Join
Requests are only flooded when route breaks of ongoing
data sessions are imminent.
In our prediction method, we assume a free space
propagation model [21], where the received signal
strength solely depends on its distance to the trans-
mitter. We also assume that all nodes in the network
have their clocks synchronized (for example, by using
the NTP (Network Time Protocol) [18] or the GPS
clock itself). 2 Therefore, if we know the motion parameters
of two neighbors (such as speed, direction, radio
propagation range), we can determine the duration of
time these two nodes will remain connected. Assume
two nodes i and j are within the transmission range r
of each other. Let
) be the coordinate of mobile
host i and be that of mobile host j. Also let v i
be the speeds, and ' i
(0
be the moving directions of nodes i and j, respectively.
Then, the amount of time that they will stay connected,
, is predicted by:
where
sin
sin
, and
Note that when v is set to 1
without applying the above equation.
To utilize the information obtained from the predic-
tion, Join Request and Join Table packets must add
extra fields. When a source sends Join Requests, it
appends its location, speed, and direction. It sets the
MIN LET (Minimum Link Expiration Time) field to
the MAX LET VALUE since the source does not have any
previous hop node. The next hop neighbor, upon receiving
a Join Request, predicts the link expiration
1 We can obtain mobility speed and heading information from
GPS or the node's own instruments and sensors (for example,
compass, odometer, speed sensors).
Time synchronization of the nodes is done only at the boot time.
Once nodes have powered up and their clocks are synchronized,
it is not required to perform periodic updates (although we can
still perform periodic updates in large intervals).
time between itself and the previous hop using the equation
(3.1). The minimum between this value and the
MIN LET indicated by the Join Request is included
in the packet. The rationale is that as soon as a single
link on a path is disconnected, the entire path is
invalidated. The node also overwrites the location and
mobility information field written by the previous node
with its own information. When a multicast member
receives the Join Request, it calculates the predicted
LET of the last link of the path. The minimum between
the last link expiration time and the MIN LET value
specified in the Join Request is the RET (Route Expiration
Time). This RET value is enclosed in the Join
Table and broadcasted. If a forwarding group node receives
multiple Join Tables with different RET values
(i.e., lies in paths from the same source to multiple re-
ceivers), it selects the minimum RET among them and
sends its own Join Table with the chosen RET value
attached. When the source receives Join Tables, it
selects the minimum RET among all the received Join
Tables
. Then the source builds new routes by flooding
a Join Request before the minimum RET approaches
(i.e., route breaks). Note that multicast receivers need
not periodically transmit Join Tables. Since sources
flood Join Requests only when needed, receivers only
send Join Tables after receiving Join Requests.
In addition to the estimated RET value, we need
to consider other factors when choosing the route refresh
interval. If the node mobility rate is high and the
topology changes frequently, routes will expire quickly
and often. The source may propagate Join Requests
excessively and this excessive flooding can cause collisions
and congestion, and clogs the network with control
packets. Thus, the MIN REFRESH INTERVAL should
be enforced to avoid control message overflow. On the
other hand, if nodes are stationary or move slowly and
link connectivity remains unchanged for a long duration
of time, routes will hardly expire and the source
will rarely send Join Requests. A few problems arise
in this situation. First, if a node in the route suddenly
changes its movement direction or speed, the
predicted RET value becomes obsolete and we cannot
reconstruct routes in time. Second, when a non-member
node located remotely from multicast members
wants to join the group, it cannot inform the new
membership or receive data until it receives a Join Re-
quest. Hence, the MAX REFRESH INTERVAL should be
set. The selection of the MIN REFRESH INTERVAL and
the MAX REFRESH INTERVAL should be adaptive to net-work
situations (among others, traffic type, traffic load,
mobility pattern, mobility speed, channel capacity).
3.1.1. Alternative Method of Prediction
Since GPS may not work properly in certain situations
(for instance, indoor, fading), we are not always
able to accurately predict the link expiration time for
a particular link. Nevertheless, there is an alternative
method to predict the LET. This method is based on a
more realistic propagation model and is proposed in [1]
and [20]. Basically, a node periodically measures transmission
power samples from packets received from its
neighbor. From this information, the node computes
the change rate for a particular neighbor's transmission
power level. Therefore, it can predict the time when the
transmission power level will drop below the acceptable
value (hysteresis region).
3.2. Route Selection Criteria
In the basic ODMRP, a multicast receiver selects
routes based on the minimum delay (i.e., the route
taken by the first received Join Request). We can
apply a different route selection method when using
the mobility prediction. The idea is inspired by the
Associativity-Based Routing (ABR) protocol [25] which
chooses associatively stable routes. In our new algo-
rithm, instead of using the minimum delay path, we
choose a route that is the most stable (the one with the
largest RET). To select a route, a multicast receiver
must wait for an appropriate amount of time after receiving
the first Join Request so that it will know
all possible routes and their RETs. The receiver then
chooses the most stable route and broadcasts a Join
Table
. Route breaks will occur less often and the number
of Join Request propagation will reduce because
we use stable routes. An example that shows the difference
between two route selection algorithms is presented
in Figure 5. Two routes are available from the
source S to the receiver R. Route 1 has a path of (S-
A-B-R) and route 2 has a path of (S-A-C-R). If we
use the minimum delay as the route selection metric,
the receiver node R selects route 1. Route 1 has a delay of seven and route 2 has a delay of nine. Since the Join Request that takes route 1 reaches the receiver first, node R chooses route 1. If we select the stable route instead, the receiver chooses route 2. The route expiration time of route 1 is two, while that of route 2 is four (min(5, 5, 4) = 4). The receiver selects the route with the maximum RET, and hence selects route 2. We evaluate the different route selection methods by simulation in Section 4.
Figure 5. Route selection example: each link is labeled (i, j), where i is the link delay and j is the link expiration time. Route 1 follows the path S-A-B-R with delay 7; route 2 follows S-A-C-R with delay 9.
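A minimal sketch of this selection rule, assuming the receiver has gathered (route, RET) candidates during its waiting window (the names are illustrative):

```python
def select_stable_route(candidates):
    """candidates: list of (route, ret) pairs gathered from Join Requests
    received during the waiting window.  Returns the route with the largest
    Route Expiration Time (RET), i.e., the most stable one."""
    return max(candidates, key=lambda pair: pair[1])[0]

# Route 1 (delay 7) expires in 2 s; route 2 (delay 9) expires in 4 s.
routes = [(["S", "A", "B", "R"], 2.0), (["S", "A", "C", "R"], 4.0)]
print(select_stable_route(routes))  # ['S', 'A', 'C', 'R']
```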
3.3. Reliability
The reliable transmission of Join Tables plays an
important role in establishing and refreshing multicast
routes and forwarding groups. If Join Tables are
not properly delivered, ODMRP cannot achieve effective
multicast routing. The IEEE 802.11 MAC protocol
[11], which is the emerging standard in wireless
networks, performs reliable transmission by retransmitting
the packet if no acknowledgment is received. If the
packet is broadcasted, however, no acknowledgments or
retransmissions are sent. In ODMRP, the transmission
of Join Tables are broadcasted when there are multiple
entries. Thus, ODMRP must perform the hop-by-
hop Join Table delivery verification and retransmission
We adopt a scheme that was used in [14]. Figure 6
illustrates the mechanism. When node B transmits a
packet to node C after receiving a packet from node A,
node A can hear the transmission of node B if it is still
within B's radio propagation range. The packet transmission
by node B to node C is hence used as a passive
acknowledgment to node A. We can utilize this passive
acknowledgment to verify the delivery of a Join
Table
. Multicast sources must send active acknowledgments
to the previous hops since they do not have
any next hops to send Join Tables to unless they are
forwarding group nodes. When the node does not receive
any acknowledgment within the timeout interval,
Transmission
Passive Ack
Transmission
Figure
6. Passive acknowledgments.
it retransmits the message. If the node cannot verify the
packet delivery after an appropriate number of retrans-
missions, it considers the route to be invalidated. The
node then broadcasts a message to its neighbors specifying
that the next hop to the source cannot be reached.
Upon receiving this packet, each neighbor builds and
unicasts the Join Table to its next hop if it has a
route to the multicast source. If no route is known, it
simply broadcasts the packet specifying the next hop is
not available. In both cases, the node sets its FG FLAG.
The FG FLAG setting of every neighbor may create excessive
redundancy, but most of these settings will expire
because only necessary forwarding group nodes will be
refreshed in the next Join Table propagation phase.
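As a rough illustration of this hop-by-hop verification, the following hypothetical Python fragment shows the bookkeeping a node might keep for an unacknowledged Join Table; the timer values and field names are assumptions, not values from the protocol specification.

```python
import time

MAX_RETRIES = 3     # assumed retransmission limit
ACK_TIMEOUT = 0.5   # assumed per-hop acknowledgment timeout in seconds

def check_pending(pending, now=None):
    """pending: list of dicts with keys 'next_hop', 'sent_at', 'retries',
    and 'acked'.  Returns ('retransmit' | 'route_invalid' | 'ok', entry)."""
    now = time.time() if now is None else now
    for entry in pending:
        if entry["acked"]:
            continue                      # passive (or active) ack overheard
        if now - entry["sent_at"] < ACK_TIMEOUT:
            continue                      # still waiting
        if entry["retries"] < MAX_RETRIES:
            entry["retries"] += 1
            entry["sent_at"] = now
            return "retransmit", entry
        # Delivery could not be verified: treat the route as invalidated and
        # tell the neighbors that the next hop toward the source is unreachable.
        return "route_invalid", entry
    return "ok", None

pkt = {"next_hop": "B", "sent_at": 0.0, "retries": 0, "acked": False}
print(check_pending([pkt], now=1.0))
```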
3.4. Elimination of Route Acquisition Latency
The major drawback of on-demand routing protocols
is the delay required to obtain a route. This route acquisition
latency makes on-demand protocols less attractive
in networks where real-time traffic is exchanged.
In the basic ODMRP, when the source does not have
any multicast route information, it postpones the data
transmission for a certain period of time. In contrast to
unicast routing, the selection of the waiting time is not
straightforward. In unicast, the source can send data as
soon as it receives a Route Reply. In ODMRP, how-
ever, sources cannot transmit data immediately after
receiving the first Join Table since routes to receivers
that are farther away may not yet have been established.
To eliminate these problems, when a source has data
to send but does not know any multicast route, it floods
the data instead of the Join Request. The data packet
also replaces the periodic transmission of Join Re-
quests. 3 Basically, Join Data becomes a Join Request
with data payload attached. The flooding of
Join Data achieves data delivery in addition to constructing
and refreshing the routes. Although the size
of the flooded packet is larger compared with Join Re-
quests, route acquisition latency is eliminated.
4. Performance Evaluation
4.1. Simulation Environment
We implemented the simulator within the Global
Mobile Simulation (GloMoSim) library [26]. The GloMoSim
library is a scalable simulation environment for
wireless network systems using the parallel discrete-event
simulation capability provided by PARSEC [2].
Our simulation modeled a network of 50 mobile hosts
placed randomly within a 1000m x 1000m area. Radio
propagation range for each node was 250 meters and
channel capacity was 2 Mbits/sec. Each simulation executed
for 600 seconds of simulation time. We conducted
multiple runs with different seed numbers for each scenario
and averaged collected data over those runs.
We used a free space propagation model [21] with a
threshold cutoff in our experiments. In the free space
model, the power of a signal attenuates as 1=d 2 where d
is the distance between radios. In the radio model, we
assumed the ability of a radio to lock on to a sufficiently
strong signal in the presence of interfering signals, i.e.,
capture. If the capture ratio (the minimum ratio
of an arriving packet's signal strength relative to those
of other colliding packets) [21] was greater than the
predefined threshold value, the arriving packet was received
and other interfering packets were dropped. We
used the IEEE 802.11 Distributed Coordination Function
(DCF) [11] as the medium access control proto-
col. The scheme used was Carrier Sense Multiple Ac-
cess/Collision Avoidance (CSMA/CA) with acknowl-
edgments. We developed a traffic generator to simulate
constant bit rate sources. The size of data payload was
512 bytes. Each node moved constantly with the pre-defined
speed. Moving direction was selected randomly,
and when nodes reached the simulation terrain bound-
ary, they bounced back and continued to move. We sim-
3 To differentiate between the flooded data that performs the Join
Request role and the ordinary data, we term the flooded data
packet as Join Data.
ulated one multicast group with one source. The multicast
members and the source were chosen randomly
with uniform probabilities. Members joined the group
at the start of the simulation and remained as members
throughout the simulation.
4.2. Methodology
To investigate the impact of our enhancements, we
simulated the following three schemes:
1. Scheme A: the basic ODMRP as specified in [10],
2. Scheme B: the enhanced ODMRP that uses the
minimum delay as the route selection metric, and
3. Scheme C : the enhanced ODMRP that uses the
route expiration time as the route selection metric.
Both enhanced schemes included reliable transmission
and route acquisition latency elimination features. We
evaluate the protocols as a function of speed and multicast
group size. In the first set of experiments, we set
the size of the multicast group constant to ten and vary
the speed from 0 km/hr to 72 km/hr. In the second set
of simulations, we set the node mobility speed constant
at km/hr and vary the multicast group size from two
(unicast) to twenty. The metrics of interest are:
ffl Packet delivery ratio: The number of data packets
actually received by multicast members over the
number of data packets supposed to be received by
multicast members.
ffl End-to-end delay: The time elapsed between the
instant when the source has data packet to send and
the instant when the destination receives the data.
Note that if no route is available, the time spent
in building a route (route acquisition latency) is included
in the end-to-end delay.
ffl Control overhead: The total control bytes trans-
mitted. We calculate bytes of data packet and Join
Data headers in addition to bytes of control packets
(Join Requests, Join Tables, active acknowledg-
ments) as control overhead.
ffl Number of total packets transmitted per data
packet delivered: The number of all packets (data
and control packets) transmitted divided by data
packet delivered to destinations. This measure shows
the efficiency in terms of channel access and is very
important in ad hoc networks since link layer protocols
are typically contention-based.0.60.81
Packet
Delivery
Mobility Speed (km/hr)
scheme A
scheme B
scheme C
Figure
7. Packet delivery ratio as a function of speed.
4.3. Simulation Results
4.3.1. Packet Delivery Ratio
The packet delivery ratio as a function of the mobility
speed and the multicast group size is shown in Figures
7 and 8, respectively. We can see from Figure 7 that
as speed increases, the routing effectiveness of scheme
A degrades rapidly compared with schemes B and C.
Both schemes B and C have very high delivery ratios of
over 96% regardless of speed. As they reconstruct the
routes in advance of topology changes, most data are
delivered to multicast receivers without being dropped.
Scheme A, however, periodically transmits Join Requests
and Join Tables (every 400 ms and 180 ms,
respectively) without adapting to mobility speed and
direction. Frequent flooding resulted in collisions and
congestion, leading to packet drops even in low mobility
rates. At high speed, routes that are taken at the Join
Request phase may already be broken when Join Tables
are propagated. In scheme A, nodes do not verify
the reception of transmitted Join Tables. Most Join
Tables failed to reach the source and establish the forwarding
group. Thus, when the source sends the data,
the multicast route is not properly built and packets can
not be delivered. Both schemes B and C enforce reliable
Join Table transmissions. The schemes appropriately
establish and refresh the routes and forwarding group
nodes even in high mobility situations and they proved
to be robust to the mobility speed.
In Figure 8, schemes B and C outperform scheme A
again. The result shows that our enhanced protocols
are robust to multicast group size in addition to mobility speed.
Figure 8. Packet delivery ratio as a function of number of multicast members.
Figure 9. End-to-end delay as a function of speed.
Scheme A's performance improves as the size
becomes larger. As the number of receivers increases,
the number of forwarding group nodes increases accord-
ingly. Hence, the connectivity of the multicast mesh
becomes richer and the redundancy of the paths helps
delivering data to destinations.
4.3.2. End-to-End Delay
Figures 9 and 10 show the end-to-end delay of each
scheme. Schemes B and C have shorter delays compared
with that of scheme A. In scheme A, sources flood
Join Requests and must wait for a certain amount of
time to send data until routes are established among
multicast members. In schemes B and C, on the con-
trary, sources flood Join Data immediately even before
they construct routes and forwarding group.
Figure 10. End-to-end delay as a function of number of multicast members.
Figure 11. Control overhead as a function of speed.
The
route acquisition latency is eliminated and packets are
delivered to receivers in shorter delays. One may be
surprised to see that the delay of scheme B which uses
the minimum delay route is larger than that of scheme
C which uses the stable (and possibly longer delay)
route. Even though the route taken by Join Data
is the shortest delay route at that instant, it may not
be the minimum delay route later on as nodes move.
In addition, compared with stable routes, the minimum
delay routes disconnect more frequently which results
in data packets traversing through alternate and longer
routes formed by forwarding group nodes.
4.3.3. Control Overhead
Figure
11 shows the control byte overhead as a function
of mobility speed for each protocol.
Figure 12. Control overhead as a function of number of multicast members.
Remember
that the transmission of control packets in scheme A is
time triggered only without adapting to mobility speed.
Hence, the amount of control overhead does not increase
as the mobility speed increases. In fact, control overhead
decreases as nodes move faster. As Join Tables
are less likely to reach the target nodes in a highly mobile
environment, the Join Table propagations by the
next nodes are triggered fewer. Furthermore, data packets
(whose header is calculated as control overhead), are
transmitted fewer because forwarding group nodes and
routes are not established or refreshed appropriately as
the speed increases. On the other hand, the overhead of
schemes B and C goes up as mobility speed increases.
Since they use mobility prediction to adapt to mobility
speed, they send more Join Data and Join Tables
when mobility is high. In addition, Join Table
retransmission and active acknowledgment propagation
also increase with mobility and add to the control over-
head. It is important to observe that the overhead of
schemes B and C are both significantly less than that
of scheme A in low mobility cases because schemes B
and C transmit control packets only when necessary.
The enhanced schemes have more overhead when nodes
move fast, but the extra control packets are used efficiently
in delivering data (see Figure 7). When comparing
scheme B with scheme C, we can see that scheme B
yields more overhead in low mobility although both
schemes produce nearly equal amount of overhead in
high mobility. Since scheme C chooses a stable route,
Join Data are flooded less often. When nodes move
relatively fast (for example, 72 km/hr in our simulation), however, routes are broken often and links will
remain connected for a short duration of time. Sources
are thus likely to use MIN REFRESH INTERVAL and the
overheads incurred by schemes B and C become almost
identical.
Figure 13. The Number of Total Packets Transmitted per Data Packet Delivered as a Function of Speed.
In
Figure
12, control overhead of all schemes increases
when the number of multicast group increases.
As there are more multicast receivers, more Join Tables
are built and propagated. Schemes B and C have
much less overhead than that of scheme A. Scheme A
periodically sends Join Requests and Join Tables,
but enhanced schemes send Join Data and Join Tables
only in advance of topology changes. As expected,
scheme C further improves scheme B. The number of
control packet transmissions are less as scheme C uses
stable routes.
4.3.4. Number of Total Packets Transmitted per Data
Packet Delivered
The number of total packets (Join Requests, Join
Tables, Join Data, Data, and active acknowledg-
ments) transmitted per data packet delivered is presented
in Figures 13 and 14. We have mentioned previously
that this measure indicates the channel access
efficiency. We can see the improvements made by enhanced
schemes from the results. In Figure 13, the number
for scheme A remains relatively constant to mobility
speed. As shown in Figures 7 and 11, the number of
data packets delivered and the amount of control bytes
transmitted both decrease as mobility increases. The
number for scheme A thus remains almost unchanging.
Figure 14. The Number of Total Packets Transmitted per Data Packet Delivered as a Function of Number of Multicast Members.
The measures for schemes B and C gradually increase
with mobility speed. Both schemes deliver a high portion
of the data to destinations regardless of speed (see
Figure
7) and the number of data packets delivered remains
similar. Nevertheless, more control packets must
be sent in order to adapt to node mobility speed, and
thus the total number of transmitted packets increases
with speed.
In
Figure
14, the number of all packets transmitted
per data packet delivered decreases as the group size becomes
larger for all schemes. This result is expected as
the number of multicast members increases, the number
of data packets received by members increases accord-
ingly. Again, schemes B and C have greatly improved
the efficiency of scheme A.
5. Conclusion
We presented new techniques to improve the performance
of ODMRP. By using the mobility and link connectivity
prediction, we reconstruct routes and forwarding
groups in anticipation of topology changes. This
adaptive selection of the refresh interval avoids the un-necessary
control packet transmissions and the resulting
bandwidth wastage. We applied a new route selection
algorithm to choose routes that will stay valid for the
longest duration of time. The usage of stable routes
further reduces the control overhead. We used passive
acknowledgments and retransmissions to improve the
reliable Join Table delivery. The improved reliability
plays a factor in protocol enhancement since the
delivery of Join Tables is critical in establishing the
routes and forwarding group nodes. We also introduced
a method to eliminate the route acquisition latency.
Simulation results showed that our new methods improved
the basic scheme significantly. More data packets
were delivered to destinations, fewer control packets
were produced in low mobility, control packets were utilized
more efficiently in high mobility, and end-to-end
delay was shorter. The enhanced ODMRP is scalable,
robust to host mobility, and efficient in channel access.
Acknowledgements
The authors thank Dr. Ching-Chuan Chiang and Guangyu
Pei for their contributions.
References
"Optimal Prioritization of Handovers in Mobile Cellular Networks,"
"PARSEC: A Parallel Simulation Environment for Complex Systems,"
"Core Based Trees (CBT) - An Architecture for Scalable Inter-Domain Multicast Routing,"
"AMRoute: Adhoc Multicast Routing Protocol,"
"Forwarding Group Multicast Protocol (FGMP) for Multihop, Mobile Wireless Networks,"
"A Reservation-Based Multicast (RBM) Routing Protocol for Mobile Networks: Initial Route Construction Phase,"
"Multicast Routing in Datagram Internetworks and Extended LANs,"
"The PIM Architecture for Wide-Area Multicast Routing,"
"A Multicast Routing Protocol for Ad-Hoc Networks,"
"On-Demand Multicast Routing Protocol (ODMRP) for Ad Hoc Networks,"
IEEE Computer Society LAN MAN Standards Committee
Internet Engineering Task Force (IETF) Mobile Ad Hoc Networks (MANET) Working Group Charter
"A Lightweight Adaptive Multicast Algorithm,"
"The DARPA Packet Radio Network Protocols,"
Understanding the GPS: Principles and Applications
"On-Demand Multicast Routing Protocol,"
"On-Demand Multicast Routing Protocol (ODMRP) for Ad Hoc Networks,"
"Internet Time Synchronization: the Network Time Protocol,"
"Multicast Routing Extensions for OSPF,"
"Minimizing Cellular Handover Failures Without Channel Utilization Loss,"
Principles and Practice
"Multicast Operation of the Ad-hoc On-Demand Distance Vector Routing Protocol,"
"MCEDAR: Multicast Core-Extraction Distributed Ad hoc Routing,"
"Mobility Prediction and Routing in Ad Hoc Wireless Networks,"
"Associativity-Based Routing for Ad-Hoc Mobile Networks,"
UCLA Parallel Computing Laboratory and Wireless Adaptive Mobility Laboratory
"AMRIS: A Multicast Protocol for Ad hoc Wireless Networks,"
| ad hoc networks;mobile computing;multicast and routing protocols;mobility prediction
383780 | On knowledge-based programming with sensing in the situation calculus. | We consider a class of knowledge-based Golog programs with sense actions. These programs refer explicitly to an agent's knowledge, and are designed to execute on-line, and under a dynamic closed-world assumption on knowledge. On-line execution of sense actions dynamically updates the background axioms with sentences asserting knowledge of the sense actions' outcomes. We formalize what all this might mean, and show that under suitable assumptions the knowledge modality in such programs can be implemented by provability. This leads to an on-line Golog interpreter for such programs, which we demonstrate on a knowledge-based program with sensing for the blocks world. | INTRODUCTION
Our concern will be with knowledge-based programs, specifically, Golog programs
[Levesque et al. 1997] that appeal to knowledge and actions, including sense actions.
As an example, we consider the blocks world, and a program that explicitly refers to
an agent's knowledge, and to the sense actions she can perform to gain information
about the world. We imagine that initially the agent is positioned in front of a
table of blocks, with no prior knowledge about which blocks are where. The only
information she is given in advance is an enumeration of all the (finitely many)
blocks. Of course, the agent needs to know that this is a blocks world, so we
include suitable successor state and action precondition axioms. But the agent
definitely knows nothing about the initial configuration of blocks. For that matter,
neither do we, the program designers. Our program will allow the agent to gain the
information she needs, and to carry out the actions required to place all the blocks
onto the table:
Author's address: Department of Computer Science, University of Toronto, Toronto, Canada M5S
proc allToTable(b)
  (π x)[ ¬(x ∈ b)? ;
    if ¬KWhether(clear(x)) then sense_clear(x) endIf ;
    if Knows(¬clear(x)) then allToTable({x} ∪ b)
    else if ¬KWhether(ontable(x)) then sense_ontable(x) endIf ;
      if Knows(ontable(x)) then allToTable({x} ∪ b)
      else moveToTable(x) ; allToTable({ }) endIf
    endIf ]
endProc
This procedure appeals to Golog operators that should be intuitively clear, with the
possible exception of that for nondeterministic choice, (π x)δ(x), whose reading is
"Nondeterministically choose an x, and for that choice, do the program δ(x)." The
parameter b in allToTable(b) stands for the set of those blocks that the agent has
considered, and rejected, as candidates for moving to the table since her previous
moveToTable action. The test action ¬(x ∈ b)? prevents her from ever reconsidering
such a block. Thus, the initial call to this procedure is with allToTable({ }). The
procedure assumes two knowledge-producing actions,
sense_ontable(x) and sense_clear(x),
whose action preconditions are always true. This program attempts to minimize
the number of sense actions the agent performs by first checking that she doesn't
currently know the truth value of the sensed fluent. For example, the program
fragment if ¬KWhether(clear(x)) then sense_clear(x) endIf instructs the agent
to sense whether x is clear only when she does not already know whether x is clear.
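For concreteness, here is one way this procedure might be written down for a Prolog-based Golog interpreter in the style of [Levesque et al. 1997]. The concrete syntax assumed (proc/2 declarations, lists for sequencing, pi/2 for nondeterministic choice, ?/1 for tests, - for negation) and the action names senseClear and senseOnTable are assumptions about one particular interpreter, not the encoding used later in this paper.

% A sketch only.  The syntactic conventions (proc/2, pi/2, if/3, ?/1, lists for
% sequencing) and the action names senseClear/senseOnTable are our assumptions.
proc(allToTable(B),
     pi(x, [?(-member(x, B)),
            if(-kwhether(clear(x)), [senseClear(x)], []),
            if(knows(-clear(x)),
               [allToTable([x | B])],
               [if(-kwhether(ontable(x)), [senseOnTable(x)], []),
                if(knows(ontable(x)),
                   [allToTable([x | B])],
                   [moveToTable(x), allToTable([])])])])).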
While on the face of it this program seems perfectly intuitive, there are a number
of technical problems lurking behind the scenes, particularly what it would take to
implement it:
(1) Knowledge and lack of knowledge. The background axiomatization must
characterize the agent's knowledge about the initial situation, and also her lack
of knowledge. So, for example, her ignorance about which blocks are where can
be represented by
Representing lack of knowledge can be problematic when an agent has complex
knowledge about the initial situation:
Assuming that the above characterizes all that the agent knows, what does the
agent not know in this example? Whatever that might be, it must somehow
be axiomatized because it does represent a truth about the agent's knowledge.
This is a very commonly occurring problem. We begin with a collection K of
axioms about what an agent does know, and we want to make a closed-world
assumption about knowledge to the effect that K captures everything that the
agent knows; any knowledge sentences not following logically from K are taken
to be false. The problem here, of course, is to somehow capture this closed-world
assumption in a way that relieves the designer from having to figure out
the relevant lack of knowledge axioms when given what the agent does know.
(2) On-line execution. Because the program appeals to sense actions, it is designed
for on-line execution. This means that it must never backtrack over a
sense action; once such an action is performed, it cannot be undone. Knowledge-based
programs that invoke information-gathering actions must be carefully designed
to prevent execution traces that include sense actions but that eventually
lead to dead-ends. The above program has this property.
(3) Implementing sense actions. What should be done when such a program
encounters a sense action? The program is executing on-line, so that each action
in an execution trace is meant to be performed as it is generated during a
program run. Performing an ordinary action, like moveToTable(A), is unprob-
lematic; in a setting where the program is controlling a robot, the robot would
simply perform the action, and the action term would be added to the situation
history being constructed by the program interpreter. Performing a sense action
on-line is a different matter. Consider the robot receiving the on-line program
request sense_clear(A) when the current action log is S. It will respond with one
of "yes" or "no", depending on the outcome of its sense action. If "yes" then the
robot now knows that A is clear: Knows(clear(A), do(sense_clear(A), S)). If
"no" then Knows(¬clear(A), do(sense_clear(A), S)). Normally, neither of these
facts will be logical consequences of the underlying axioms; they provide new
information about the robot's world. Therefore, to provide an on-line implementation
of the sense action sense_clear(A), dynamically add one of
Knows(clear(A), do(sense_clear(A), S)),
Knows(¬clear(A), do(sense_clear(A), S)),
to the background axioms, depending on the sense action's actual outcome.
(4) A knowledge-based Golog interpreter. For knowledge-based programs
like allToTable above, we cannot depend on a conventional Golog interpreter
[Levesque et al. 1997], which relies on a closed initial database, and which,
in any case, would have to be augmented with the ability to reason about
knowledge.
For the above reasons, it is problematic to directly execute knowledge-based
programs like allToTable using a closed-world Golog interpreter. The stance we
shall take here is to view such programs as specifications for agent behaviours, and
seek an alternative representation for them that can be executed under standard
closed-world Golog.
2. FORMAL PRELIMINARIES
We rely on the description of the situation calculus of [Pirri and Reiter 1999], with
specific reference to the successor state axioms and action precondition axioms of
basic action theories, and we refer the reader to that paper for the details. We
rely also on the approach of [Scherl and Levesque 1993] for representing knowledge
and sensing actions in the situation calculus, and we assume that the reader is
familiar with their approach. We remind the reader that, following [Moore 1980;
1985], Scherl and Levesque introduce an accessibility relation K into the situation
calculus; the intended reading of K(s′, s) is that situation s′ is accessible from
situation s. With the accessibility relation K in hand, one then defines knowledge
as an abbreviation:
Knows(φ, s) ≜ (∀s′) [K(s′, s) ⊃ φ[s′]].
Here, φ is a situation-suppressed expression, and φ[s′] is that situation calculus
formula obtained from φ by restoring the situation argument s′ to all predicate and
function symbols of φ that take situation arguments.
2.1 Two Simplifying Assumptions
In the interest of simplifying the presentation of this paper, we shall make two
notationally undesirable but otherwise inessential assumptions about the underlying
language of the situation calculus:
(1) The language has no functional fluents, which are function symbols that take
situation arguments. Non-fluent function symbols are permitted. To represent
a functional fluent, e.g., numberOfBlocksOnTable(s), the axiomatizer should
use a relational fluent, e.g., numberOfBlocksOnTable(n, s), and should en-
force, via its truth value in the initial database and via its successor state
axiom, that n must always exist and be unique.
(2) Except for the equality predicate, < and Poss, the language has no non-flu-
ent predicate symbols. To represent such an "eternal" relation, for example,
isPrimeNumber(n), the axiomatizer is required to use a relational fluent, e.g.,
isPrimeNumber(n, s), and to assign it the successor state axiom
isPrimeNumber(n, do(a, s)) ≡ isPrimeNumber(n, s).
Moreover, any assertion about isPrimeNumber(n) in the initial database must
be made in terms of isPrimeNumber(n, S_0).
2.2 Basic Action Theories with Knowledge and Sensing
Based on Scherl and Levesque's proposal, we can define a basic action theory taking
the form
D = Σ ∪ D_ss ∪ D_ap ∪ D_una ∪ K_Init ∪ D_S0,
where,
(1) Σ consists of the following foundational axioms for the situation calculus with
knowledge:
Uniqueness of Situations
Subhistories
:s
Induction
Here, we have introduced the abbreviation Init(s) ≜ ¬(∃a, s′) s = do(a, s′), meaning that s is an initial situation.
Any model of these axioms will consist of a forest of isomorphic trees, one
rooted at S 0 , the others rooted at the other initial situations in the model. All
these roots can serve in specifying a K relation over initial situations.
Accessible Initial Situations
(2) D ss is a set of successor state axioms including the following axiom for the
accessibility relation K:
K(s″, do(a, s)) ≡ (∃s′){s″ = do(a, s′) ∧ K(s′, s) ∧ Poss(a, s′) ∧
  [a = sense_1(~x_1) ⊃ (φ_1(~x_1, s′) ≡ φ_1(~x_1, s))] ∧ ⋯ ∧
  [a = sense_n(~x_n) ⊃ (φ_n(~x_n, s′) ≡ φ_n(~x_n, s))]}.    (2)
Here, each sense action sense_i(~x_i), 1 ≤ i ≤ n, is associated with a condition
φ_i(~x_i, s) whose truth value in situation s the action is designed to de-
termine. Scherl and Levesque also treat read actions, whose purpose is to
determine the denotations of functional fluents, but since we are assuming no
functional fluents in our situation calculus language, we do not consider these.
It remains only to pin down the permissible syntactic forms that φ(~x) may take
in sense actions sense_φ(~x).
Definition 2.1. (Objective situation-suppressed expressions).
(a) If F(~t, σ) is a relational fluent atom, then F(~t) is an objective situation-
suppressed expression.
(b) If t_1 and t_2 are terms not of sort situation, then t_1 = t_2 is an objective
situation-suppressed expression.
(c) If φ and ψ are objective situation-suppressed expressions, so are ¬φ, φ ∨ ψ,
and (∃v) φ, where v is not a situation variable.
Objective expressions are statements only about the world, not the agent's
mental state; they do not involve expressions of the form Knows(φ). An
objective situation-suppressed sentence is an objective situation-suppressed expression
with no free variables. In what follows, we shall simply say "φ is
objective" in place of the long-winded "φ is an objective situation-suppressed
sentence."
We require that every sense action sense_φ(~x) be such that φ(~x) is an objective
situation-suppressed expression. So the idea is that an agent can sense objective
sentences (truths about the external world) but not, reasonably enough,
truths about his own knowledge.
The No Side-Effects Assumption for Sense Actions. For fluents other
than K, we suppose that they are provided with successor state axioms in the
usual way. But in the presence of sense actions, there is always the possibility
that such actions can affect these fluents, as in, for example
closeEyes:
However, we will not allow knowledge-producing actions to have such side-
effects on ordinary fluents; in the formal story we develop here, such actions
are only permitted to affect the K fluent. In other words, for each sense action
sense_φ, and each relational fluent R, we require that R's successor state axiom,
together with the other background axioms, will entail
R(~x, do(sense_φ(~y), s)) ≡ R(~x, s).
This no side-effects condition is needed to obtain the above successor state
axiom for K. It also guarantees the intuitively necessary property that by
virtue of performing a knowledge-producing action, an agent will come to know
the outcome of that action. We shall make extensive use of this assumption in
what follows.
One might argue that the no side-effects assumption is unreasonable, that sense
actions often produce a change in the state of ordinary fluents, as in the above
senseForObstacle example. The counter-argument is that indeed certain pre-conditions
(e.g., eyesOpen) may be necessary for sense actions to occur, but
then separate actions, not sense actions, should be provided by the axioms to
achieve these preconditions (e.g., openEyes). Then to perform a sense action,
one must first perform the appropriate state-changing actions to establish that
sense action's preconditions. This is the perspective we adopt here.
(3) D ap is a set of action precondition axioms.
(4) D_una is the set of unique-names axioms for actions.
(5) K Init is any set of initial accessibility axioms specifying the K relation in the
initial situation; these must have the property that, by virtue of the successor
state axiom for K, they will be true in all situations. In particular, using induc-
tion, this property can be shown to hold for the following standard accessibility
relations in initial situations.
(a) Reflexive in initial situations:
(b) Symmetric in initial situations:
(c) Transitive in initial situations:
(d) Euclidean in initial situations:
For example, with reference to the symmetry property, the following is a consequence
of the foundational axioms and the [Scherl and Levesque 1993] solution
to the frame problem for K:
(6) DS0 is a set of first-order sentences describing the initial state of the world
being axiomatized. To pin these down, we need a couple of definitions.
Definition 2.2. (Formulas about σ). Let σ be a term of sort situation. The
formulas about σ are defined inductively:
(a) A relational fluent atom F(~t, σ) is a formula about σ.
(b) If t_1 and t_2 are terms not of sort situation, then t_1 = t_2 is a formula about
σ.
(c) If φ is an admissible situation-suppressed expression (defined below), then
Knows(φ, σ) is a formula about σ.
(d) If φ and ψ are formulas about σ, so are ¬φ, φ ∨ ψ, and (∃v) φ, where v is
not a situation variable.
Definition 2.3. (Admissible situation-suppressed expressions). These are
inductively defined as follows:
(a) If F(~t, σ) is a relational fluent atom, then F(~t) is an admissible situation-
suppressed expression.
(b) If t_1 and t_2 are terms not of sort situation, then t_1 = t_2 is an admissible
situation-suppressed expression.
(c) If φ and ψ are admissible situation-suppressed expressions, so are ¬φ, φ ∨ ψ,
Knows(φ), and (∃v) φ, where v is not a situation variable.
Compare this with Definition 2.1 of the objective situation-suppressed expres-
sions. The objective expressions are always admissible, but unlike the former,
the latter are permitted to express properties of the agent's mental state, i.e.,
they may mention expressions of the form Knows(φ).
We can now define DS0 to be any set of sentences about S_0.
Example 2.1. Here, we give the action precondition, and successor state axioms
for the blocks world that underlies the allToTable knowledge-based program.
Action Precondition Axioms
Successor State Axioms
clear(x, do(a, s))
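For orientation, one standard formulation of precondition and successor state axioms for such a move-to-table blocks world is sketched below. It assumes that moveToTable is the only ordinary action and that the two sense actions are always possible; it is a reconstruction, not necessarily the exact Example 2.1.

% A sketch under the stated assumptions; not necessarily the paper's exact axioms.
\begin{align*}
&Poss(moveToTable(x), s) \equiv clear(x,s) \wedge \neg ontable(x,s)\\
&Poss(sense_{clear}(x), s) \equiv true \qquad Poss(sense_{ontable}(x), s) \equiv true\\
&clear(x, do(a,s)) \equiv (\exists y)\,[on(y,x,s) \wedge a = moveToTable(y)] \vee clear(x,s)\\
&ontable(x, do(a,s)) \equiv a = moveToTable(x) \vee ontable(x,s)\\
&on(x, y, do(a,s)) \equiv on(x,y,s) \wedge a \neq moveToTable(x)
\end{align*}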
2.3 Satisfiability of Basic Action Theories with Knowledge
The main result about a basic action theory D with knowledge is the following:
Theorem 2.1. (Relative Satisfiability).
Suppose K_Init consists of any subset of the accessibility relations
Reflexive, Symmetric, Transitive, Euclidean. Then D is satisfiable
iff D_S0 ∪ D_una ∪ K_Init is satisfiable.
Proof. Very much like the proof of relative satisfiability for basic action theories
without knowledge of [Pirri and Reiter 1999]. Essentially, one shows how a model
of D_S0 ∪ D_una ∪ K_Init can be extended to a model of D. We omit the lengthy
details.
3. REDUCING KNOWLEDGE TO PROVABILITY FOR THE INITIAL SITUATION
The starting point for our implementation of knowledge-based programs is the observation
that, in a certain important special case, knowledge is reducible to provabil-
ity. Here, we describe what we mean by this, and we give suitable conditions under
which it will be true. Specifically, we shall be concerned with characterizing entailments
of the form D terms of provability of knowledge-free
sentences.
First we need a simple consequence of the Relative Satisability Theorem 2.1:
Theorem 3.1. Suppose that Knows(;S 0 ) is a sentence about S 0 . Then,
consists of any subset of the accessibility relations
Re
exive, Symmetric, Transitive, Euclidean, then
Proof. D
that because :Knows(;S 0 ) is a syntactically legal sentence to include in an initial
database, then D[f:Knows(;S 0 )g is a basic action theory with knowledge whose
initial database is DS0 [ f:Knows(;S 0 )g. Now use Theorem 2.1.
Next, we introduce a special class of initial databases. Suppose the sentences of
DS0 are all of the form Knows( In
other words, the initial database consists exclusively of sentences declaring what
the agent knows about the world he inhabits, but there are no sentences declaring
what is actually true of the world, or what he knows about what he knows. So for
objective and since this
is logically equivalent to Knows( simply suppose that DS0
consists of a single sentence of the form Knows(;S 0 ); where is objective, and
that is what we shall do from here on.
Lemma 3.1. Suppose that is objective, that
objective, and that K init consists of any subset of the accessibility relations Re
exive,
Symmetric, Transitive, Euclidean. Then,
Proof. The ( direction follows from the fact that, because the axioms of D una
are situation independent, they are known in S 0 , and the fact that all logical consequences
of what is known are also known.
()). By Theorem 3.1, it is su-cient to prove:
If
By hypothesis, K init [
Then it must continue to be so with K(s; s 0 ) taken to be . With this choice
for K, all sentences of K init become tautologies, Knows(;S 0 ) simplies to [S 0 ],
and :Knows(;S 0 ) simplies to :[S 0 ]. Therefore, D una [ f[S 0 ]; :[S 0 ]g is un-
satisable, and therefore, D una [ f[S 0
Therefore, for the initial situation, we have reduced the entailment problem for
knowledge sentences to that of knowledge-free sentences. This result relies on the
stated assumptions that:
(1) DS0 consists of a sentence of the form Knows(;S 0 ), where is objective.
Therefore, the following would
not:
(2) The sentence to be proved has the form Knows(;S 0 ), where is objective.
Lemma 3.1 gives us a provability story for entailments from D of the form
What about entailments of the form :Knows(;S 0 )? We defer
treating negative knowledge until Section 5 below, where we shall introduce the
closed-world assumption on knowledge, whose eect will be that entailments of
negative knowledge will reduce to non provability of knowledge-free sentences.
4. ON-LINE EXECUTION OF KNOWLEDGE-BASED PROGRAMS
As discussed earlier, we have in mind executing knowledge-based programs like
allToTable on-line. This means that each time a program interpreter adds a new
program action to its action history, the robot also physically performs this action.
Some of these actions will be sense actions; since these normally increase the robot's
knowledge of its world, this means that its axioms must be augmented by knowledge
about the outcomes of its on-line sense actions. 1 To capture this idea formally, we
need some notation for describing this incrementally growing set of axioms. Initially,
before it has performed any actions on-line, the robot's background consists of a
basic action theory D, as defined in Section 2.2. Suppose that σ is the current
situation recording all the actions performed on-line by the robot. We can suppose
that σ mentions no variables, since it makes no sense to perform a non-ground action
on-line. We want to define the result of augmenting D with knowledge about the
outcomes of all the sense actions occurring in σ.
Definition 4.1. (Sense Outcome Function). A sense outcome function is any
mapping
from ground situation terms to sets of knowledge sentences, such that:
(1)
(2) If is not a sense action,
1 In this respect, our work has much in common with prior approaches to on-line sensing, for
example, [Pirri and Finzi 1999; Lakemeyer 1999].
(3) If is a sense action sense (~g);
do(sense (~g); ))g or
do(sense (~g); ))g:
In general, we shall be interested in D [
namely, the original basic action
theory D, augmented by knowledge about the outcomes of all sense actions in
the action history , according to the sense outcome
function
To analyze the
properties of this dynamically growing theory, we need the concept of regression.
4.1 Regression
Regression [Waldinger 1977; Pednault 1994; Pirri and Reiter 1999] is the principal
mechanism in the situation calculus for answering queries about hypothetical fu-
tures. The intuition underlying regression is this: Suppose we want to prove that
a sentence W is entailed by some basic action theory. Suppose further that W
mentions a relational fluent atom F(~t, do(α, σ)), where F's successor state axiom
is F(~x, do(a, s)) ≡ Φ_F(~x, a, s). Then we can easily determine a logically equivalent
sentence W′ by substituting Φ_F(~t, α, σ) for F(~t, do(α, σ)) in W. After doing so, the
fluent atom F(~t, do(α, σ)), involving the complex situation term do(α, σ), has been
eliminated from W in favour of Φ_F(~t, α, σ), and this involves the simpler situation
term σ. In this sense, W′ is "closer" to the initial situation S_0 than was W. More-
over, this operation can be repeated until the resulting goal formula mentions only
the situation term S_0, after which, intuitively, it should be sufficient to establish
this resulting goal using only the sentences of the initial database. Regression is
a mechanism that repeatedly performs the above reduction starting with a goal
W, ultimately obtaining a logically equivalent goal W′ whose only situation term
is S_0. In [Pirri and Reiter 1999], the soundness and completeness of regression is
proved for basic action theories without knowledge and sensing actions. [Scherl and
Levesque 1993] defines regression for formulas involving knowledge, but we shall not
need that notion in this paper; our definition will be for knowledge-free formulas,
and is really a simpler version of that in [Pirri and Reiter 1999].
Definition 4.2. (The Regressable Formulas). A formula W is regressable wrt a
situation term σ iff:
(1) W is a fluent atom F(~t, σ), or an equality atom t_1 = t_2 where t_1 and t_2 are
not situation terms, or
(2) W has the form ¬W_1 or W_1 ∨ W_2 or (∃x)W_1 where x is not a situation variable,
and W_1 and W_2 are regressable wrt σ.
W is regressable iff it is regressable wrt σ for some ground situation term σ. 2
We can now dene the regression operator for regressable formulas.
Definition 4.3. (The Regression Operator). The regression operator R when
applied to a regressable formula W is determined relative to a basic theory of actions
2 This definition is less general than the definition of the regressable formulas given in [Pirri and
Reiter 1999]; it has the virtue of being simpler, and it will be sufficient for the purposes of this
paper.
that serves as a background axiomatization. In what follows, ~t is a tuple of terms,
α is a ground term of sort action, and σ is a ground term of sort situation.
(1) Suppose W is an atom. Since W is regressable, we have two possibilities:
(a) W is an equality atom between non-situation terms, or W is a fluent atom
of the form F(~t, S_0). In these cases, R[W] = W.
(b) W is a relational fluent atom of the form F(~t, do(α, σ)). Let F's successor
state axiom in D_ss be
F(~x, do(a, s)) ≡ Φ_F(~x, a, s).
Without loss of generality, assume that all quantifiers (if any) of Φ_F(~x, a, s)
have had their quantified variables renamed to be distinct from the free
variables (if any) of F(~t, do(α, σ)). Then
R[W] = R[Φ_F(~t, α, σ)].
In other words, replace the atom F(~t, do(α, σ)) by a suitable instance of the
right-hand side of the equivalence in F's successor state axiom, and regress
this formula. The above renaming of quantified variables of Φ_F(~x, a, s)
prevents any of these quantifiers from capturing variables in the instance
Φ_F(~t, α, σ).
(2) For the remaining cases, regression is defined inductively.
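To make case (1b) concrete, here is a small Prolog sketch of regression, specialized to a blocks world whose only ordinary action is moveToTable. The formula representation (and/2, or/2, exists/2, -, eq/2, with atoms as placeholder variables) and the three successor state axioms encoded in the clauses are our own assumptions for illustration; they are not taken verbatim from the paper.

% A sketch of the regression operator R for a blocks world whose only ordinary
% action is moveToTable(X).  Formulas are terms built from fluent atoms,
% eq(T1,T2), -F, and(F1,F2), or(F1,F2) and exists(V,F); quantified variables are
% placeholder atoms.  The encoded successor state axioms are assumptions.
regress(and(F1, F2), and(R1, R2)) :- !, regress(F1, R1), regress(F2, R2).
regress(or(F1, F2),  or(R1, R2))  :- !, regress(F1, R1), regress(F2, R2).
regress(-F, -R)                   :- !, regress(F, R).
regress(exists(V, F), exists(V, R)) :- !, regress(F, R).
regress(ontable(X, do(A, S)), R) :- !,
    regress(or(eq(A, moveToTable(X)), ontable(X, S)), R).
regress(clear(X, do(A, S)), R) :- !,
    regress(or(exists(y, and(on(y, X, S), eq(A, moveToTable(y)))), clear(X, S)), R).
regress(on(X, Y, do(A, S)), R) :- !,
    regress(and(on(X, Y, S), -eq(A, moveToTable(X))), R).
regress(F, F).   % atoms about S0 and equality atoms regress to themselves

For example, the query regress(clear(b, do(moveToTable(c), s0)), R) binds R to or(exists(y, and(on(y, b, s0), eq(moveToTable(c), moveToTable(y)))), clear(b, s0)), a formula whose only situation term is s0.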
We shall also need to regress certain kinds of situation-suppressed expressions.
Definition 4.4. (Regression for situation-suppressed expressions). Recall that
when is a situation-suppressed expression, and a situation term, then [] is
that situation calculus formula obtained from by restoring the situation argument
to every predicate of that takes a situation argument. Let be an objective
situation-suppressed expression. Therefore, is the result of suppressing the situation
argument in some regressable formula of the situation calculus. Introduce a
\one-step" regression operator, R 1 (; ), whose role is to perform a single regression
step for through the action as follows: R 1 (; ) is that situation-suppressed
expression obtained from by rst replacing all
uents in [do(; s)] with situation
argument do(; s) by the corresponding right-hand sides of their successor state
axioms (renaming quantied variables if necessary), and next, suppressing the situation
arguments in the resulting formula. Clearly, R 1 (; )[s] is logically equivalent
to [do(; s)] relative to the successor state axioms.
Finally, we dene the \multi step" regression operator on situation-suppressed
expressions. Let be a ground situation term. R(; ) is the regression of the
situation-suppressed expression through the actions of , dened by
We are here overloading the R notation. The one-argument version, R[W ] regresses
regressable formulas W of the situation calculus (to obtain a formula about S 0 ); the
two-argument version R(; ) regresses situation-suppressed expressions through
the actions of . Clearly, R(; )[S 0 ] and R[[]] are logically equivalent, relative
to the background basic action theory.
4.2 Reducing Knowledge To Provability for On-Line Programs
Here, we focus on conditions under which the theory D [
consisting of the
original theory D augmented by the outcomes of all sense actions in , entails
sentences of the form Knows(;):
Lemma 4.1. Let (~x) be an objective, situation-suppressed expression. Then:
is not a sense action,
do(sense (~y); s))
Proof. First use the successor state axiom (2) for K. For 1, use the fact that,
when (~x) is an objective, situation-suppressed expression,
For 2, use the fact that, when (~x) is an objective, situation-suppressed expression,
do(sense (~y); s)) (~x; s)
by the no side-eects assumption for sense actions.
Corollary 4.1.
Proof. Take to be in item 2 of Lemma 4.1.
Corollary 4.2. When K Init includes the re
exivity axiom,
Proof. Use Corollary 4.1, and the fact that when re
exivity holds in the initial
situation, it holds in all situations, so that what is known in s is true in s.
Lemma 4.2. When K Init includes the re
exivity axiom,
do(sense (~y); s))
do(sense (~y); s)) Knows( (~y) (~x); s):
Proof. By item 2 of Lemma 4.1 and Corollary 4.2.
We shall need the following bit of notation:
Definition
4.5.
R ():
Suppose
is a sense outcome function, and is a
ground situation term.
3 The notation occurring in a sentence means that two sentences are being expressed: one in
which is uniformly replaced by in the sentence, the other in which it is replaced by : .
)g:
So, for example, if
then
Notice
that
R () is a situation-suppressed sentence. By
convention,
when
Lemma 4.3. If K Init includes the re
exivity axiom,
then
and
are logically equivalent, 4 relative to D. Recall that the
notation
stands
for the result of restoring situation argument S 0 back into the situation-suppressed
sentence
R ():
Proof. Corollary 4.2.
Lemma 4.4. Suppose is objective, is a ground situation term, and K Init
includes the re
exivity axiom. Then
Knows(
Proof. Induction on the number of actions in . When this is 0, is S 0 , and
the result is immediate. For the inductive step, there are two cases:
Case 1. Suppose is not a sense action. Then by item 1 of Lemma 4.1,
do(;
By induction hypothesis,
Knows(
The result now follows from the fact that when is not a sense
action,
R (do(; )) and the fact that R(R 1 (;
Case 2. is a sense action, say sense (~g). Without loss of generality, assume that
a sense outcome is (~g), so
by Lemma 4.2,
do(;
By induction hypothesis,
Knows(
The result now follows
because
R () R( (~g) ; ) is the same
as
which is the same
as
R (do(;
Definition 4.6. (Deciding equality sentences). Suppose T is any situation calculus
theory. We say that T decides all equality sentences i for any sentence
over the language of T whose only predicate symbol is equality, T
4 We are slightly abusing terminology here; strictly speaking, we should say that the conjunction
of the sentences in
logically equivalent
to
R ()[S0 ].
The following is a purely technical lemma that we shall nd very useful in establishing
our principal results.
Lemma 4.5. Suppose that is objective, that D una [
decides all equality sentences, and K init consists of any subset of the accessibility
relations Re
exive, Symmetric, Transitive, Euclidean. Suppose further that
are objective, and that
Then for some 0 i n; D
Proof. This takes a bit of proof theory. By hypothesis,
is unsatisable. Therefore, by item 2 of Theorem 2.1,
is unsatisable. Therefore, after expanding the Knows notation, we have that
is unsatisable. Therefore, after skolemizing the existentials, we get that
is unsatisable. It remains unsatisable if we take K(s; s 0 ) to be the complete graph
on
With this choice for K, all sentences of K init become tautologies, and we have that
must be unsatisable. Let
be a minimal subset of
such that
is unsatisable. If unsatisable, so by
Lemma 3.1, the result is immediate. Therefore, we can suppose that m 1. By
the above minimal subset property,
is satisable. By the Craig Interpolation Theorem, there exists a sentence I in the
intersection language of (3) and D una [f[m such that (3) [f:Ig and
are both unsatisable. But the intersection language
consists of equality sentences only, 5 so I is such a sentence, and since D una [f[S 0
decides all such sentences, either D una [ f[S 0 I or D una [ f[S 0
latter case is impossible by the satisability of (3). We earlier concluded that
is unsatisable. It remains unsatisable when S 0 is
uniformly substituted for m , so that D una [
This, together with the fact that D una [ f[S 0 I implies the unsatisability of
]g. Therefore, by Lemma 3.1, D
Something like the assumption that D una [f[S 0 ]g decides all equality sentences
in the above lemma seems necessary. To see why, consider:
Example 4.1. Let F and G be unary
uents, and let be the conjunction of
the following three sentences:
G:
Then it is easy to see that
but
Theorem 4.1. Suppose that is objective,
objective, decides all equality sentences, is a ground situation
term, and K Init includes the re
exivity axiom. Then
Proof. The (() direction follows from Lemmas 3.1 and 4.4.
For the ()) direction, suppose D [
Knows(
By Lemma 4.3,
f
Knows(
Therefore,
Knows(
By Lemma 4.5,
Knows(:
R
Knows(
If the latter is the case, we are done. Suppose the former. Then certainly,
Knows(:
i.e., D
Knows(
The result now follows from Lemma
3.1.
This is the central result of this section; it completely characterizes entailments of
the form Knows(;) relative to D [
in terms of provability, in the initial
5 Recall that we assume the underlying language of the situation calculus has no non
uent predicate
symbols and no functional
uents (Section 2.1).
situation, for knowledge-free sentences. What about entailments of sentences of the
form :Knows(;)? That is the topic of the next section.
5. THE DYNAMIC CLOSED-WORLD ASSUMPTION
We now consider the problem, discussed in item 1 of Section 1, of characterizing an
agent's lack of knowledge, and we begin first by considering knowledge in the initial
situation. We shall make the closed-world assumption
on knowledge, namely, that Knows(φ, S_0) characterizes everything that the agent
knows initially, and whatever knowledge does not follow from this will be taken
to be lack of knowledge. How can we characterize this closed-world assumption in
a way that relieves the axiomatizer from having to figure out the relevant lack of
knowledge axioms when given what the agent does know? We propose the following:
closure(D) ≜ D ∪ {¬Knows(φ, S_0) : φ is objective and D ⊭ Knows(φ, S_0)}.
Under the closed-world assumption on knowledge, the official basic action theory
is taken to be closure(D). To make her life easier in specifying the initial database
of a basic action theory, we ask the axiomatizer only to provide a specification of
the positive initial knowledge DS0 that the robot has of its domain, but this is
understood to be a convenient shorthand for what holds initially, and closure(D),
as defined above, specifies the actual basic action theory.
Example 5.1. Here, we specify the positive initial knowledge available to the
agent inhabiting the blocks world of Example 2.1. While she knows nothing about
which blocks are where, this does not mean she knows nothing at all. There are state
constraints associated with this world, and we must suppose the agent knows these.
Specifically, she must know that these constraints hold initially, and therefore, her
initial database consists of the following:
By making the closed-world assumption about this initial database, the axiomatizer
need not concern herself with writing additional lack of knowledge axioms; nor
does she have to worry about whether she has succeeded in expressing all the
relevant lack of knowledge axioms. The closed-world assumption takes care of
these problems for her.
However, as noted above, the axioms of D are being continuously augmented by
sentences asserting knowledge about the outcomes of the sense actions that have
been performed during the on-line execution of a program, and D [
species
what these axioms are, when the program is currently in situation . Therefore,
under the closed-world assumption in situation , we are supposing that D [
characterizes all of the agent's positive knowledge about the initial situation, so we
are actually interested in the closure of DS0 relative to D [
here we are
actually making a dynamic closed-world assumption.
Definition 5.1. Dynamic Closed-World Assumption on Knowledge.
closure(D [
objective and D [
Under the dynamic closed-world assumption on knowledge, the official basic action
theory, when the on-line execution of a program is in situation , is taken to be
closure(D [
)).
This closed-world assumption on knowledge is a metalevel account of a special
case of Levesque's logic of only knowing [Levesque 1990]. His results have been
considerably extended (to include an account of knowledge within the situation
calculus) in [Levesque and Lakemeyer 2001; Lakemeyer and Levesque 1998].
Next, we need to study the properties of closure(D [
)), with the ultimate
objective of reducing negative knowledge to non provability.
Lemma 5.1. Suppose that
decides all equality sentences, and K Init includes the re
exivity axiom, together
with any subset of the accessibility relations Symmetric, Transitive, Eu-
clidean. Then
closure(D [
Proof.
R ()[S 0 ]g is unsatisable. Then by Lemma 3.1,
Knows(:
R re
exivity, D
R ()[S 0 ]; and therefore, D [
f
R ()[S 0 ]g is unsatisable. Because, by Lemma
4.3,
and
are
logically equivalent (relative to D), D [
is unsatisable, and therefore, so is
closure(D [
is unsatisable. Therefore,
objective and D [
is unsatisable. Therefore, by Lemma 4.3,
f
objective and D [
is unsatisable. By item 2 of Theorem 2.1,
objective and D [
is unsatisable. By compactness, there is a nite, possibly empty subset of
objective and D [
say
such that
is unsatisable. Therefore,
f
is unsatisable, so
By Lemma 4.5, D
Knows(:
R
latter case is impossible, because for
fore, D
Knows(:
R
unsatisable.
Lemma 5.2. Suppose
decides all equality sentences, is a ground situation term, and K Init includes the
re
exivity axiom. Suppose further that closure(D [
closure(D [
Proof. The (() direction follows from Theorem 4.1.
the result is immediate by Theorem 4.1. If
Knows(
R ()
R(;
Lemma 4.4, closure(D [
contradicting the hypothesis
that closure(D [
Definition 5.2. (Subjective Sentences). We say a sentence is a subjective sentence
about a ground situation term i it has the form Knows(;), where is
objective, or it has the form :W , where W is a subjective sentence about , or it
has the form W 1 _ W 2 , where W 1 and W 2 are subjective sentences about .
Lemma 5.3. Suppose K Init includes the re
exivity axiom. Then for any subjective
sentence W about a ground situation term ,
closure(D [
Proof. Induction on the syntactic form of W , using Lemma 4.4 to help prove
the base case.
We can now combine Lemmas 5.1, 5.2, and 5.3 to obtain our main result:
Theorem 5.1.
Let
be a sense outcome function. Suppose that
decides all equality sentences.
(3 ) is a ground situation term.
(4
(5 ) K Init consists of the re
exivity axiom, together with any subset of the accessibility
axioms Symmetric, Transitive, Euclidean.
Then,
closure(D [
is a subjective sentence about ,
closure(D [
(3 ) When W 1 and W 2 are subjective sentences about ,
closure(D [
closure(D [
6. AN INTERPRETER FOR KNOWLEDGE-BASED PROGRAMS WITH SENSING
Under the stated conditions, Theorem 5.1 justifies the following decisions in implementing
an interpreter for an on-line knowledge-based program.
(1) If D_S0 is Knows(φ, S_0), the implementation uses φ[S_0] as its initial database.
(2) Whenever a sense_φ(~g) action is performed by the program in a situation σ, the
implementation adds the regression of φ(~g, σ) or of ¬φ(~g, σ) to the current initial
database, depending on the outcome of the sense action.
(3) Suppose a test condition W is evaluated by the program in a situation σ. Using
items 2 and 3 of Theorem 5.1, the implementation recursively breaks down
W[σ] into appropriate subgoals of proving, or failing to prove, sentences of the
form Knows(φ, σ). By item 1, these reduce to proving, or failing to prove,
the regression of φ[σ] relative to the current initial database. So for these base
cases, the implementation performs this regression step, then invokes a theorem
prover on the regressed sentence, using the current initial database (plus unique
names for actions) as premises. Notice the assumption here, required by the
theorem, that every test condition W of the program will be such that, at
evaluation time, W[σ] will be a subjective sentence about σ. (A Prolog sketch of
this reduction is given immediately after this list.)
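Under this reduction, the base cases amount to two ideas: knowledge is provability of the regressed, situation-restored formula, and, under the dynamic closed-world assumption, lack of knowledge is non-provability, which Prolog supplies as negation as failure. The first clause below is the one that appears in the interpreter of Section 8; the other two are our own illustrative additions, assuming the interpreter's restoreSitArgThroughout/3 and prove/1.

% Knowledge as provability; lack of knowledge as negation as failure.  The first
% clause is from the Section 8 interpreter; the last two are illustrative additions.
holds(knows(W), S)    :- restoreSitArgThroughout(W, S, F), prove(F).
holds(-knows(W), S)   :- \+ holds(knows(W), S).
holds(kwhether(W), S) :- holds(knows(W), S) ; holds(knows(-W), S).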
Guarded Sense Actions. Condition 4 of Theorem 5.1 requires that D_una, together
with φ[S_0] and the regressed outcomes of the sense actions in σ, be satisfiable. Therefore, an implementation must perform
this satisfiability test, and it must do so after each sense action. However, there
is one natural condition on a knowledge-based program that would eliminate
the need for such a test, and that is that every sensing action in the program
be guarded. By this, we mean that a sense action is performed only when
its outcome is not already known to the robot. In the allToTable program,
the statement if ¬KWhether(clear(x)) then sense_clear(x) endIf provides
such a guard for the sense_clear action. Whenever the program guards all its
sense actions, as allToTable does, then condition 4 of Theorem 5.1 reduces to
requiring the satisfiability of D_una ∪ {φ[S_0]}, and this can be performed once
only, when the initial database is first specified. This is the content of the
following proposition.
Proposition 6.1. Assume the conditions of Theorem 5.1. Assume further
that sense (~g) is a ground sense action, and that
closure(D [
Then
R (do(sense (~g); ))[S 0 ]g is satisable.
Proof. Without loss of generality, assume
that
a sense outcome is (~g),
so
do(sense
Therefore, by Lemma 4.4,
Knows(
Hence, by Lemma 3.1,
f
and therefore, D una [ f[S 0
must be satis-
able. The result now follows because :R(: (~g); )[S 0 ] is the same thing as
because
is
R (do(sense (~g); )):
7. COMPUTING CLOSED-WORLD KNOWLEDGE
The reduction of knowledge to provability under the closed-world assumption on
knowledge makes little computational sense for full first-order logic, because its
provability relation is not computable. Therefore, in what follows, we shall restrict
ourselves to the quantifier-free case. Specifically, we shall assume:
(1) The only function symbols not of sort action or situation are constants.
(2) DS0 includes knowledge of unique names axioms for these constants:
(3) All quantifiers mentioned in DS0, and all quantifiers mentioned in test expressions
of a knowledge-based program are typed, and these types are abbreviations
for descriptions of finite domains of constants:
where there will be one such abbreviation for each type τ.
Therefore, typed quantifiers can be eliminated in formulas in favour of conjunctions
and disjunctions, so we end up with sentences of propositional logic, for which the
provability relation is computable. Because the agent has knowledge of unique
names, D_una ∪ DS0 will decide all typed equality sentences. Therefore, the conditions
of Theorem 5.1 will hold.
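A minimal sketch of this quantifier-elimination step follows: typed quantifiers ranging over a declared finite domain are replaced by explicit conjunctions and disjunctions, after which a propositional prover suffices. Everything here (the formula representation all/3, some/3, and/2, or/2, the domain/2 declaration, and the helper names) is our own illustration, not the paper's code.

% Sketch: eliminate typed quantifiers over finite, non-empty domains of constants.
% Quantified variables are represented by placeholder atoms, e.g. all(x, block, clear(x)).
domain(block, [a, b, c, d]).      % example type declaration

expand(all(X, Type, F), G)  :- !, domain(Type, D), instances(X, D, F, Fs), conjoin(Fs, G).
expand(some(X, Type, F), G) :- !, domain(Type, D), instances(X, D, F, Fs), disjoin(Fs, G).
expand(and(F1, F2), and(G1, G2)) :- !, expand(F1, G1), expand(F2, G2).
expand(or(F1, F2),  or(G1, G2))  :- !, expand(F1, G1), expand(F2, G2).
expand(-F, -G) :- !, expand(F, G).
expand(F, F).

instances(_, [], _, []).
instances(X, [V | Vs], F, [G | Gs]) :-
    subst(X, V, F, F1), expand(F1, G), instances(X, Vs, F, Gs).

subst(X, V, X, V) :- !.
subst(_, _, T, T) :- atomic(T), !.
subst(X, V, T, T1) :- T =.. [Fn | Args], subst_args(X, V, Args, Args1), T1 =.. [Fn | Args1].
subst_args(_, _, [], []).
subst_args(X, V, [A | As], [B | Bs]) :- subst(X, V, A, B), subst_args(X, V, As, Bs).

conjoin([F], F) :- !.
conjoin([F | Fs], and(F, G)) :- conjoin(Fs, G).
disjoin([F], F) :- !.
disjoin([F | Fs], or(F, G)) :- disjoin(Fs, G).

For instance, expand(all(x, block, or(clear(x), -clear(x))), G) binds G to a four-way conjunction of the instantiated disjunctions for blocks a, b, c and d.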
8. PUTTING IT ALL TOGETHER
We have implemented an on-line Golog interpreter for knowledge-based programs
based on Theorem 5.1. It assumes that all sense actions in the program are guarded,
and it therefore does not perform the consistency check required by condition 4 of
Theorem 5.1. Here, we include the principal new features that need to be added to
a standard Golog interpreter [Reiter 2001] to obtain this knowledge-based program
interpreter.
A Golog Interpreter for Knowledge-Based Programs with Sense Actions Using
Provability to Implement Knowledge
% The clauses for do remain as for standard Golog, except that an extra clause is
% added to treat sense actions by interactively asking the user for the outcome of
% the action, and updating the initial database with the regression of this outcome.
% This clause for sense actions appeals to a user-provided declaration
% senseAction(A,SensedOutcome), meaning that A is a sense action, and SensedOutcome
% is the formula whose truth value the action A is designed to determine.
% restoreSitArgThroughout(W,S,F) means F is the result of restoring the situation
% argument S into every fluent mentioned by the formula W.

% The clause head and the branching on the user's reply below are reconstructed
% from the surrounding description; only the calls to queryUser,
% restoreSitArgThroughout, regress and updateInitDatabase come from the original.
do(A,S,do(A,S)) :- senseAction(A,SensedOutcome),
   queryUser(SensedOutcome,YN), restoreSitArgThroughout(SensedOutcome,S,Outcome),
   (YN = y -> regress(Outcome,R) ; regress(-Outcome,R)),
   updateInitDatabase(R).

queryUser(SensedOutcome,YN) :- nl, write("Is "), write(SensedOutcome),
   write(" true now? y or n."), read(YN).

% Add the following clauses to those for holds in the standard Golog interpreter.
% Implementing knowledge with provability.
holds(knows(W),S) :- restoreSitArgThroughout(W,S,F), prove(F).
In the above interpreter, two Prolog predicates were left unspecified:
(1) The theorem prover prove. Any complete propositional prover will do. We
use one that supposes the current initial database is a set of prime implicates.
There is no significance to this choice; we simply happen to have had available
a prime implicate generating program. prove(F) first regresses F, converts the
result to clausal form, then tests these clauses for subsumption against the
initial database of prime implicates.
(2) updateInitDatabase(R), whose purpose is to add the sentence R, which is the regression
of the outcome of a sense action, to the initial database. Because in our
implementation this is a database of prime implicates, updateInitDatabase(R)
converts R to clausal form, adds these to the initial database, and recomputes
the prime implicates of the resulting database.
Here is an execution of the allToTable program, with a four block domain, using
this prover and the above Golog interpreter. 6
Running the Program for Four Blocks
[eclipse 2]: compile. % Compile the initial database to prime implicate form.
Clausal form completed. CPU time (sec): 0.06 Clauses: 34
Database compiled. CPU time (sec): 0.04 Prime implicates: 34
yes.
[eclipse 3]: run.
Is clear(a) true now? y or n. y.
Is ontable(a) true now? y or n. y.
Is clear(b) true now? y or n. n.
Is clear(c) true now? y or n. y.
Is ontable(c) true now? y or n. n.
6 All code needed to run this blocks world example is available, on request, from the author.
Performing moveToTable(c).
Is clear(b) true now? y or n. n.
Performing moveToTable(d).
Final situation: [senseClear(a), senseOnTable(a), senseClear(b), senseClear(c),
senseOnTable(c), moveToTable(c), senseClear(b), moveToTable(d)]
Notice how smart the program is: after learning that a is clear and need not be
moved, and after moving c to the table and learning that b is still not clear, it
figures that d must therefore be on b and that b must be on the table, so it simply
moves d.
9. DISCUSSION
The concept of knowledge-based programming was first introduced in [Halpern and Fagin 1989]. Chapter 7 of [Fagin et al. 1995] contains an extensive discussion, with specific reference to the specification of communication protocols. Knowledge-based
programs play a prominent role in the literature on agent programming; see, for
example, [Shoham 1993].
The closed-world assumption on knowledge is also made in [de Giacomo et al.
1996], where they reduce the entailment problem for knowledge to entailment of
knowledge-free sentences. Their work differs from ours in two essential ways: theirs
is an epistemic description logic, and their epistemic modality is for the purposes
of planning, not knowledge-based programming.
This paper is a close relative of [Pirri and Finzi 1999]. There, Pirri and Finzi give a mechanism for on-line execution of action sequences, with sense actions. Their concept of sense actions and their outcomes is more sophisticated than ours, partly because they allow for perceptions that may conflict with an agent's theory of the world, but in one special case, the so-called safe action sequences, their treatment of a sense action's outcome is the same as ours: update the initial database with the regression of this outcome. Their concept of safety also corresponds closely to our notion of guarded sense actions. The basic difference between us is that our action theories are formulated with knowledge, while theirs are not, and we are interested in the on-line execution of programs, not just action sequences. Nevertheless, it seems that one way of viewing (some of) the results of this paper is as a specification of agent behaviours in terms of knowledge and the closed-world assumption, for which the Pirri-Finzi account is a provably correct implementation. But this possibility raises so many issues that they are best dealt with in future work.
10. POSTSCRIPT
It is particularly gratifying to be able to provide this paper in honour of Bob Kowalski's 60th birthday because he has long been an advocate of metalevel reasoning, especially for reasoning about modalities. So although Bob has not been a great fan of the situation calculus, 7 I expect he would approve of my reduction to provability of the knowledge modality. I do hope so, because this is my birthday gift to him, and there is no exchange policy where it came from.
7 But who knows, perhaps he has had a change of heart recently; turning 60 does have certain mellowing effects.
--R
Moving a robot: The KR&R approach at work.
Reasoning about Knowledge.
Modelling knowledge and action in distributed systems.
On sensing and
AOL: a logic of acting
All I know: A study in autoepistemic logic.
The Logic of Knowledge Bases.
Reasoning about knowledge and action.
A formal theory of knowledge and action.
ADL and the state-transition model of action
An approach to perception in theory of actions: Part 1.
http://www.
Some contributions to the metatheory of the situation calculus.
Journal of the ACM
Knowledge in Action: Logical Foundations for Specifying and Implementing Dynamical Systems.
The frame problem and knowledge producing actions.
Agent oriented programming.
Achieving several goals simultaneously.
--TR
All I know: a study in autoepistemic logic
Agent-oriented programming
Reasoning about knowledge
Some contributions to the metatheory of the situation calculus
The logic of knowledge bases
Knowledge in action
--CTR
Stefan Schiffer , Alexander Ferrein , Gerhard Lakemeyer, Football is coming home, Proceedings of the 2006 international symposium on Practical cognitive agents and robots, November 27-28, 2006, Perth, Australia
Michael Thielscher, FLUX: A logic programming method for reasoning agents, Theory and Practice of Logic Programming, v.5 n.4-5, p.533-565, July 2005
Sebastian Sardina , Giuseppe De Giacomo , Yves Lesprance , Hector J. Levesque, On the Semantics of Deliberation in Indigologfrom Theory to Implementation, Annals of Mathematics and Artificial Intelligence, v.41 n.2-4, p.259-299, August 2004
Richard B. Scherl , Hector J. Levesque, Knowledge, action, and the frame problem, Artificial Intelligence, v.144 n.1-2, p.1-39, March
Viviana Mascardi , Maurizio Martelli , Leon Sterling, Logic-based specification languages for intelligent software agents, Theory and Practice of Logic Programming, v.4 n.4, p.429-494, July 2004 | situation calculus;situation calculus programming languages;dynamic closed-world assumption;theorem-proving;sensing and knowledge |
383892 | Querying ATSQL databases with temporal logic. | We establish a correspondence between temporal logic and a subset of ATSQL, a temporal extension of SQL-92. In addition, we provide an effective translation from temporal logic to ATSQL that enables a user to write high-level queries which are then evaluated against a space-efficient representation of the database. A reverse translation, also provided in this paper, characterizes the expressive power of a syntactically defined subset of ATSQL queries. | Introduction
This paper brings together two research directions in temporal databases. The first direction is concerned
with temporal extensions to practical query languages such as SQL [Gad93, NA93, Sar93].
The issues addressed include space-efficient storage, effective implementation techniques and handling
of large amounts of data. This direction includes ATSQL [SJB95], the integration of ideas
from TSQL2 [Sno95] and ChronoLog [Böh94]. The second direction of research focuses on high-level
query languages for temporal databases based on temporal logic [TC90, GM91, CCT94]. The
advantages of using a logic-based query language come from their well-understood mathematical
properties [GHR94]. The declarative character of these languages also allows the use of advanced
optimization techniques. In addition, temporal logic has been proposed as the language of choice
for formulating temporal integrity constraints and triggers [Cho95, CT95, GL96, LS87, SW95], admitting
space-efficient methods for enforcing these constraints.
While temporal logic may seem to be a natural choice for a temporal query language, its semantics
is defined with respect to abstract temporal databases: time-instant-indexed sequences of database
states [GHR94]. This point of view has often disqualified temporal logic as a practical temporal query
language: For efficiency reasons we can not construct and store all the individual states explicitly
(this is indeed impossible if the sequence is not finite). Therefore most of the practical proposals
associate a concise encoding of a set of time instants at which a particular fact holds with the tuple
representing this fact. The encoding is commonly realized by a period 1 [NA93, Sar90, Sno87, Tan86]
or a temporal element-a finite union of periods [CC87, Gad88, Sno95].
The contributions of this paper are twofold. First, we develop a translation of temporal logic to
ATSQL. This translation allows the users to take advantage of a high-level declarative language while
queries are still efficiently evaluated over compactly represented ATSQL temporal databases that
use period encodings. The translation also dispels the myth of logic-based temporal query languages
being inherently inefficient: The approach presented in this paper shows that queries expressed in
temporal logic can be evaluated as efficiently as queries in any of the practical approaches mentioned
above. We also develop a syntactic criterion that guarantees safety of queries in temporal logic and
is broad enough to contain the equivalents of all domain-independent queries. Second, we present a
reverse translation of a syntactically defined subset of ATSQL, to clarify the expressiveness picture.
Although we use ATSQL as the domain of the reverse translation, our results apply, with minor
adjustments, to any other temporal query language that uses a distinguished period-valued attribute
to represent valid time for facts stored in the database and enforces coalescing on the encoding .
The paper is organized as follows: We start with a discussion of the basic framework in Section 2,
including the syntax and semantics of temporal logic and ATSQL (in the case of ATSQL we introduce
only constructs relevant to the development in this paper; for a full description see [SJB95]). In
Section 3 we give the mapping from temporal logic to ATSQL. We conclude the section with an
example and the discussion of some implementation issues. In Section 4 we discuss the reverse
mapping and relate the expressive power of (a subset of) ATSQL and temporal logic. Section 5
discusses the relations to other temporal query languages including the impact of the presented
results.
2 Basic framework
Before we start comparing temporal logic and ATSQL we need to set up a common formal framework
suitable to both languages. In this paper we fix the structure of time to be integer-like: linear (totally ordered), discrete, and unbounded in both the past and the future. However, our approach can be easily
adopted to other structures of time, e.g., bounded in the past (natural numbers like time), or dense
(rational-like time). The proposed mapping changes in only minor ways to accommodate such
extensions. We also assume a single, fixed time granularity (one year).
All the references to time in this paper represent the valid-time references capturing the relationship
between individual time points and validity of facts in reality [JCE + 94]. In particular, the
transaction time, which relates when facts are stored in the database, is not considered. (This is
because the standard temporal logic can only deal with a single temporal dimension.)
Finally, we restrict the discussion to the point-based view of a temporal database-the view
adopted by temporal logic. As ATSQL is period-based we use coalescing to enforce strictly point-based
semantics. Coalescing is a unary operation on ATSQL temporal relations that merges value-equivalent
tuples with adjacent or overlapping periods into a single tuple [BSS96]. Throughout, we
make sure that base relations as well as intermediate relations are coalesced.
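As a sketch of what coalescing does (the relation r and its contents here are illustrative only), a query can request a coalesced table reference with the (VALID) flag:
    SEQUENCED VALID
    SELECT r.X
    FROM r(VALID) AS r
If r contains the value-equivalent rows (a, [1995, 1997]) and (a, [1998, 2001]), the coalesced reference r(VALID) contains the single row (a, [1995, 2001]).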
2.1 Temporal logic
Temporal logic is an abstract query language, a language defined with respect to the class of abstract
temporal databases [Cho94, CT98]. An abstract temporal database, in turn, is a database which captures the formal semantics of a temporal database without considering any particular representation issues.
1 In this paper we use the term 'period' rather than the term 'interval' commonly used in temporal logic because the latter term conflicts with the SQL INTERVALs, which are unanchored durations, such as 3 months.
It is possible to view an abstract temporal database in several different but equivalent ways. We
choose here the snapshot view [Cho94] in which every time instant is associated with a (finite) set
of facts that hold at it. For integer-like time, abstract temporal databases can be viewed as infinite sequences of finite database states of the form ..., D_{-1}, D_0, D_1, D_2, ....
Example 2.1 Figure 1 presents an example of an abstract temporal database, viewed as a sequence
of states. The database represents information about Eastern European history, modeling the independence
of various countries [Cho94]. Each fact indicates an independent nation and its capital.
This relation is used as a running example throughout the paper.
Figure 1: Eastern European history: the abstract temporal database (a table listing, for each year, the timeslice of facts that hold in it; only the caption is reproduced here)
Syntax. First-order temporal logic (FOTL) extends first order logic with binary temporal connectives
since and until, and unary connectives 5 ("previous" or "yesterday") and 4 ("next" or
"tomorrow"). Informally, A since B is true in a state if A is true for states between when B was
true and now (this state). A until B is true in a state if A will be true into the future until B will
be true. The rest of the usual temporal connectives can be defined in terms of these, e.g.,
    3A ≡ true since A      (A was true sometime in the past)
    2A ≡ true until A      (A will be true sometime in the future)
    1A ≡ ¬3¬A              (A was true always in the past)
    0A ≡ ¬2¬A              (A will be true always in the future)
In the rest of this paper we also consider the universal quantifier (∀X)A to be a shorthand for ¬(∃X)¬A, the implication A → B a shorthand for ¬A ∨ B, etc.
Example 2.2 Our first example is a query which does not relate different database states. The
query
    (∃x) indep(Poland, x) ∧ ¬(∃x) indep(Slovakia, x)
determines all years when Poland but not Slovakia was an independent country, i.e., the times when
the query evaluates to true.
Example 2.3 The second example relates different database states. The query
returns the name of the city that superseded Cracow as Poland's capital and the years when this
city was the capital.
Example 2.4 Consider the query [Cho94, p.515] "list all countries that lost and regained indepen-
dence" over the abstract temporal database shown in Figure 1. This is formulated in temporal logic
as:
    (∃y) 3 indep(c, y) ∧ ¬(∃y) indep(c, y) ∧ (∃y) 2 indep(c, y).
For a country and a year to result, the country will have been independent in the past, will be
independent in the future, but is currently not independent.
Formally, the semantics of the temporal logic queries is defined as follows:
Definition 2.5 An abstract temporal database is an integer-indexed sequence of database states D = ..., D_{-1}, D_0, D_1, .... Every database state D_i contains a relation (relation instance) r for each relation schema R. We define the semantics of temporal logic formulas in terms of a satisfaction relation ⊨ and a valuation ν (a valuation is a mapping from variables to constants):
    D, ν, i ⊨ R(x₁, ..., xₖ)  iff  ν(R(x₁, ..., xₖ)) ∈ D_i, where ν(R(x₁, ..., xₖ)) is the result of applying ν to the variables of R,
    D, ν, i ⊨ ¬φ              iff  D, ν, i ⊭ φ,
    D, ν, i ⊨ φ ∧ ψ           iff  D, ν, i ⊨ φ and D, ν, i ⊨ ψ,
    D, ν, i ⊨ (∃X)φ           iff  D, ν[X ↦ c], i ⊨ φ for some constant c, where ν[X ↦ c] is a valuation identical to ν except that it maps X to c,
    D, ν, i ⊨ 5φ              iff  D, ν, i−1 ⊨ φ,
    D, ν, i ⊨ 4φ              iff  D, ν, i+1 ⊨ φ,
    D, ν, i ⊨ φ since ψ       iff  there is j < i such that D, ν, j ⊨ ψ and D, ν, k ⊨ φ for all k with j < k ≤ i,
    D, ν, i ⊨ φ until ψ       iff  there is j > i such that D, ν, j ⊨ ψ and D, ν, k ⊨ φ for all k with i ≤ k < j.
An answer to a temporal logic query φ in D is the set { (ν(x₁), ..., ν(xₖ), i) : D, ν, i ⊨ φ }, where x₁, ..., xₖ are the free variables of φ. Thus, temporal logic may be viewed as a natural extension of relational calculus.
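As a small worked check against the data of Figure 2 below, consider the query of Example 2.2:
    D, ν, 1941 ⊭ (∃x) indep(Poland, x), so the query is false at 1941 (Poland is not independent then);
    D, ν, 1950 ⊨ (∃x) indep(Poland, x) and D, ν, 1950 ⊭ (∃x) indep(Slovakia, x), so the query is true at 1950.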
As indicated by the example queries, temporal logic provides a convenient means for expressing
rather involved English queries in a natural way. However, the state-based semantics of temporal
logic does not suggest an efficient implementation of such queries. Any implementation taking
advantage of a compact period-based representation of temporal databases promises much better
performance.
2.2 ATSQL
ATSQL [SJB95] is a further development of TSQL2, a temporal extension of SQL-92. An early
version of ATSQL has been proposed to the ISO international committee for standardization for
incorporation into the SQL/Temporal standard, and implemented. Therefore, we use it as the
target query language of our translation.
ATSQL Databases. A period is a pair [a, b] where a is the left endpoint and b the right endpoint. The period [a, b] is used to encode the set of instants {t | a ≤ t ≤ b}. A valid-time relation is a finite relation where tuples are implicitly timestamped with periods. An ATSQL database is a finite
collection of valid-time relations. Figure 2 shows an ATSQL database that encodes the abstract
temporal database shown in Figure 1. Remember that throughout the paper we assume that all
indep
Country          Capital     VALID
Czech Kingdom    Prague      [1198, 1620]
Czechoslovakia   Prague      [1918, 1938]
Czechoslovakia   Prague      [1945, 1992]
Czech Republic   Prague      [1993, ∞]
Slovakia         Bratislava  [1940, 1944]
Slovakia         Bratislava  [1993, ∞]
Poland           Gniezno     [1025, 1039]
Poland           Cracow      [1040, 1595]
Poland           Warsaw      [1596, 1794]
Poland           Warsaw      [1918, 1938]
Poland           Warsaw      [1945, ∞]
Figure 2: Eastern European history: the concrete ATSQL relation
the ATSQL temporal relations are coalesced: The timestamps are represented by maximal non-overlapping
periods. This assumption is fundamental for our translation of temporal logic queries
to ATSQL to work correctly.
ATSQL Queries. ATSQL extends the query language of SQL-92 [MS93]. The crucial concepts in
ATSQL are statement modifiers (or flags) that can be prepended to queries to modify their temporal
behavior . As a consequence ATSQL queries come in three flavors:
1. SQL-92 queries (without any additional flags) are executed on the temporal database with
respect to the current time instant (now).
2. SQL-92 queries preceded by the SEQUENCED VALID flag are evaluated relative to every snapshot
of the temporal database; the results are then collected in a temporal relation with timestamps
corresponding to the evaluation point (cf. snapshot reducibility [SJB95]).
3. SQL-92 queries preceded by the NONSEQUENCED VALID flag: in this case the processing of the
timestamps is completely controlled by the query, rather than by some implicit mechanism
of the underlying DBMS. In other words, the enclosed statement is executed with standard
semantics. Timestamps are treated like all other attributes and no built-in temporal
processing is performed. The manipulation of timestamps is made explicit using the following
constructs:
• given a period p, BEGIN(p) denotes the start point of p and END(p) the endpoint of p. We often use the shorthands p⁻ and p⁺, respectively, to denote the endpoints.
• given two time points b and e, b ≤ e, PERIOD(b,e) denotes the period constructed out of the two time points (instants). Again, we often use the shorthand [b, e].
• we use integer constants to denote time instants (e.g., 1998 stands for the year 1998). We also include constants denoting the start of time, TIMESTAMP 'beginning' or −∞, and the end of time, TIMESTAMP 'forever' or +∞.
• to dislocate a given time point t by one year, 2 we write t+1 (or t-1). In a similar fashion we can dislocate periods: to dislocate a period [b, e] by one year we write [b, e]+1; the resulting period is [b+1, e+1].
• finally, we use FIRST(t,s) and LAST(t,s) to find the earlier and the later, respectively, of t and s.
The above syntax is used to manipulate timestamps in ATSQL select-blocks. First, we need
to gain access to the implicit valid-time attributes of ATSQL relations: VTIME(R) denotes
the timestamp associated with the range variable (tuple in the relation) R and substitutes for
the lack of explicit temporal attributes. The WHERE clause uses temporal built-in predicates
to specify temporal relationships between periods. While all the relationships between two
periods can be expressed using order relationships on their endpoints, ATSQL also supports
Allen algebra-like comparisons of pairs of periods (which we won't use further in this paper).
To be consistent with SQL2, these relationships have a somewhat different meaning than the
identically-named relationships in [All83]:
It is easy to see that the above relationships (and their Boolean combinations) can express
all period relationships of [All83] and thus all possible topological relationships between two
periods. Metric relationships can be captured using the above timestamp constructs. Finally,
the SET VALID p clause, which is part of the NONSEQUENCED VALID statement modifier and
precedes the actual query, defines p to be the resulting timestamp period for all tuples in the
answer to the query (p is usually a function of the VTIME(R) attributes).
In addition every query or table reference can be followed by a (VALID) flag to enforce coalescing
of the corresponding temporal table, that is, tuples with identical explicit attribute values whose
valid-times overlap or are adjacent are merged into a single tuple, with a period equal to the union
of the periods of the original tuples. As a side-effect, duplicates are eliminated.
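As an illustration of these constructs (a sketch only; the choice of the result period and the conditions are illustrative), the following nonsequenced query relates each independence period of a country to any strictly later one and timestamps the result with the period strictly between them:
    NONSEQUENCED VALID
    SET VALID PERIOD(END(VTIME(i1))+1, BEGIN(VTIME(i2))-1)
    SELECT i1.Country
    FROM indep(VALID) AS i1, indep(VALID) AS i2
    WHERE i1.Country = i2.Country
    AND END(VTIME(i1))+1 < BEGIN(VTIME(i2))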
Example 2.6 In order to determine the name of the city that superseded Cracow as Poland's capital (cf. Example 2.3), the query has to relate different database states. In ATSQL this means that we have to specify the VALID clause and the required temporal relationship. This results in the following ATSQL query:
NONSEQUENCED VALID
SET VALID VTIME(i1)
SELECT i1.Capital
FROM indep(VALID) AS i1, indep(VALID) AS i2
WHERE i1.Country = 'Poland' AND i2.Country = 'Poland'
AND i2.Capital = 'Cracow'
AND BEGIN(VTIME(i1)) = END(VTIME(i2)) + 1
2 In this paper we assume that valid time uses the granularity of a year. Thus +1 is shorthand for +INTERVAL '1' YEAR and -1 for -INTERVAL '1' YEAR.
Example 2.7 The formulation of a query becomes even simpler if it can be answered by looking at single snapshots. In this case the user simply specifies sequenced semantics when formulating a query, as illustrated in the following query, which determines all periods when Poland was independent but not Slovakia (cf. Example 2.2):
SEQUENCED VALID
SELECT i1.Country
FROM indep(VALID) AS i1
WHERE i1.Country = 'Poland'
AND NOT EXISTS ( SELECT *
                 FROM indep(VALID) AS i2
                 WHERE i2.Country = 'Slovakia' )
The proposed translation uses both the SEQUENCED VALID variant of the ATSQL queries (to translate
the first-order fragments of the temporal logic queries) and the NONSEQUENCED VALID queries (to
translate the temporal connectives).
3 Mapping Temporal Logic to ATSQL
In this section we introduce the main result of this paper: the translation of queries formulated
in temporal logic to ATSQL. Similarly to mapping relational calculus queries to SQL, our translation
has to identify a syntactic subset of domain-independent temporal queries that can be safely
translated to ATSQL. The syntactic criterion is based on an extension of the criterion presented in
[AHV95]. However, our approach can be analogously used in more complicated translations, e.g.,
[VGT91]. We discuss several possible refinements in Section 3.5.
3.1 Correspondence of Temporal Databases
Before we can describe the actual mapping of temporal formulas to ATSQL we need to establish
a relationship between temporal databases (over which the semantics of temporal logic queries is
defined) and ATSQL databases, the target of our translation.
Definition 3.1 Let D be an abstract temporal database. The support of a temporal logic formula φ under a valuation ν is the set of time instants { i ∈ Z : D, ν, i ⊨ φ }.
The support for ground formulas (facts in particular) does not depend on the valuation. In this way
the definition of the support yields the definition of the class of abstract temporal databases we are
interested in:
Definition 3.2 An abstract temporal database is finitary if it contains a finite number of facts and
the support of every fact can be represented as a finite union of periods.
Not every abstract temporal database is finitary. For example, the database that contains a single
fact p(a) in every even-numbered state and whose every odd-numbered state is empty cannot be
finitely represented by a union of periods. On the other hand, the class of finitary temporal databases
captures exactly all ATSQL databases.
Proposition 3.3 Every ATSQL database represents a unique finitary abstract temporal database
and every finitary abstract temporal database can be represented as an ATSQL database.
In the rest of the paper we use k:k to denote the mapping of ATSQL databases to the corresponding
finitary abstract temporal databases.
3.2 Domain Independence and Range Restriction
The actual translation of temporal logic queries to ATSQL is based on the semantic rules in Definition
2.5. However, there is a problem with a direct use of these rules: the interpretation of
variables is relative to a potentially infinite universe of data values . Thus it is easy to formulate
"unsafe" queries in temporal logic that produce non-finitary answers or use quantification over the
infinite universe of data values (similarly to relational calculus [AHV95]). To avoid these problems
we introduce the notion of domain-independent temporal logic queries:
Definition 3.4 Let φ be a FOTL query and D an abstract temporal database. We define the active domain, adom(D, φ), to be the set of all data constants that appear in D and φ.
The interpretation ⊨_U is the satisfaction relation ⊨ from Definition 2.5 relativized to the universe of data values U. We assume that U always contains all the data constants appearing in the query and the temporal database.
A temporal logic query φ is domain-independent if for all sets U₁, U₂ such that adom(D, φ) ⊆ U₁ ∩ U₂ we have D, ν, i ⊨_{U₁} φ iff D, ν, i ⊨_{U₂} φ, for every i ∈ Z and every valuation ν of the free variables of φ over U₁ ∩ U₂.
Note that the above definition relativizes the interpretation of the queries only with respect to the
data domain. The universe of time instants is fixed to an integer-like linear order (Z, <). It is easy to see that to obtain an answer to a domain-independent query it is sufficient to evaluate the query using the active domain interpretation, i.e., for U = adom(D, φ). Moreover, the formula
characterizing the active domain for a fixed query can be expressed uniformly for all D as a temporal
logic query.
Lemma 3.5 Let D be an abstract temporal database and φ a temporal query. Then there is a formula adom_{D,φ}(x) such that, for all i ∈ Z and all valuations ν, D, ν, i ⊨ adom_{D,φ}(x) iff ν(x) ∈ adom(D, φ).
Proof. Let C be the set of all constants in φ and R the set of all formulas of the form (∃x₁)···(∃xₖ) r(x₁, ..., x, ..., xₖ), where r is a predicate symbol corresponding to a relation in the database D and (∃x₁)···(∃xₖ) is the string of existential quantifiers for all free variables of r but x. We define
    adom_{D,φ}(x) := ⋁_{c ∈ C} x = c ∨ ⋁_{ψ ∈ R} (3ψ ∨ ψ ∨ 2ψ). □
The formula adom_{D,φ} can be used to restrict variables in domain-independent temporal logic queries without changing their meaning.
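For the running example, assuming the construction just given, with the single relation indep(Country, Capital) and a query mentioning only the constant Poland, this yields
    adom_{D,φ}(x) = (x = Poland) ∨ 3ψ(x) ∨ ψ(x) ∨ 2ψ(x),   where ψ(x) = (∃y) indep(x, y) ∨ (∃y) indep(y, x).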
We present a syntactic criterion that guarantees domain-independence of temporal logic que-
ries. While domain independence itself is not decidable (the class of temporal logic queries contains
all relational calculus queries), we show that the safe range queries-those queries that pass our
syntactic criterion-can express all the domain-independent queries in temporal logic.
The criterion is based on a modification of the criterion for the relational calculus queries
[AHV95]: we treat the binary temporal connectives since and until as ∧, and ignore the unary ones, 3, 1, 2, 0, 5, and 4:
Definition 3.6 [Range restriction (rr).] Let φ be an arbitrary temporal query and FV(φ) the set of free variables in φ. We define rr(φ) by induction on the structure of φ:
    rr(R(t₁, ..., tₙ)) = the set of variables among t₁, ..., tₙ,
    rr(x = c) = {x},
    rr(¬ψ) = ∅,
    rr(ψ₁ ∧ ψ₂) = rr(ψ₁ since ψ₂) = rr(ψ₁ until ψ₂) = rr(ψ₁) ∪ rr(ψ₂),
    rr(ψ₁ ∨ ψ₂) = rr(ψ₁) ∩ rr(ψ₂),
    rr((∃x)ψ) = rr(ψ) − {x},
    rr(5ψ) = rr(4ψ) = rr(3ψ) = rr(2ψ) = rr(1ψ) = rr(0ψ) = rr(ψ).
We say that a formula φ is safe range if rr(φ) = FV(φ) and for all subformulas of φ of the form (∃x)ψ we have x ∈ FV(ψ) ⇒ x ∈ rr(ψ).
Note that this extension of the original criterion for relational calculus queries is the strongest
possible: we map since and until to ∧ and ignore the unary temporal connectives. To achieve
better results we would have to start with a stronger criterion for the first order case, e.g., [VGT91].
Theorem 3.7 Let φ be a domain-independent query. Then there is an equivalent safe range query.
Proof. A domain-independent query φ can be correctly evaluated using the active-domain semantics. The active domain adom(D, φ) can be defined uniformly for all D by a temporal logic query adom_{D,φ}(x) (cf. Lemma 3.5). We add conjuncts that restrict the domain of every variable in all subformulas of φ. The resulting formula is equivalent to φ and safe range (follows from an easy induction on the structure of the formula). □
Therefore every domain-independent query can be equivalently asked using a safe-range query. Moreover
Lemma 3.8 Let D be a finitary abstract temporal database and φ a safe-range query. Then φ(D) is also finitary.
Proof. By induction on the structure of φ: it is sufficient to observe that (i) temporal connectives preserve the finitary properties and (ii) all variables in φ are range-restricted. □
However, while domain independence is preserved under equivalence of queries, the rr criterion
is not. We define a normal form of temporal logic queries to improve our chances of discovering
equivalent safe-range reformulation of a given query:
Definition 3.9 [SRNF_TL] Let φ be an arbitrary temporal logic query. We define:
Variable substitution: We rename all quantified variables using unique names to avoid variable name clashes in the subsequent transformations.
Removal of ∀: We replace subformulas of the form ∀x.A by ¬∃x.¬A.
Removal of → and ↔: We replace implications A → B by ¬A ∨ B, and similarly for the equivalences.
Pushing of negations: We use the following rules to push negations towards the leaves of the formula and to remove double negations:
1. ¬¬A ↦ A.
2. ∃x.A ↦ A if x ∉ FV(A).
3. ¬(A ∧ B) ↦ ¬A ∨ ¬B, and ¬(A ∨ B) ↦ ¬A ∧ ¬B.
4. ¬3A ↦ 1¬A, ¬1A ↦ 3¬A, ¬2A ↦ 0¬A, and ¬0A ↦ 2¬A.
5. ¬4A ↦ 4¬A, and ¬5A ↦ 5¬A.
6. the analogous rules that push ¬ through A since B and A until B.
A formula resulting from applying these rules to a temporal formula φ as long as possible is denoted SRNF_TL(φ).
Note that the last rule for since and until is valid only for discrete time; for dense time we have to
omit this rule 3 . For time bounded in the past we would also have to remove the part handling 5
from rule (5), as the equivalence does not hold for time bounded in the past (natural numbers-like).
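For instance (with hypothetical relations p and q), applying SRNF_TL to the query (∀x)(p(x) → 3 q(x)) proceeds as
    (∀x)(p(x) → 3 q(x))  ↦  ¬(∃x)¬(¬p(x) ∨ 3 q(x))  ↦  ¬(∃x)(p(x) ∧ ¬3 q(x))  ↦  ¬(∃x)(p(x) ∧ 1 ¬q(x)),
using the removal of ∀ and →, the De Morgan rule, and rule 4.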
Clearly, all the above transformations are equivalence-preserving. Thus
Lemma 3.10 DB, ν, i ⊨ φ if and only if DB, ν, i ⊨ SRNF_TL(φ).
Thus at the end of this step we are left with an equivalent and cleaned-up temporal formula. In addition it is easy to see that:
Lemma 3.11 Let φ be safe-range. Then SRNF_TL(φ) is safe-range.
This lemma guarantees that applying the SRNF_TL transformation to the given query can only improve the chances that the query passes the rr criterion. Thus the rr criterion is always applied to the result of the SRNF_TL transformation.
The safe-range criterion rr assumes that the since and until connectives behave like ∧. Unfortunately, the temporal connectives do not have the same commutative and distributive properties ∧ has, e.g., (φ ∧ ψ) until χ ≢ (φ until χ) ∧ (ψ until χ). However, it is easy to see that the variable x in the formula ¬p(x) until q(x) is made safe-range by the atomic formula q(x): clearly, if there is a valuation ν such that DB, ν, i ⊨ ¬p(x) until q(x), there must be another time instant i′ such that DB, ν, i′ ⊨ q(x). Thus the formula q(x) gives us a range restriction for x. We exploit this fact
to propagate the range restricting subformulas towards the leaves of the original formula using the
following equivalences:
Lemma 3.12 φ until ψ ≡ φ until (3φ ∧ ψ), and φ until ψ ≡ (φ ∧ 2ψ) until ψ.
Proof. We show only the first equivalence; the proof of the second one is analogous.
(⇒) Let DB, ν, i ⊨ φ until ψ. By the definition of until there is a j > i such that DB, ν, j ⊨ ψ and DB, ν, k ⊨ φ for all k between i and j; hence DB, ν, j ⊨ 3φ, and using the definition of until we get DB, ν, i ⊨ φ until (3φ ∧ ψ).
(⇐) Let DB, ν, i ⊨ φ until (3φ ∧ ψ). Similarly to the previous case there is a j > i such that DB, ν, j ⊨ 3φ ∧ ψ and DB, ν, k ⊨ φ for all k between i and j; thus DB, ν, i ⊨ φ until ψ. □
A similar lemma holds for the since connective. Using these equivalences we can move the range
restricting subformulas between the left- and right-hand sides of the since and until connectives. In
addition, we may need to move a range restricting formula into the scope of a temporal connective:
Lemma 3.13 φ ∧ (ψ until χ) ≡ φ ∧ (ψ until (3φ ∧ χ)), and φ ∧ (ψ until χ) ≡ φ ∧ ((ψ ∧ 43φ) until χ).
Proof. Again, we prove only the first statement.
(⇒) Let DB, ν, i ⊨ φ ∧ (ψ until χ). Then DB, ν, i ⊨ φ, and by the definition of until there is a j > i such that DB, ν, j ⊨ χ and DB, ν, k ⊨ ψ for all k between i and j; since i < j we also have DB, ν, j ⊨ 3φ, and using the definition of until we get DB, ν, i ⊨ φ ∧ (ψ until (3φ ∧ χ)).
3 The rule may also significantly increase the size of the resulting formula and we may not want to use it even in
the case of discrete time.
A since B ↦ A since (2A ∧ B)                 (distribute in since, left-to-right)
A until B ↦ A until (3A ∧ B)                 (distribute in until, left-to-right)
B since A ↦ (B ∧ 3A) since A                 (distribute in since, right-to-left)
B until A ↦ (B ∧ 2A) until A                 (distribute in until, right-to-left)
A ∧ (B since C) ↦ A ∧ ((52A ∧ B) since C)    (push into since, left side)
A ∧ (B until C) ↦ A ∧ ((43A ∧ B) until C)    (push into until, left side)
A ∧ (C since B) ↦ A ∧ (C since (2A ∧ B))     (push into since, right side)
A ∧ (C until B) ↦ A ∧ (C until (3A ∧ B))     (push into until, right side)
The rules are used when x is a variable range restricted in the subformula denoted by A, x ∈ rr(A), and free but not range restricted in the subformula denoted by B, x ∈ FV(B) \ rr(B).
Figure 3: RANF_TL rules.
(⇐) Let DB, ν, i ⊨ φ ∧ (ψ until (3φ ∧ χ)). Similarly to the previous case DB, ν, i ⊨ φ, and there is a j > i such that DB, ν, j ⊨ 3φ ∧ χ and DB, ν, k ⊨ ψ for all k between i and j; thus DB, ν, i ⊨ φ ∧ (ψ until χ), as implied by DB, ν, i ⊨ φ and the definition of until. □
Similar laws hold for the remaining connectives, including the unary ones (cf. the rewriting rules
in Definition 3.14). We use these laws in the final step of the conversion to propagate the range
restricting subformulas towards the leaves of the query. This way the final ATSQL query can always
be evaluated bottom-up. This goal is achieved by a modified RANF transformation [AHV95].
Definition 3.14 [RANF_TL] Let φ be a safe range temporal formula. RANF_TL(φ) is the result of applying the rules in Figure 3, together with commutativity and associativity of conjunction, to φ, starting from the top-level connective.
Clearly, all the rewriting rules preserve the meaning of the formula:
Lemma 3.15 DB, ν, i ⊨ φ if and only if DB, ν, i ⊨ RANF_TL(φ).
Proof. Follows from Lemmas 3.12 and 3.13, and standard equivalences for first order logic. □
It is also easy to see that every rule in Figure 3 propagates x's restriction in A towards B.
Lemma 3.16 Let φ be a safe range temporal formula. Then every subformula of RANF_TL(φ) that is not rooted by ∧ or ¬ is safe range.
Proof. Assume that RANF_TL(φ) contains a subformula not rooted by ∧ or ¬ that is not safe range. By case analysis we can show that φ is not safe range (as none of the rules in Definition 3.14 is applicable by the assumption); a contradiction. □
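As an illustration of the push rules (with hypothetical relations p, q, r), consider the safe range formula p(x) ∧ (q(y) until ¬r(x, y)): x is range restricted by p(x) but free and unrestricted in the right argument of until, so the rule "push into until, right side" yields
    p(x) ∧ (q(y) until (3 p(x) ∧ ¬r(x, y))),
in which the until subformula is itself safe range and can be evaluated bottom-up.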
Similarly to [AHV95] the RANF rewriting terminates, as there are only finitely many subformulas
in the original query. Moreover, every safe range temporal query is domain independent:
Lemma 3.17 Let φ be a safe-range temporal query. Then φ is domain independent.
Proof. By Lemma 3.15, φ is equivalent to RANF_TL(φ). Because query equivalence preserves domain independence, it is sufficient to show that RANF_TL(φ) is domain-independent. This follows from an easy induction on the structure of RANF_TL(φ), with Lemma 3.16 applied to every subformula of RANF_TL(φ). □
This result, together with Theorem 3.7, shows that the classes of domain-independent queries and
safe-range queries coincide.
The translation itself is defined in three steps:
1. The first step corresponds to transforming the formula to SRNF; essentially we clean up the
formula and remove superfluous connectives, especially double negations.
2. In the second step we test all the variables in the cleaned up formula for the safe-range property.
3. For the formulas that pass the check (the safe range formulas) we propagate the range restrictions so that all (significant) subformulas also become safe range.
3.3 Translation to ATSQL
The next step in traditional translations is the translation to relational algebra. However, we have
chosen ATSQL as our target language. The translation of temporal logic formulas in RANF TL to
ATSQL is defined by induction on the structure of the formula.
The input to this transformation is a safe-range temporal logic formula in RANF TL . It is
translated to ATSQL by repeating the following two steps:
1. First the maximal non-temporal subformulas are translated to sequenced SQL queries; this
can be done using a simple RANF TL to SQL translation (patterned, e.g., after the RANF to
Relational Algebra translation in [AHV95]).
2. The translations of the subformulas are combined using the translations of the temporal connectives
defined in the next section.
This process is repeated until the whole formula is translated.
3.3.1 Temporal Logic Connectives
We define translations of the individual temporal connectives to ATSQL as ATSQL query templates.
The subformulas rooted by the temporal connectives are then translated to subqueries embedded
into these templates.
The connectives since and until. Figure 4 graphically illustrates the semantics of since and
until over periods. We have listed all possible temporal relationships [All83] between the truth
periods of two formulas A and B. For each relationship we have determined the truth period of
A since B and A until B respectively. More formally the truth periods of A since B and A until B
are defined as follows.
Assume that the formula A holds throughout the (coalesced) period [a⁻, a⁺] and B holds throughout [b⁻, b⁺]. Then
    A since B ↦ [ LAST(a⁻, b⁻ + 1), a⁺ ]    provided a⁻ ≤ b⁺ + 1 and LAST(a⁻, b⁻ + 1) ≤ a⁺,
    A until B ↦ [ a⁻, FIRST(a⁺, b⁺ − 1) ]   provided b⁻ ≤ a⁺ + 1 and a⁻ ≤ FIRST(a⁺, b⁺ − 1),
and the resulting truth period is empty otherwise.
Figure 4: Period semantics of since and until (the figure enumerates the possible temporal relationships between the truth periods of A and B and, for each of them, the resulting truth period of A since B and A until B)
The reader may verify that these general expressions, evaluated on any particular relationship given in Figure 4, result in the correct truth period.
These expressions are translated to ATSQL straightforwardly using the NONSEQUENCED VALID
modifier and specifying the final timestamp using the SET VALID clause. The additional conditions
are translated into appropriate WHERE clause conditions. More precisely, A since B is translated to
NONSEQUENCED VALID
SET VALID PERIOD(LAST(BEGIN(VTIME(a0)), BEGIN(VTIME(b0))+1), END(VTIME(a0)))
SELECT ...
FROM (A')(VALID) AS a0, (B')(VALID) AS b0
WHERE BEGIN(VTIME(a0)) <= END(VTIME(b0))+1
AND LAST(BEGIN(VTIME(a0)), BEGIN(VTIME(b0))+1) <= END(VTIME(a0))
AND ...
and A until B is translated to
NONSEQUENCED VALID
SET VALID PERIOD(BEGIN(VTIME(a0)), FIRST(END(VTIME(a0)), END(VTIME(b0))-1))
SELECT ...
FROM (A')(VALID) AS a0, (B')(VALID) AS b0
WHERE BEGIN(VTIME(b0)) <= END(VTIME(a0))+1
AND BEGIN(VTIME(a0)) <= FIRST(END(VTIME(a0)), END(VTIME(b0))-1)
AND ...
where A' and B' are the results of applying the translation recursively to A and B, respectively. The SELECT lists of the ATSQL statements are derived from the sets of free variables occurring in A and B. Variables used in both A and B give rise to additional WHERE clause conditions that equate the corresponding attributes in A' and B'.
It is important to remember that the translations for non-atomic formulas A and B are required
to produce coalesced temporal relations:
Example 3.18 Consider a temporal database D containing two temporal relations A(x) and B(x), where A holds the tuple (1) with timestamp [0, 5] and the tuple (2) with timestamp [4, 9], and B holds the tuple (3) with timestamp [9, 12]. It is easy to see
that if coalescing has not been enforced at every step of the translation, e.g., if the relation A was
not re-coalesced after projecting out the x attribute, the translation would not be correct any more.
Indeed, applying the translation for until on the non-coalesced results of ∃x.A and ∃x.B would give us the result t ∈ [4, 9] instead of the correct result t ∈ [0, 9].
3.3.2 Specialized Mappings
Based on the translation of since and until, the mapping of other temporal connectives can be
defined. While theoretically feasible, such an approach may be cumbersome in practice as it leads
to unnecessarily complicated ATSQL statements. Moreover, introducing these specialized mappings
allows us to translate a wider class of temporal formulas to ATSQL (cf. Section 3.2).
The connectives 3 and 2. We illustrate how the definition of since can be used to derive an
efficient special purpose mapping for 3.
The formula 3B is equivalent to true since B. Therefore we take the definition of A since B
(Section 3.3.1) and substitute A by true. We notice that the truth period of true is the whole time
line which means that BEGIN(VTIME(a0)) evaluates to −∞ (beginning of time) and END(VTIME(a0)) evaluates to +∞ (end of time). After the obvious simplifications we obtain:
NONSEQUENCED VALID
SET VALID PERIOD(BEGIN(VTIME(b0))+1, TIMESTAMP 'forever')
SELECT ...
FROM (B')(VALID) AS b0
which is considerably less complex than the original statement. Similarly, we can use the definition
of until to derive a mapping for 2B, namely
NONSEQUENCED VALID
SET VALID PERIOD(TIMESTAMP 'beginning', END(VTIME(b0))-1)
SELECT ...
FROM (B')(VALID) AS b0
The connectives 1 and 0. For 1A, one can rewrite it as ¬3¬A and use the approach presented above. Unfortunately, this approach is not very practical as it may lead to formulas that cannot be translated (e.g., ¬3¬p(X) versus 1p(X)). Therefore we derive an ATSQL translation for 1A from the definition: 1A holds at a time instant t if and only if A holds at every time instant j < t. This can be easily expressed in ATSQL, where the 'beginning' keyword stands for −∞:
NONSEQUENCED VALID
SET VALID PERIOD(TIMESTAMP 'beginning', END(VTIME(a0))+1)
SELECT ...
FROM (A')(VALID) AS a0
WHERE BEGIN(VTIME(a0)) = TIMESTAMP 'beginning'
Again, coalescing of A' is crucial for the translation to work correctly. By analogy, a special purpose mapping for 0A can be derived:
NONSEQUENCED VALID
SET VALID PERIOD(BEGIN(VTIME(a0))-1, TIMESTAMP 'forever')
SELECT ...
FROM (A')(VALID) AS a0
WHERE END(VTIME(a0)) = TIMESTAMP 'forever'
The connectives 5 and 4. We use discrete time to model the temporal domain in the ATSQL
databases. Thus, in addition to the since and until connectives, we add temporal connectives that
allow us to refer to the immediately previous (5) and the immediately following (4) time instant. The mapping of these connectives is defined as follows: first we define the truth periods for 5A and 4A with respect to the truth period [a⁻, a⁺] of A:
    5A ↦ [a⁻ + 1, a⁺ + 1]   and   4A ↦ [a⁻ − 1, a⁺ − 1].
The result is translated to ATSQL using a definition of the corresponding valid-time clause that
shifts the valid-time period by one in the appropriate direction. The translation for 5A is
NONSEQUENCED VALID
SET VALID PERIOD(BEGIN(VTIME(a0))+1, END(VTIME(a0))+1)
SELECT ...
FROM (A')(VALID) AS a0
and the translation for 4A is
NONSEQUENCED VALID
SET VALID PERIOD(BEGIN(VTIME(a0))-1, END(VTIME(a0))-1)
SELECT ...
FROM (A')(VALID) AS a0
Similarly to the previous cases, the SELECT list is obtained from the set of free variables of A, and A' is the ATSQL translation of A.
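For example, under the template sketched above, the subformula 5 indep(c, y) would be translated to (a sketch only)
    NONSEQUENCED VALID
    SET VALID PERIOD(BEGIN(VTIME(a0))+1, END(VTIME(a0))+1)
    SELECT a0.Country, a0.Capital
    FROM indep(VALID) AS a0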
3.3.3 Putting it Together
Using the transformation defined in Section 3.2 we can convert every safe range temporal query to an equivalent query in RANF_TL. We have already shown that the RANF_TL transformation preserves
equivalence. We have also shown that the translations of the individual temporal connectives are
correct. By composing these two steps we have:
Theorem 3.19 Let φ be a safe range temporal logic formula and D an ATSQL database. Then ‖φ_ATSQL(D)‖ = φ(‖D‖), where φ_ATSQL is the ATSQL translation of the temporal logic query φ.
4 With respect to the chosen granularity of time which in this paper is a year.
3.4 Example
Consider the query "list all countries that lost and regained independence" (Example 2.4) formulated
in temporal logic as:
    (∃y) 3 indep(c, y) ∧ ¬(∃y) indep(c, y) ∧ (∃y) 2 indep(c, y).
To simplify the illustration of the translation we break up the formula into a set of auxiliary rules:
    aux_view1(c, y) ≡ 3 indep(c, y)   and   aux_view2(c, y) ≡ 2 indep(c, y).
We translate the first rule to
NONSEQUENCED VALID
SET VALID PERIOD(BEGIN(VTIME(a0))+1, TIMESTAMP 'forever')
SELECT a0.Country, a0.Capital
FROM indep(VALID) AS a0
and the second rule to
NONSEQUENCED VALID
SET VALID PERIOD(TIMESTAMP 'beginning', END(VTIME(a1))-1)
SELECT a1.Country, a1.Capital
FROM indep(VALID) AS a1
The main query is then translated to
SEQUENCED VALID
SELECT a2.Country AS Country
FROM aux_view1(VALID) AS a2, aux_view2(VALID) AS a3
WHERE a2.Country = a3.Country
AND NOT EXISTS ( SELECT *
                 FROM indep(VALID) AS a4
                 WHERE a4.Country = a2.Country )
Note that, apart from the sequenced flag, this last step is identical to the translation from first
order logic to SQL. Because ATSQL handles the temporal dimension of snapshot-reducible queries
automatically, the translation of formulas that do not contain temporal connectives reduces to the
translation of first order logic to SQL.
3.5 Refinement and Optimization
In Section 3.2 we described only the simplest version of the translation; we used a direct temporal
extension of the translation presented in [AHV95]. However, such a direct extension has several
drawbacks. We address some of them in this section:
Negation is pushed too deep during the SRNF_TL phase. This is necessary to find all double negations in the original formula and to eliminate them. However, if there are no hidden double negations (and this happens in many common cases) then the resulting formula in SRNF_TL does not improve range restrictedness of the variables and may be unnecessarily complicated. In such cases we might be better off using the original query. We can therefore use a weaker SRNF_TL transformation: we push the negations only if there is a chance they may cancel out in the subformulas. However, there is no unique SRNF_TL any more: we need to decide how deep we want to push the negations (this decision can be based on heuristics, or we can pick the query with the cheapest execution plan).
The restricting formulas are unnecessarily duplicated. The second problem is intimately connected with the first one: by transforming the original query we may end up with a formula where we need to propagate the bindings for variables across numerous connectives in order to obtain a formula in RANF_TL. However, this propagation often unnecessarily duplicates subformulas in the resulting query; for example, converting p(x) ∧ ¬(∃y)(q(y) ∧ ¬r(x, y)) yields p(x) ∧ ¬(∃y)(p(x) ∧ q(y) ∧ ¬r(x, y)), in which the copy of p(x) propagated into the negated subformula is semantically redundant. Note that this is a general problem with the conversion proposed in [AHV95] (and most of the other proposals) rather than with its temporal extension; the example indeed uses only pure first order logic. This problem can be addressed in two ways:
1. by restricting the depth to which ¬ gets pushed during the SRNF_TL translation (this is often the main source of this problem).
2. by eliminating the superfluous restricting formulas; this can be achieved by an additional bottom-up pass through the generated query after the RANF_TL transformation.
Note that in the additional pass we are not trying to eliminate redundant parts of the original query; this is too difficult. We only eliminate unnecessary subformulas introduced during the RANF_TL transformation.
Nested temporal connectives (and conjunctions) create unnecessary ATSQL query
blocks. The translation generates a separate ATSQL query block for every temporal connective
(and conjunction). However, this approach may produce unnecessarily nested query blocks, that
may be merged into a single block. Consider the query 31R(x). The translation produces the following code:
NONSEQUENCED VALID
SET VALID PERIOD(BEGIN(VTIME(R))+1, TIMESTAMP 'forever')
SELECT R.x
FROM ( NONSEQUENCED VALID
       SET VALID PERIOD(TIMESTAMP 'beginning', END(VTIME(a0))+1)
       SELECT a0.x
       FROM R(VALID) AS a0
       WHERE BEGIN(VTIME(a0)) = TIMESTAMP 'beginning'
     )(VALID) AS R
However, it is obvious that we could merge the nested select blocks into a single equivalent block, as the inner temporal operation (1) preserves coalescing and thus the re-coalescing is not necessary:
NONSEQUENCED VALID
SET VALID PERIOD(TIMESTAMP 'beginning', TIMESTAMP 'forever')
SELECT a0.x
FROM R(VALID) AS a0
WHERE BEGIN(VTIME(a0)) = TIMESTAMP 'beginning'
However, at this point we need to emphasize that the translations of the individual temporal connectives
do require the input relations to be coalesced (cf. Example 3.18). Therefore we can only
merge those select blocks that preserve coalescing (as in the above example). This optimization
corresponds to flattening conjunctions in the relational calculus queries. While in theory this step
could be performed by a smart query optimizer, we are not aware of any implementation that would
be able to perform such an optimization: most query optimizers are not able to perform arithmetic
simplifications needed during the process (e.g., evaluating used in our example). This
observation can be summarized by the following lemma:
Lemma 3.20 The translations of A since B, A until B, 1A, 0A, 3B, 2B, 5B, and 4B remain
correct even if the ATSQL translation of B is not coalesced (however, A has to be coalesced in all
cases). Moreover, if B is coalesced, the result of applying a temporal connective is coalesced too.
This lemma, together with the observation that coalescing is preserved under temporal joins and
differences (and not preserved by unions and projections) allows us to safely remove redundant
coalescing operators in the translated formula.
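For instance, since projection does not preserve coalescing, a translation that first projects out the Capital attribute and then applies a temporal connective must re-coalesce the intermediate result, e.g. (a sketch, reusing the 3 template above):
    NONSEQUENCED VALID
    SET VALID PERIOD(BEGIN(VTIME(a1))+1, TIMESTAMP 'forever')
    SELECT a1.Country
    FROM ( SEQUENCED VALID
           SELECT a0.Country
           FROM indep(VALID) AS a0 )(VALID) AS a1
Here the inner (VALID) merges the value-equivalent periods produced by dropping Capital before the temporal mapping is applied.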
4 Mapping ATSQL to Temporal Logic
Establishing a mapping between a subset of ATSQL and temporal logic is less important from a
practical point of view than establishing the mapping in the other direction, as described in the
previous section. A possible application is the decompilation of ATSQL queries. However, the main
purpose of establishing a mapping from ATSQL to temporal logic is to identify a subset of the former
that has the same expressive power as the latter. This clarifies the issue of expressive power of some
proposed restrictions of ATSQL, e.g., [WY98].
Indeed, only a subset of ATSQL can be mapped back to temporal logic. There are several reasons
for that. First, ATSQL inherits SQL-2's duplicate (bag-theoretic) semantics, while the semantics of
temporal logic is set-theoretic. Second, ATSQL (like SQL-2) has aggregate operations that are not
first-order expressible. Finally, ATSQL (like two-sorted first-order logic with a time sort) can express
queries referring to multiple temporal contexts. Recently, it has been shown [AHVdB96, TN96] that
such queries are not expressible in temporal logic.
Therefore, to define the subset of ATSQL corresponding to temporal logic we introduce a few
syntactic restrictions. The first limits ATSQL to the SQL-2 constructs that can be mapped to
relational algebra or calculus:
Definition 4.1 A ATSQL query is pure if:
1. It does not use aggregate functions.
2. Coalescing of periods is forced using (VALID). As a side-effect, this ensures that no duplicates
are generated.
The second restriction prohibits referring to multiple temporal contexts:
Definition 4.2 A ATSQL query Q is local if in every subclause of Q, all the references of the form
VTIME(v) refer to a range variable v of the FROM clause of this particular SELECT. (There is no similar
requirement for non-temporal attributes.) This implies that nested SELECT clauses cannot refer to
the valid-times of range variables specified in the FROM clause of an enclosing SELECT.
Example 4.3 The following ATSQL query is not local.
SELECT * FROM a AS a
WHERE EXISTS ( SELECT * FROM b AS b
               WHERE BEGIN(VTIME(a)) <= END(VTIME(b)) )
Our mapping translates a pure local ATSQL query Q to a temporal logic formula φ_Q. We define it step by step.
Temporal built-in predicates. We start by considering a simplified form of nonsequenced ATSQL queries:
NONSEQUENCED VALID
SET VALID PERIOD(t1, t2)
SELECT A1, ..., Ak
FROM R1 AS v1, ..., Rn AS vn
WHERE α
Additionally, we assume at first that:
1. the only references to valid time in the query are of the form VTIME(v) where v is one of the range variables v1, ..., vn;
2. α does not contain subqueries.
We will subsequently relax these assumptions. For ATSQL queries obeying the above restrictions
the mapping is defined in following steps:
1. We define a set of special points to contain all the time instants explicitly referenced by the
query (essentially the endpoints of valid periods for all relations in the FROM clause). These
points are ordered linearly along the time line consistently with the WHERE clause α; we try
all such linear orders one by one.
2. Each of the above linear orderings divides the time line into isolated points and (open) periods,
each of which corresponds to a set of time instants that make a local characteristic formula
written in FOTL true over a given database.
3. Similarly, the timestamp for the result of the ATSQL query is represented by a disjunction
of global characteristic formulas (again one for each of the partitions of the time line defined
above).
In the rest of this section we develop this idea formally and show how it can be extended to a more
general class of queries. For a query Q define the set S_Q of special points of Q to be
    S_Q = {−∞, +∞} ∪ { v⁻, v⁺ : v is a range variable of Q },
where v⁻ and v⁺ abbreviate BEGIN(VTIME(v)) and END(VTIME(v)), respectively. Also, for every range variable v of Q, define l(v) as the literal r(X̄) where r is the relation symbol of v and X̄ is a vector of unique (logical) variables of length equal to the arity of r. In this way, a unique logical variable is also assigned to every attribute.
Now every temporal predicate in α (or its negation) can also be written as a disjunction of atomic order predicates relating some of the special points in S_Q. Therefore, there is one or more (but finitely many) strict linear orderings of S_Q that are consistent with α. (In such orderings some of the special points may coincide.) For every such ordering W we will construct a formula γ_W that encodes it.
For every special point p ∉ {−∞, +∞} in W we determine the set of atomic formulas T(p) that are true in W:
    T(p) = { l(v) : v is a range variable of Q and W implies v⁻ ≤ p and p ≤ v⁺ }.
For p ∈ {−∞, +∞} this has to be modified in an obvious way. For every open period i = (p₁, p₂), where p₁ and p₂ are consecutive special points, we also determine the set of atomic formulas T(i) that are true in W:
    T(i) = { l(v) : v is a range variable of Q and W implies v⁻ ≤ p₁ and p₂ ≤ v⁺ }.
We define now the local characteristic formula Φ_p of a special point p in W as
    Φ_p = ⋀_{A ∈ T(p)} A,
and the local characteristic formula Φ_i of an open period i in W as
    Φ_i = ⋀_{A ∈ T(i)} A.
Consider a pair of consecutive special points in W. They may correspond to consecutive time instants (e.g., w⁺ and v⁻ with v⁻ = w⁺ + 1), or there may be other time instants strictly between them. We will call the first the point-point case. In the second case we also have to consider the open period between those two points and have two cases: point-period and period-point.
The SET VALID clause of the query Q specifies a closed period i₀ = [p₁, p₂] whose endpoints p₁ and p₂ are special points in W. This period consists of a number of special points and open periods between those points. We construct global characteristic formulas: Ψ_p for each special point in i₀ and Ψ_i for each (open) period in i₀. Such formulas encode the ordering W and the position of the point (resp. period) in this ordering. A formula Ψ^L_p encodes the past of p in W and Ψ^R_p encodes the future of p in W (for periods this is defined similarly). Now, assuming p′ is the predecessor of p in W, Ψ^L_p and Ψ^L_i are defined as follows:
    Ψ^L_p = 5(Φ_{p′} ∧ Ψ^L_{p′}) if p′ and p correspond to consecutive time instants (the point-point case), or
    Ψ^L_p = 5(Φ_i ∧ Ψ^L_i) if the predecessor of p is an open period i (the period-point case), and
    Ψ^L_i = Φ_i since (Φ_{p₁} ∧ Ψ^L_{p₁}) for an open period i = (p₁, p₂).
The formulas Ψ^R are symmetric to Ψ^L, with 4 used instead of 5 and until instead of since; the global characteristic formula of a point p (resp. period i) is Ψ_p = Φ_p ∧ Ψ^L_p ∧ Ψ^R_p (resp. Ψ_i = Φ_i ∧ Ψ^L_i ∧ Ψ^R_i). To get the formula γ_W we take the disjunction of all the global characteristic formulas corresponding to the points and periods in i₀.
Finally, the query corresponding to the ATSQL query Q is obtained as the disjunction of all the formulas γ_W over all linear orders W of S_Q that are consistent with α.
4.1 Lifting the restrictions.
The reverse translation can be extended from the restricted ATSQL queries to all pure local ones as
follows:
Nontemporal conditions. Every condition A₁ θ A₂ is translated as x_{A₁} θ x_{A₂}, where x_{A₁} (resp. x_{A₂}) is the variable corresponding to the attribute A₁ (resp. A₂). A condition A₁ θ c is translated similarly.
Nested subqueries. The WHERE clause of a query Q can contain a nested subquery Q 1 . The
subquery Q1 is recursively translated using the same mechanism, yielding a formula φ_{Q1}. The
only difference is that the subquery can refer to, in addition to its own range variables, the range
variables of Q. However, Q 1 cannot force any relationship between the special points of Q because
the temporal predicates in Q 1 cannot refer to the range variables of Q (Q is local).
Now there are several ways in which Q 1 may be embedded in the WHERE clause of Q. If the
condition is EXISTS Q1, then the translation is (∃X̄) φ_{Q1}, where X̄ is the vector of the free variables of φ_{Q1}. Similarly for NOT EXISTS. If the condition is IN (or NOT IN), the query is rewritten to a form that uses only EXISTS and NOT EXISTS.
Temporal constants and expressions. To handle a temporal constant c we need to introduce
a constant 0-ary relation r_c and treat this relation as if it were part of every FROM clause. Clearly, r_c holds exactly at the time instant c. (It is enough to have just one constant relation, e.g., ZERO, and define the remaining ones using 5 or 4.) For every occurrence of a temporal expression VALID(v) + k we need to add the points v⁻ + k and v⁺ + k to the set of special points of the query. Similarly for VALID(v) − k. It is easy to see that every pure local ATSQL query can be rewritten to a query in which all the temporal expressions are either constants or of the form VALID(v) ± k. In particular, the occurrences of FIRST/LAST can be
eliminated by splitting the query.
Assume now that the WHERE clause of a pure local ATSQL query Q is a conjunction of temporal
predicates, nontemporal conditions and conditions with subqueries. The formula φ_Q is a conjunction
of the formulas obtained by translating each of those separately and conjoining the result with l(v)
for every range variable v in the query (using a consistent naming of variables that correspond
to relation attributes). A SELECT list with attributes A₁, ..., Aₙ is translated into an existential
quantifier prefix consisting of all the variables that are not in this list. Finally, UNION is translated
as disjunction and EXCEPT as "and not".
Sequenced queries. Sequenced queries do not contain temporal predicates, except in subqueries.
Therefore the main query is translated as in the standard translation of SQL into relational calculus.
Temporal subqueries are translated as nested subqueries of nonsequenced queries (see above).
Theorem 4.4 For every pure local ATSQL query Q, there is a temporal logic formula β_Q such that for every ATSQL database D, a tuple ā timestamped by a period i belongs to the answer of Q over D iff ‖D‖, ν, t ⊨ β_Q for every time instant t ∈ i (where ‖D‖ is the abstract temporal database corresponding to D and ν is the valuation that maps the free variables of β_Q to ā).
Example 4.5 Assume that a relation a has two attributes: X and Y, and a relation b one attribute
Z.
Consider the following (pure local) ATSQL query.
NONSEQUENCED VALID
SET VALID VTIME(b)
SELECT a.X, a.Y, b.Z
FROM a AS a, b AS b
WHERE VTIME(a) CONTAINS VTIME(b)
AND a.X=b.Z
We extend the previous notation to apply to tuple variables as follows: x⁻ abbreviates BEGIN(VTIME(x)) and x⁺ abbreviates END(VTIME(x)). The WHERE clause of the query generates the following partial order of endpoints:
    a⁻ ≤ b⁻,   b⁻ ≤ b⁺,   b⁺ ≤ a⁺.
The following points are special:
    −∞, a⁻, b⁻, b⁺, a⁺, +∞.
Consider now all linear orders of special points that are consistent with the above partial order, for example, the linear order O₁:
    −∞ < a⁻ < b⁻ < b⁺ < a⁺ < +∞.
The local characteristic formulas corresponding to this order are as follows: the formulas for the open periods (−∞, a⁻) and (a⁺, +∞) are true (no literal is forced to hold there); the formulas for the points b⁻ and b⁺ and the open period (b⁻, b⁺) are a(x, y) ∧ b(z); and the formulas for the remaining special points and open periods between a⁻ and a⁺ are a(x, y).
The global characteristic formulas corresponding to this order are then built from these local formulas using 5 and since (going from left to right in the order), as described above; for example, Ψ^L_{(b⁻,b⁺)} = (a(x, y) ∧ b(z)) since (a(x, y) ∧ b(z) ∧ Ψ^L_{b⁻}).
In a similar way, the formulas Ψ^R are obtained. The disjunction of the global characteristic formulas over all linear orders of special points that are consistent with the partial order of endpoints generated by the query is then formed, and subsequently conjoined with the nontemporal condition x = z corresponding to a.X = b.Z.
The translation from temporal logic to ATSQL presented in the previous section produces pure
local ATSQL queries. Thus:
Corollary 4.6 Temporal logic and pure local ATSQL have the same expressive power as query
languages.
The following is a natural next question to ask: Is there a logical query language equivalent to full
ATSQL? The lack of aggregate functions in temporal logic can be remedied by a syntactic extension
of the language, along the lines of one proposed for relational calculus [Klu82]. The requirement of
maximal periods is more fundamental. In fact, allowing non-coalesced periods calls for a temporal
logic that is not point- but period-based. In that case, there can be no translation from full ATSQL
to the temporal logic discussed in this paper, even for local queries.
The restriction to local queries is also critical. Pure ATSQL has the same expressive power as
two-sorted first-order logic in which there is a separate sort for time. It has been recently shown
[AHVdB96, TN96] that temporal logic is strictly less expressive than the above two-sorted logic.
Thus, there can be no translation from ATSQL to temporal logic that works for all pure queries.
5 Related Work
Despite extensive studies of theoretical properties of temporal logic and other logic-based temporal
query languages [GM91, CCT94, AHVdB96, TN96], there has been surprisingly little work on implementations
of these languages. The main reason for disregarding logic-based approaches to practical
query languages for temporal databases is their perceived inefficiency, mainly due to the point-based
semantics commonly accompanying these languages. Indeed, some of the early approaches have
utilized explicit construction of all temporal snapshots of the database [TC90]. However,
- for infinite temporal databases this approach fails (even when the databases are finitary);
- for finite databases the space requirements are exponentially worse (in the number of bits).
Our approach avoids the above problems while preserving the declarative nature of temporal logic
(similar statement can also be made about most other translation-based approaches, e.g., [Tom97,
Tom98]).
Our translation converts safe-range temporal logic formulas to ATSQL. However, the translation
could have used any other temporal query language as the target, provided it operated over temporal
databases based on a single distinguished temporal attribute over periods and the corresponding
query language enforced coalescing. Fortunately, the majority of the proposed temporal data models
and languages satisfy these conditions. Therefore temporal logic can serve as a convenient tool for
interoperability of temporal databases represented using one of these models (until a single standard
emerges).
A more general translation from two-sorted first-order logic to ATSQL, which is of clear practical
interest, is considerably more complicated than the translation from temporal logic to ATSQL given
in the present paper. In [Tom96] a translation from a point-based two-sorted first-order logic to a
period-based temporal query language was proposed. This approach was subsequently extended to
a SQL-based temporal query language SQL/TP [Tom97, Tom98]. This translation could serve as
a translation from two-sorted first-order logic to ATSQL. There are two subtle points about this
approach:
- It generates non-local ATSQL queries. Indeed, the results [AHVdB96, TN96] show that there
can not be a translation of the two-sorted first-order logic to local ATSQL queries (and views).
- In general the generated query may be exponential in the size of the input query 5 . In [Tom97]
we defined a syntactic criterion, that guarantees only polynomial (linear) increase in size for
a subclass of the two-sorted first-order logic queries. Moreover, this subclass contains the
first-order temporal logic.
The reverse translation, while of little practical use, provides the desperately needed insight into how
the various practical query languages for temporal databases compare in expressive power: it allows
us to classify temporal extensions of SQL (and related languages) that are essentially equivalent to
temporal logic (with since and until connectives) 6 . The notion of locality plays a major role in this
classification:
Languages equivalent to temporal logic: TQuel [Sno87], HSQL [Sar93], temporal algebra [WY98]
used for temporal data warehousing.
Languages strictly stronger than temporal logic: ATSQL [SJB95] and SQL/Temporal [SBJS96]
(using explicit coercion of temporal attributes to data attributes and vice versa), IXRM [Lor93]
(description incomplete), SQL/TP [Tom97, Tom98].
An important consequence of this classification is that only the first group of languages can be
implemented by a temporal relational algebra over the universe of temporal relations (with a single
distinguished valid-time attribute) [TN96]. All the languages in the second group require relations
with multiple temporal attributes to store intermediate results during bottom-up query evaluation.
Moreover, there is no upper bound on the number of temporal attributes needed even if the top-level
query is boolean. For a more comprehensive discussion of the above issues see [CT98].
6 Summary
We have established an exact correspondence between temporal logic and a syntactically defined
subset of ATSQL. The translation from temporal logic to ATSQL allows the efficient implementation
of temporal logic queries within a temporal database management system supporting ATSQL.
Future work includes extending temporal logic and the translation to support aggregate functions.
Also interesting would be an adaptation of our approach to a dense domain. This would require first
extending ATSQL to such a domain, including support for half-open and open periods, and then
extending the mapping introduced here.
5 However, this may happen even in the translation from relational calculus to algebra [AHV95].
6 More precisely, their first-order fragments are equivalent to FOTL.
Acknowledgment
We are grateful to Rick Snodgrass for participating in the preparation of the first version of this paper
and continued guidance. Jan Chomicki's work was partially supported by NSF grant IRI-9632870.
--R
Foundations of Databases.
Maintaining Knowledge about Temporal Intervals.
Managing Temporal Knowledge in Deductive Databases.
Coalescing in Temporal Databases.
The Historical Relational Data Model (HRDM) and Algebra Based on Lifespans.
On Completeness of Historical Relational Query Languages.
Temporal Query Languages: A Survey.
Efficient Checking of Temporal Integrity Constraints Using Bounded History Encoding.
Implementing Temporal Integrity Constraints Using an Active DBMS.
Temporal Logic in Information Systems.
A Homogenous Relational Model and Query Languages for Temporal Databases.
Temporal Databases: A Prelude to Parametric Data.
Temporal Logic: Mathematical Foundations and Computational Aspects.
Deriving Optimized Integrity Monitoring Triggers from Dynamic Integrity Constraints.
Temporal Logic and Historical Databases.
A Glossary of Temporal Database Concepts.
Equivalence of Relational Algebra and Relational Calculus Query Languages Having Aggregate Functions.
The Interval-Extended Relational Model and Its Application to Valid-time Databases
Monitoring Dynamic Integrity Constraints Based on Temporal Logic.
Understanding the new SQL: A Complete Guide.
Temporal Extensions to the Relational Model and SQL.
Extensions to SQL for Historical Databases.
HSQL: A Historical Query Language.
Adding Valid Time to SQL/Temporal.
Evaluating and Enhancing the Completeness of TSQL2.
The Temporal Query Language TQuel.
The TSQL2 Temporal Query Language.
Temporal Triggers in Active Databases.
Adding Time Dimension to Relational Model and Extending Relational Algebra.
A Temporal Relational Algebra as a Basis for Temporal Relational Completeness.
Temporal Databases: Theory
Point vs. Interval-based Query Languages for Temporal Databases
Safety and Translation of Relational Calculus Queries.
Maintaining temporal views over non-temporal information sources for data warehousing
--TR
The temporal query language TQuel
Adding time dimension to relational model and extending relational algebra
Monitoring dynamic integrity constraints based on temporal logic
A homogeneous relational model and query languages for temporal databases
A temporal relational algebra as a basis for temporal relational completeness
Safety and translation of relational calculus
Evaluation of relational algebras incorporating the time dimension in databases
Understanding the new SQL
Temporal databases
On completeness of historical relational query languages
A consensus glossary of temporal database concepts
Temporal logic (vol. 1)
Efficient checking of temporal integrity constraints using bounded history encoding
Point vs. interval-based query languages for temporal databases (extended abstract)
Deriving optimized integrity monitoring triggers from dynamic integrity constraints
Temporal logic in information systems
Temporal connectives versus explicit timestamps to query temporal databases
Equivalence of Relational Algebra and Relational Calculus Query Languages Having Aggregate Functions
Maintaining knowledge about temporal intervals
Temporal statement modifiers
The TSQL2 Temporal Query Language
Foundations of Databases
Extensions to SQL for Historical Databases
Temporal Triggers in Active Databases
Implementing Temporal Integrity Constraints Using an Active DBMS
First-Order Queries over Temporal Databases Inexpressible in Temporal Logic
Maintaining Temporal Views over Non-Temporal Information Sources for Data Warehousing
Point-Based Temporal Extension of Temporal SQL
The Historical Relational Data Model (HRDM) and Algebra Based on Lifespans
Temporal Query Languages
Temporal Logic & Historical Databases
Coalescing in Temporal Databases
--CTR
Fusheng Wang , Carlo Zaniolo, An XML-Based Approach to Publishing and Querying the History of Databases, World Wide Web, v.8 n.3, p.233-259, September 2005
Paolo Reconciling Point-Based and Interval-Based Semantics in Temporal Relational Databases: A Treatment of the Telic/Atelic Distinction, IEEE Transactions on Knowledge and Data Engineering, v.16 n.5, p.540-551, May 2004
Fusheng Wang , Xin Zhou , Carlo Zaniolo, Bridging relational database history and the web: the XML approach, Proceedings of the eighth ACM international workshop on Web information and data management, November 10-10, 2006, Arlington, Virginia, USA
Michael Bhlen , Johann Gamper , Christian S. Jensen, An algebraic framework for temporal attribute characteristics, Annals of Mathematics and Artificial Intelligence, v.46 n.3, p.349-374, March 2006 | first-order temporal logic;ATSQL;query translation;temporal databases |
384000 | The do-all problem in broadcast networks. | The problem of performing t tasks in a distributed system on p failure-prone processors is one of the fundamental problems in distributed computing. If the tasks are similar and independent and the processors communicate by sending messages then the problem is called Do-All. In our work the communication is over a multiple-access channel, and the attached stations may fail by crashing. The measure of performance is work, defined as the number of the available processor steps. Algorithms are required to be reliable in that they perform all the tasks as long as at least one station remains operational. We show that each reliable algorithm always needs to perform at least the minimum amount Ω(t + p√t) of work. We develop an optimal deterministic algorithm for the channel with collision detection performing only the minimum work Θ(t + p√t). Another algorithm is given for the channel without collision detection; it performs work O(t + p√t + p·min{f, t}), where f is the number of failures. It is proved to be optimal if the number of faults is the only restriction on the adversary. Finally we consider the question if randomization helps for the channel without collision detection against weaker adversaries. We develop a randomized algorithm which needs to perform only the expected minimum work if the adversary may fail a constant fraction of stations, but it has to select the failure-prone stations prior to the start of an algorithm. | INTRODUCTION
We consider a distributed system in which p processors need
to perform t tasks. If the processors communicate by exchanging
messages, are prone to failures, and the tasks are
similar and independent then this problem is called Do-All .
In this paper we consider a setting in which the processors
are stations communicating over a multiple access channel.
The system is synchronized by a global clock. The channel
operates according to the following rules: if exactly one station
performs a broadcast in a step then the message reaches
all the stations, and if more of them broadcast simultaneously
in a step then mutual collisions happen and no station
successfully receives any of these messages. If the stations
attached to the channel do not receive a meaningful message
at a step then there are two possible reasons: either none or
more than one messages were sent. The ability to distinguish
between these two cases is called collision detection, when
it is available then the channel is with a ternary feedback,
because of the three possible events on the channel recorded
by the attached stations: (1) a meaningful message received,
(2) no messages sent, and (3) a collision signal received.
The stations are prone to fail-stop failures. Allowable patterns
of failures are determined by adversarial models. An
adversary is size-bounded if it may fail at most f stations,
for a parameter 0 f < p. We may refer to a size-bounded
adversary as f-bounded to make the value of parameter f explicit.
If f is a constant fraction of p then the adversary is
linearly bounded. A size-bounded adversary is weakly adaptive
if it needs to select a subset of stations which might
be failed prior to the start of an algorithm, otherwise it is
strongly adaptive.
Our results. We consider Do-All in the context of a broadcast
network. More precisely, the communication is over
a multiple-access channel, either with or without collision
detection. We consider algorithms that are reliable in the
sense of being correct in the worst-case scenario when only
one station remains available. We show that the minimum
amount
Ω(t + p√t) of work has always to be performed by a
reliable algorithm. This is an absolute lower bound on work
performance of any algorithm, which does not depend on the
collision detection, randomization or the power of an adver-
sary. We show that in a channel with collision detection this
bound can be attained by a deterministic algorithm against
any size-bounded adversary. The situation is more complex
in a weaker channel without collision detection. We develop
a deterministic algorithm for this channel which performs
work O(t + p√t + p · min{f, t}) against the f-bounded adver-
sary, even if the number of failures is the only restriction on
the power of the adversary. This is shown to be optimal by
a matching lower bound. Now it could happen that if we
wanted to optimize our solutions against weaker adversaries
then the part O(p · min{f, t}) in the performance bound could
be decreased. This indeed is the case: we show that a randomized
algorithm can have the expected minimum work
O(t + p√t) against certain weakly-adaptive size-bounded
adversaries. The conclusion is that randomization helps if
collision detection is not available and the adversary is sufficiently
restricted. The maximum number of faults when this
phenomenon happens is a constant fraction of the number of
all the stations. Next we show a lower bound which implies
that already a weakly-adaptive f-bounded
adversary, for a suitable range of the parameter f,
can force any algorithm
for the channel without collision detection to perform
asymptotically more than the minimum work
Θ(t + p√t).
Previous work. The problem Do-All was introduced by
Dwork, Halpern, and Waarts [12], and investigated in a
number of papers [7, 9, 10, 11, 14]. All the previous papers
considered networks in which each node can send a message
to any subset of nodes in one step. The algorithmic
paradigms used included balancing work and checkpointing
the progress made. This includes using coordinators, which
are designated nodes to collect and disseminate information.
Dominant models of failures considered have been those of
fail-stop failures. The primary measures of e-ciency of algorithms
used in [12] were the task-oriented work, in which
each performance of a task contributes a unit, and communication
measured as the number of point-to-point messages.
This paper also proposed the eort as a measure of per-
formance, which is work and communication combined; one
algorithm presented in [12] has eort O(t
p).
The early work assuming fail-stop failures model has concentrated
on the adversary who could fail all the stations
but one. More recent work concerned optimizing solutions
against weaker adversaries, while preserving correctness in
the worst-case scenario of arbitrary failure patterns, which
guarantee only at least one available processing unit.
De Prisco, Mayer, and Yung [11] were the rst to use the
available processor steps as the measure of work. They
present an algorithm which has work O(t+(f+1)p) and message
complexity O((f +1)p). Galil, Mayer and Yung [14] improved
the message complexity to O(fp " +minff+1; log pgp),
for any positive ", while maintaining the same work. This
was achieved as a by-product of their work on Byzantine
agreement with stop-failures, for which they found a message-
optimal solution. Chlebus, De Prisco, and Shvartsman [7]
studied failure models allowing restarts. Restarted processors
could contribute to the task-oriented work, but the cost
of integrating them into the system, in terms of the available
processor-steps and communication, might well surpass
the benets. The solution presented in [7] achieves
the work performance O((t
and its message complexity is O(t against
suitably dened adversaries who may introduce f failures
and restarts. This algorithm is an extension of one that
is tolerant of stop-failures and which has work complexity
log p= log log p) log f) and communication complexity
[10] studied the Do-All problem when failure patterns are
controlled by weakly-adaptive linearly-bounded adversaries.
They developed a randomized algorithm with the expected
eort O(n log n), in the case t, which is asymptotically
smaller than a known lower
bound
log n= log log n)
on work of any deterministic algorithm. Recently, strongly-
adaptive linearly-bounded adversaries have been studied by
Chlebus, Gasieniec, Kowalski and Shvartsman [9] who developed
a deterministic algorithm with the eort O(n log 2 n).
This is the rst algorithm known with a performance bound
of the form O(n polylog n) when as many as a linear fraction
of processing units can be failed by an adversary, all the
previously known algorithms had the
performance
such a situation. Note however, that neither work nor communication
performances should be used as the only criteria
to compare algorithmic solutions of Do-All ; such algorithms
are usually designed for concrete environments and
optimized for specic adversaries.
Related work. The multiple-access channel, as considered
in this paper, is a special broadcast network ([4, 33]).
It may be also interpreted as a single-hop radio network,
especially in the context of the relevance of collision de-
tection, see e.g. [6]. Most of the previous research on the
multiple-access channel has concerned methods of handling
packets which the stations keep receiving and which need
to be broadcast on the channel as soon as possible. The
packets may be generated dynamically in a possibly irregular
way which results in a bursty tra-c. Techniques like
time-division multiplexing are not e-cient then, and a better
throughput can be achieved if the control is distributed
among the stations. This is done by con
ict-resolution pro-
tocols, which arbitrate among the stations competing for
access to the channel; among the most popular protocols is
Aloha and the exponential backo. If packets are generated
dynamically then the basic problem is to have stable proto-
cols, which do not make the channel clogged eventually. Recent
work in that direction includes the papers of Goldberg,
MacKenzie, Paterson and Srinivasan [16], Hastad, Leighton
and Rogo [20], and Raghavan and Upfal [31]; see also the
survey of Gallager [15] for an account of the early research
and that of Chlebus [6] for recent developments.
Static problems concern a scenario when input data are allocated
at the stations prior to the start of an algorithm. The
problem of selection concerns the situation when some of
the stations hold messages, the goal is to broadcast just any
single message successfully. Willard [34] developed protocols
solving the problem in the expected time O(log log n) in
the channel with collision detection. Kushilevitz and Mansour
[27] proved a lower bound
nd n) for the selection
problem if collision detection is not available, which yields
an exponential gap between two models for this problem. A
related problem of nding maximum among the keys stored
in a subset of stations was considered by Martel [28].
There is a related all-broadcast problem, in which a subset
of k among n stations have messages, all of them need to be
sent to the channel successively as soon as possible. Komlos
and Greenberg [26] showed how to solve it deterministically
in time O(k log(n=k)), where both numbers n and k are
known. A lower
bound
k(log n)=(log k)) was proved by
Greenberg and Winograd [18].
Gasieniec, Pelc and Peleg [17] compared various modes of
synchrony in multiple-access channel in the context of the
wakeup problem, in which the system is started and the time
when each station joins is controlled by an adversary, while
the goal is to perform a successful broadcast as soon as pos-
sible. If the stations have access to a global clock then a
wakeup can be realized in the expected time O(log n) by
a randomized algorithm. If the local clocks are not syn-
chronized, there is a randomized solution working in the
expected time O(n). It was also shown in [17] that deterministic
algorithms require
time
n), and that there are
deterministic schedules working in time O(n log 2 n).
Problem Do-All specialized to shared-memory models is called
the operation of reading/writing from a memory
cell/register is considered to be an individual task. More
precisely, in this problem p failure prone processors need to
update t shared memory locations. A solution to this problem
can be applied to simulate a step of computation of a
shared-memory computer, and thus make its computations
resilient to processor faults. The problem Write-All was introduced
by Kanellakis and Shvartsman [21]. Algorithms
for the Write-All problem have been developed in a series
of papers, including those by Anderson and Woll [3], Buss,
Kanellakis, Ragde and Shvartsman [5], Chlebus, Dobrev,
Kowalski, Malewicz, Shvartsman, and Vrto [8], Groote, Hes-
selink, Mauw, and Vermeulen [19], Kedem, Palem, Rabin
and Raghunathan [23], Kedem, Palem, Raghunathan and
Spirakis [24], Kedem, Palem and Spirakis [25], Martel, Park
and Subramonian [29], and Martel and Subramonian [30].
A comprehensive account of algorithms for the Write-All
problem can be found in a book by Kanellakis and Shvartsman
[22].
There is another related problem called Collect, it was introduced
by Saks, Shavit and Woll [32]. It is about a number of
processes, each containing a value in a shared memory regis-
ter: the goal the processes need to achieve is to learn all the
values. A process increases its knowledge in a step by reading
a register: then it can add the read value to the contents
of its own register. The number of read/write operations is
a measure of performance. There is an adversary who controls
timing of the processes in an asynchronous computa-
tion. Ajtai, Aspnes, Dwork and Waarts [1] showed that the
problem for n processes can be solved deterministically with
work O(n 3=2 log n), by an adaptation of the algorithm of Anderson
and Woll [3]. Aspnes and Hurwood [2] developed a
randomized algorithm achieving work O(n log 3 n) with high
probability. A lower
bound
log n) for this problem was
given in [32].
2. MODEL
Processing units. There are p stations, each with a unique
identifier ID in [1..p]. The station P with identifier
i is denoted as P i . The system is synchronized by a global
clock.
Communication. Stations communicate by broadcasting
on a multiple access channel. This model is also called a
single-hop radio network [6], and we often say that a station
can hear the information it receives from the channel.
We assume that all the messages sent on the channel are
delivered to all the stations, but if many are sent simultaneously
then they interfere with each other and are received
as garbled. The size of a packet to carry a single message is
assumed to be O(log p) bits, but all our deterministic algorithms
broadcast messages of merely O(1) bits.
If a message sent on the channel is heard by a station then
the message is said to be successfully delivered to it. This
happens if exactly one station broadcasts a message: then it
is heard by all the stations by the end of the next step. If no
messages are sent then the stations can hear only the background
noise, which is distinct from any meaningful mes-
sage. If more than one messages are broadcast simultaneously
in a step then a collision happens, and no station can
hear any of these messages. We consider two models depending
on what feedback the stations receive if a collision
happens.
Channel without collision detection: each station can hear
the background noise.
Channel with collision detection: each station can hear the
interference noise, which is distinct from the background
noise.
Failures. Stations fail by crashing. A station which has
not failed yet at a step is said to be operational in this step.
Failure patterns are generated by adversaries. Adversarial
models allowing restarts are not considered [7]. An adversary
knows the algorithm against which it competes. An
adversary is said to be adaptive with condition C if it may
make its decisions on-line, the only constraint being that
the condition C has to be satised in each execution. We
consider the following specic adversaries:
Strongly-Adaptive f-Bounded: is adaptive with the condition
that at most f stations are failed, where 0 ≤ f < p.
Weakly-Adaptive f-Bounded: it is adaptive with the
following condition, where 0 ≤ f < p:
(1) It needs to select f failure-prone stations prior to the
start of an algorithm;
(2) It may fail only the selected failure-prone stations in the
course of an algorithm.
We write simply f-Bounded for Strongly-Adaptive f-
Bounded. The adversary Unbounded is the same as
(Strongly-Adaptive) (p - 1)-Bounded. There is no difference
in power between strongly and weakly-adaptive size-
bounded adversaries when they compete against deterministic
algorithms. The adversary Linearly-Bounded denotes
f-Bounded where f is a fixed constant fraction of p. None
of the considered adversaries may fail all the stations in an
execution of an algorithm.
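A minimal sketch (ours; the function names are illustrative, not from the paper) of how the two size-bounded adversaries differ: the weakly-adaptive one commits to its set of failure-prone stations before the execution starts, while the strongly-adaptive one only respects the bound f on the number of crashes.

import random

def weakly_adaptive_choice(p, f):
    # before the algorithm starts: fix the f stations that may ever be failed
    return set(random.sample(range(1, p + 1), f))

def may_fail(station, failed_so_far, f, failure_prone=None):
    # may this station be crashed in the current step without violating the constraint?
    if failure_prone is not None:          # weakly-adaptive f-Bounded
        return station in failure_prone and len(failed_so_far) < f
    return len(failed_so_far) < f          # strongly-adaptive f-Bounded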
Complexity measures. The principal measure is work,
which is dened to be the number of available-processor
steps: each station contributes a unit for each step when it is
operational, even when idling. To have the measure dened
precisely, we need to know when an algorithm starts and
when it terminates. The stations are provided with input
and the time-step to begin, we start counting the available-
processor steps from this moment. Each station may come
to a halt state at any time, only when it does so then it
stops contributing to work. A station which halted in this
way is considered non-faulty, hence halts do not restrict the
power of adversaries.
Tasks. There are t tasks given as input. They are known
to all the stations. The following three properties of tasks
are essential:
similar: each takes one step to perform;
independent: can be performed in any order;
idempotent: can be performed many times and concurrently.
The problem Do-All is to perform a given set of tasks with
these properties, in a message passing network with processing
units prone to failures.
Correctness. We require algorithms to be correct against
any strategy of the adversary Unbounded. We call such
algorithms reliable. Formally, an algorithm solving the DoAll
problem is reliable if the following two conditions are satisfied:
(1) All the tasks are eventually performed, if at least one
station remains non-faulty;
(2) All the stations eventually halt, unless failed.
In proofs of lower bounds, we mean an execution of an algorithm
to be a sequence of congurations of the system
in consecutive steps, including the failure pattern, the messages
broadcast on the channel, and the sequence of random
bits used by each station [13]. If an execution E 0 is obtained
by modifying the actions of an adversary on some other execution
E, then it is assumed that each station in E 0 receives
exactly the same sequence of random bits as in E.
If a reliable algorithm is run then no station halts when
there are still tasks that have not been completed yet: the
remaining stations may be killed and the halted station will
not perform any more tasks. This property can be strengthened
quantitatively as follows:
Lemma 1. A reliable algorithm, possibly randomized, has
to perform work Ω(t + p√t) in each execution, even if no
failures happen.
Proof. It is su-cient to consider the channel with collision
detection, in which algorithms have more information.
Part
Ω(t) of the bound follows from the fact that each task
has to be performed at least once.
A task is confirmed at step i, in an execution of the algo-
rithm, if either a station broadcasts successfully and it has
performed the task by step i, or more than one station broadcast
simultaneously and all of them, with a possible exception of
one station, have performed the task by step i. At least half of
the stations broadcasting in step i and confirming a task have
performed it by then, so at most 2i tasks can be confirmed
in step i. Let E1 be an execution of the algorithm when no
failures happen. Let station P come to a halt at some step
j in E1 .
Claim: the tasks not confirmed by step j were performed
by P itself in E1 .
Suppose, on the contrary, that this is not the case, and let τ be
such a task. Consider an execution E2 obtained by running
the algorithm and killing any station that performed τ in
E1 just before it was to perform τ, and all the remaining
stations, except for P, killed at step j. The broadcasts on
the channel are the same during the first j steps in E1 and
E2 . Hence all the stations perform the same tasks in E1 and
E2 till step j. The definition of E2 is consistent with the
power of the adversary Unbounded. The algorithm is not
reliable because task τ is not performed in E2 and station
P is operational. This justifies the claim.
We estimate the contribution of station P to work: the total
number of tasks confirmed in E1 by step j is at most 2(1 + 2 + · · · + j) ≤ 2j²,
so t0 ≤ 2j² tasks have been confirmed by step
j. The remaining t - t0 tasks have been performed by P.
The work of P is at
least max{j, t - t0}, which is Ω(√t).
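The last estimate can be spelled out as follows (our arithmetic): since t0 ≤ 2j², the work of P is at least
\[
\max\{\,j,\; t-2j^{2}\,\}\;\ge\;\tfrac{1}{2}\sqrt{t},
\]
because either j ≥ √t/2, or 2j² < t/2 and then t - 2j² > t/2 ≥ √t/2 (for t ≥ 1). Summed over the p stations, this yields the Ω(p√t) term of the bound.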
The amount of work asymptotically equal to t + p√t is called
the minimum work. Lemma 1
shows that the minimum work is an
absolute lower bound on the amount of work performed by
a reliable algorithm in any scenario. The minimum work is
a yardstick we will use in the following sections to measure
the performance of algorithms.
3. CHANNEL WITHOUT COLLISION DETECTION
We develop a deterministic algorithm TwoLists. The algorithm
avoids conflicts between broadcasting stations: at
each time-step there is at most one station scheduled to
broadcast. A broadcast message may consist of just a single
bit, since its only purpose is to conrm that the station
is still alive. A station does not need to announce which
tasks it performed, since all the stations can compute this
themselves.
The stations have the same global knowledge implied by
what was broadcast on the channel. Each station maintains
two circular lists: TASKS and STATIONS. The items in TASKS
are the tasks still not announced on the channel as per-
formed, and the items in STATIONS are the stations which
either made a broadcast each time they were to do so or were
not scheduled to broadcast yet at all. Since these lists are
dened by what happened on the channel, they are exactly
the same in every station.
There is a pointer associated with each of the lists. Let
Station denote the station pointed on the list STATIONS
and Task be the task pointed on the list TASKS. The pointers
provide reference points to assign tasks to stations: Task is
assigned to Station, then the next task on TASKS is assigned
to the next station on STATIONS, and so on in the circular
order. Notice that a task can be assigned to a number of
stations if the list STATIONS winds around the list TASKS
during the assignment process.
(1) Each station performs:
a) its assigned task
b) the first task not performed by it yet located after Task
(2) Counter is decremented by one; if Counter ≤ 0 then the stations halt
(3) If |TASKS| < Shift then Shift := ⌈Shift/2⌉
(4) Station performs a broadcast
(5) If a broadcast is heard then
a) the tasks performed by Station are removed from TASKS
b) if TASKS is empty then the stations halt
c) the pointer on STATIONS is advanced by one position
else Station is removed from STATIONS
(6) The pointer on TASKS is advanced by ⌈√Shift⌉ positions
Figure 1: Algorithm TwoLists: a phase.
After a scheduled broadcast one of the lists is updated as
follows. If a successful broadcast happened then the tasks
which have been performed by the broadcasting station are
removed from TASKS. If a broadcast did not happen then the
station which failed to broadcast is removed from the list
STATIONS. If an item pointed at by the pointer is removed
from a list then the pointer is assigned automatically to
the next item on the list. Other then that the pointers
are updated as follows: pointer Task is moved by d
Shifte
positions, and pointer Station is always advanced by one
position.
The variable Shift is initialized to ⌈t/2⌉. The lists TASKS
and STATIONS are initialized to contain all the tasks and
stations, respectively, ordered by their identifiers.
A phase of the algorithm is given in Figure 1.
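To make the bookkeeping of Figure 1 concrete, here is a minimal, centralized Python simulation of one phase (our sketch, not the paper's code); step (1b), the Counter bookkeeping and the halting tests are omitted, and a crash is modelled simply by the scheduled station staying silent.

from math import ceil, sqrt

def two_lists_phase(tasks, stations, t_ptr, s_ptr, shift, done, crashed):
    # tasks, stations: the circular lists TASKS and STATIONS as Python lists
    # t_ptr, s_ptr: current positions of Task and Station on those lists
    # done[s]: set of tasks already performed by station s; crashed: crashed stations
    for k, s in enumerate(stations):                      # step (1a): assigned tasks
        if s not in crashed:
            offset = (k - s_ptr) % len(stations)
            done[s].add(tasks[(t_ptr + offset) % len(tasks)])
    if len(tasks) < shift:                                # step (3)
        shift = ceil(shift / 2)
    broadcaster = stations[s_ptr]                         # step (4)
    if broadcaster not in crashed:                        # step (5): broadcast heard
        tasks = [x for x in tasks if x not in done[broadcaster]]
        s_ptr = (s_ptr + 1) % len(stations)
    else:                                                 # silence: drop the station
        stations = [s for s in stations if s != broadcaster]
        if stations:
            s_ptr %= len(stations)
    if tasks:                                             # step (6)
        t_ptr = (t_ptr + ceil(sqrt(shift))) % len(tasks)
    return tasks, stations, t_ptr, s_ptr, shift

The initial call would use shift = ceil(t/2), t_ptr = s_ptr = 0, and done = {s: set() for s in stations}.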
Lemma 2. Algorithm TwoLists is reliable.
The next theorem shows how the performance of TwoLists
degrades gracefully with the number of faults.
Theorem 1. Algorithm TwoLists performs work O(t + p√t + p · min{f, t})
against the adversary f-Bounded, for 0 ≤ f < p.
Proof. For the purpose of the argument, the time of
computation is partitioned conceptually into rounds: during
a round the value of Shift is constant. If no broadcast is
heard at a step then both the work performed by all the
stations at this step and the work performed by the station
that was to broadcast (since its last successful broadcast) is
at most 2p. There are O(min{f, t}) such rounds, because a
station performs a new task in each phase, unless failed.
Consider the work performed by broadcasting stations during
a round. The work performed while |STATIONS|² ≤
Shift is estimated as follows. Each task performed and
reported was performed only once in that way, and this can
be charged to O(t).
Consider next the work performed while |STATIONS|² > Shift.
A segment of consecutive items on STATIONS of size ⌈√Shift⌉,
starting from Station, perform different tasks in a step.
Each of these stations is eventually heard, if not failed, so
this work can be charged to O(p√Shift). The values of
Shift are a geometrically decreasing sequence, hence summing
over rounds gives the estimate O(p√t).
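The geometric summation in the last step can be made explicit (our arithmetic; ceilings are ignored since they contribute only lower-order terms): with Shift taking the values t/2, t/4, t/8, . . . , the charge is at most
\[
\sum_{i\ge 1} p\,\sqrt{t/2^{\,i}} \;=\; p\sqrt{t}\,\sum_{i\ge 1} 2^{-i/2} \;=\; \frac{p\sqrt{t}}{\sqrt{2}-1} \;=\; O\!\bigl(p\sqrt{t}\bigr).
\]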
Next we show a matching lower bound.
Theorem 2. The adversary f-Bounded, for 0 ≤ f < p,
can force any reliable randomized algorithm for the channel
without collision detection to perform
work Ω(t + p√t + p · min{f, t}).
Proof. We consider two cases, depending on which term
dominates the bound. If it is Ω(t + p√t) then the bound follows
from Lemma 1. Consider the case
when Ω(p · min{f, t})
is the bound. Let g = min{f, t}.
Let E1 be an execution obtained by running the algorithm
and killing any station that wants to broadcast successfully
during the first g/4 steps. Denote as A the set of stations
failed in E1 . The definition of E1 is consistent with the
power of the adversary f-Bounded, since |A| ≤ g/4 ≤ f .
We claim that no station halts by step g/4 in the execution E1 .
Suppose, on the contrary, that some station P halts before
step g/4 in E1 . We show that the algorithm is not reliable.
To this end we consider another execution E2 , which can
be realized by the adversary Unbounded. Let τ
be a task
(1) Each station performs:
a) the task assigned to its group
b) the first task after Task not performed by it yet
(2) Counter is decremented by one; if Counter ≤ 0 then the stations halt
(3) If |TASKS| < Shift then Shift := ⌈Shift/2⌉
(4) Each station in Group performs a broadcast
(5) If a broadcast was heard, including a collision, then
a) the tasks performed by Group are removed from TASKS
b) if TASKS is empty then the stations halt
c) the pointer on GROUPS is advanced by one position
else Group is removed from GROUPS
(6) The pointer on TASKS is advanced by ⌈√Shift⌉ positions
Figure 2: Algorithm GroupsTogether: a phase.
which is performed in E1 by at most pg/(4(t - g/4))
stations except P during the first g/4 steps. It exists
because g ≤ t. Let B be this set of stations; we have |B| ≤
pg/(3t) ≤ p/3. We define operationally a set C of stations as
follows. Initially C equals A ∪ B; notice |A ∪ B| ≤ 7p/12. If
there is any station that wants to broadcast during the first
g/4 steps in E1 as the only station not in the current C then
it is added to C. At most one station is added to C for each
among the first g/4 ≤ p/4 steps of E1 , so |C| ≤ 10p/12 < p.
Let an execution E2 be obtained by failing all the stations
in C at the start and then running the algorithm. The
definition of E2 is consistent with the power of the adversary
Unbounded. There is no broadcast heard in E2 during the
first g/4 steps. Therefore each station operational in E2
behaves in exactly the same way in both E1 and E2 during
the first g/4 steps. Task τ
is not performed in execution E2
by step g/4, because the stations in B have been failed and
the remaining ones behave as in E1 .
Station P is not failed during E2 hence it does the same
both in E1 and in E2 . Consider an execution E3 : it is like
E2 till step g/4, then all the stations except P are failed. The
definition of E3 is consistent with the power of the adversary
Unbounded. Station P is operational but halted and task τ
is still outstanding in E3 at step g/4. We conclude that
the algorithm is not reliable. This contradiction completes
the proof of the claim.
Hence there are at least p - g/4 ≥ 3p/4
stations still operational
and non-halted in step g/4 in E1 , and they have
generated
work Ω(pg) by then.
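The set sizes used in the construction above can be checked as follows (our arithmetic, using g = min{f, t} ≤ p):
\[
|A|\le\frac{g}{4}\le\frac{p}{4},\qquad
|B|\le\frac{pg}{4\,(t-g/4)}\le\frac{pg}{3t}\le\frac{p}{3},\qquad
|C|\le |A\cup B|+\frac{g}{4}\le\frac{p}{4}+\frac{p}{3}+\frac{p}{4}=\frac{10p}{12}<p .
\]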
Corollary 1. Algorithm TwoLists is asymptotically op-
timal, among reliable randomized algorithms for the channel
without collision detection, against the f-Bounded adversary.
This shows that randomization does not help against the
strongest possible size-bounded adversaries for the channel
without collision detection. In Section 5 we show that randomization
can make a difference for this channel in weaker
adversarial models.
4. CHANNEL WITH COLLISION DETECTION
We develop a deterministic algorithm GroupsTogether.
The algorithm is a suitable modification of TwoLists. The
main difference is that while TwoLists avoids conflicts by
its design, the algorithm in this section uses conflicts aggressively
as a hedge against faults. Algorithm GroupsTogether
is in phases repeated in a loop; a phase is given in
Figure 2.
The algorithm maintains two lists. List TASKS is the same
as in Section 3. The stations are partitioned into groups,
which are maintained as a circular list GROUPS. There is a
pointer which points to Group. The stations in the same
group always perform the same tasks and broadcast simul-
taneously. The tasks are assigned to groups as follows: Task
is assigned to Group, then the next task is assigned to the
next group, and so on in the circular orders on both lists.
The way the lists are updated after a scheduled broadcast
depends on the fact if a broadcast happened, which is detected
by receiving either a message or a signal of collision.
When a broadcast happens then the tasks performed by the
group just heard on the channel are removed from TASKS.
If a broadcast did not happen then Group is removed from
GROUPS. If an item which is pointed at by the associated
pointer is removed from a list then the pointer is automatically
advanced to the following item. Other then that the
pointers are updated as follows: pointer Task is moved by
d
Shifte positions, and pointer Group is always advanced
by one position.
The stations are partitioned initially into min{⌈√t⌉, p}
groups; this partitioning is never changed in the course
(1) If STATIONS is empty then the control is returned to the algorithm
(2) Each station in STATIONS tosses a coin with the probability 1/|STATIONS| of heads to come up
(3) A station that obtained heads broadcasts its ID on the channel
(4) If number i was heard then station P_i is removed from STATIONS and
appended to SELECTED
Figure 3: Mixing: a phase.
of the algorithm. The lists TASKS and GROUPS are initialized
to contain all the tasks and groups, respectively, and are
ordered by the identifiers of items. The variable Shift is
initialized to ⌈t/2⌉. Variable Counter is initialized to t.
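The only place where a phase of GroupsTogether differs from a phase of TwoLists is the handling of the channel feedback: one heard message or a collision both confirm the scheduled group, and only complete silence removes it. A minimal Python sketch of that step (ours, with simplified bookkeeping; groups are tuples of station ids) is:

def groups_together_broadcast_step(groups, g_ptr, tasks, done, crashed):
    # groups: the circular list GROUPS; done[group]: tasks performed by that group's
    # stations; crashed: the set of crashed stations
    group = groups[g_ptr]
    alive = [s for s in group if s not in crashed]
    if alive:                        # a message or a collision: either way the group is heard
        tasks = [x for x in tasks if x not in done[group]]
        g_ptr = (g_ptr + 1) % len(groups)
    else:                            # background noise only: the whole group has crashed
        groups = [g for g in groups if g != group]
        if groups:
            g_ptr %= len(groups)
    return groups, g_ptr, tasks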
Lemma 3. Algorithm GroupsTogether is reliable.
Theorem 3. Algorithm GroupsTogether performs only
the minimum work O(t + p√t)
against the adversary f-
Bounded, for 0 ≤ f < p.
Proof. Rounds are defined as in the proof of Theorem 1.
If no broadcast is heard at a step then the work performed
by all the stations at this step and the stations that were to
broadcast (since their last successful broadcast) is at most
2p. There are at most O(√t) such rounds because of the
total number of groups.
Consider the work performed by broadcasting groups during
a round. The work performed while |GROUPS|² ≤ |TASKS|
is estimated as follows. Each task performed and then reported
was performed by only one group in that way. If
groups contain O(1) stations then this can be charged to
O(t), otherwise to O(p√t).
Consider next the work performed while |GROUPS|² > |TASKS|.
The number stored in the variable ⌈√Shift⌉ has the property
that these many groups in list GROUPS perform distinct
tasks in a step. Each of these groups was heard, if not
failed, so this work can be charged to O(p√Shift). The
values of Shift make a geometrically decreasing sequence,
hence summing over rounds gives the estimate O(p√t).
The fact that the algorithm GroupsTogether needs to
perform only the minimum amount of work has the following
two consequences:
Corollary 2. Algorithm GroupsTogether is asymptotically
optimal, among reliable randomized algorithms for
the channel with collision detection, against the strongest
size-bounded adversaries.
Corollary 3. Algorithm GroupsTogether cannot be
beaten in asymptotic work performance by any randomized
reliable algorithm in any weaker adversarial model in which
not all stations can be failed.
5. RANDOMIZED SOLUTIONS
We have shown that randomization does not help against
strongly-adaptive size-bounded adversaries. Corollary 3 implies
that this is also the case for the channel with collision
detection against weaker adversaries. In this section we
show that, as far as the channel without collision detection is
concerned, the power of a size-bounded adversary does matter
if we compare the optimal performance of deterministic
algorithms versus randomized ones.
We develop a randomized algorithm MixThenWork. It selects
a sufficiently long random set of stations, which then
run a suitably modified algorithm TwoLists. The algorithm
uses the same lists and pointers as TwoLists. Additionally,
there is a cyclic list SELECTED of stations, also
with a pointer. The process of random selection of stations
is performed by procedure Mixing. It iterates phases in a
loop, as described in Figure 3.
Next we describe procedure AltTwoLists. It operates by
having the stations on list SELECTED run TwoLists, with
SELECTED playing the role of STATIONS. The remaining stations
in STATIONS, if any, do not make any attempts to
broadcast then. Instead, they listen to the channel to learn
about tasks performed by other stations and keep performing
consecutive tasks still in TASKS. A task is removed from
TASKS by a station only if it was performed by a station that
managed to broadcast on the channel, otherwise it is just
marked as done. This allows to maintain the same items on
the instances of the list TASKS by all the stations. A station
halts when it has all the tasks on its copy of TASKS marked
as done. Procedure AltTwoLists returns control, and a
new iteration starts, as soon as all the stations in SELECTED
fail to broadcast in their assigned time slots. The whole
algorithm MixThenWork is described in Figure 4.
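A sketch (ours, not the paper's code) of one phase of Mixing as in Figure 3: every station still in STATIONS tosses a coin with success probability 1/|STATIONS|, and a station is moved to SELECTED only when it was the unique one to broadcast; silence and collisions leave the lists unchanged.

import random

def mixing_phase(stations, selected, crashed):
    if not stations:
        return stations, selected
    heads = [s for s in stations
             if s not in crashed and random.random() < 1.0 / len(stations)]
    if len(heads) == 1:                  # exactly one broadcast: its ID is heard
        winner = heads[0]
        stations = [s for s in stations if s != winner]
        selected = selected + [winner]
    return stations, selected            # silence or collision: nothing changes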
Lemma 4. Algorithm MixThenWork is reliable.
Theorem 4. The expected work performed by the algorithm
MixThenWork against the adversary
Weakly-Adaptive Linearly-Bounded equals the minimum
work O(t + p√t).
Proof. If p² ≤ t then all the stations are in the list
SELECTED from the start and all stations execute the TwoLists
algorithm, performing work O(t).
Iterate in a loop:
(1) If |TASKS| < |STATIONS|² then
⌈√|TASKS|⌉ phases of Mixing are performed,
else all the stations are moved to SELECTED
(2) Procedure AltTwoLists is called
Figure 4: Algorithm MixThenWork.
Consider the case p² > t. A new station is added to the list
SELECTED in a phase of Mixing with some constant prob-
ability. The number of phases of the first call of Mixing
is O(√t). It follows from the definition of the adversarial
model and by the Chernoff bound that there
are Ω(√t)
stations in SELECTED that are not failure-prone, after this
first call, with the probability at least 1 - e^(-a√t), for some
a > 0. If this event holds then there is just one iteration
of the loop in the algorithm, and the bound O(p√t) on the
work follows. Only a constant fraction of the Ω(√t)
stations executing
algorithm TwoLists may fail with the probability
at least 1 - e^(-a√t). Hence the work done by these Ω(√t) stations
is O(t) by Theorem 1. In the meantime, as many as O(p)
other stations listening to the channel perform work, which
is O(p/√t) times larger than work performed by the selected
stations. Hence the total work is O(t + p√t)
with the respective large probability.
Otherwise the work is O(pt). The total expected work is
thus of order
(t + p√t)(1 - e^(-a√t)) + p t e^(-a√t),
which is within the claimed bound.
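The final combination of the two cases can be written out as follows (our arithmetic, with a > 0 the constant from the Chernoff-type estimate):
\[
\bigl(t+p\sqrt{t}\bigr)\Bigl(1-e^{-a\sqrt{t}}\Bigr) \;+\; p\,t\,e^{-a\sqrt{t}} \;=\; O\!\bigl(t+p\sqrt{t}\bigr),
\]
since p t e^{-a√t} = p√t · (√t e^{-a√t}) and √t e^{-a√t} is bounded by a constant depending only on a.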
Is there an algorithm that needs to perform only the expected
minimum work against such weakly-adaptive adversaries
who could fail asymptotically more than a constant
fraction of the stations? Corollary 4 answers this question
in the negative for certain values of p, f and t. We show the
following precise lower bound:
Theorem 5. The Weakly-Adaptive f-Bounded adversary
can force any reliable randomized algorithm for the
channel without collision detection to perform the expected
work
Proof.
Part
Ω(t + p√t) follows from Lemma 1. We show
the remaining one. Let the number g ≤ f be a parameter; we
will set its value later in the proof. Consider the following
experiment: the algorithm is run for g steps, and each
station which wants to broadcast successfully is failed just
before it is to do so. Additionally, at step g, a sufficient
number of the remaining stations, say, those with the smallest
identifiers, is failed to make the total number of failures
by step g equal to exactly g. We define a probabilistic space
in which the elementary events are the sets of IDs of stations
corresponding to sets of size g which can be failed in
the experiment. Let F denote the family of all such elementary
events. The probability Pr(ω) of ω ∈ F is defined to
be equal to the probability of an occurrence of an execution
during the experiment, in which exactly the stations with
IDs in ω are failed by step g.
The following equality holds
where we sum over subsets A [1::p] and elementary events
because each Pr(!) occurs p g
times on the left-hand
side. There are p
subsets A [1::p] with
. Hence there is some C [1::p], with
such that the probability that the IDs of stations failed in
the experiment are all outside C is at least
O(1), which is the case if g
Let the adversary declare exactly the stations not in C as
prone to failures. Suppose the algorithm is run for minfh; tg=4
steps and each station not in C which is to broadcast successfully
is failed just before it attempts to do so. Such an
execution is consistent with the power of the adversary. The
event that no message is heard during minfh; tg=4 steps has
the
probability
Simultaneously, the number of operational
stations is
Ω(p). None of these stations may halt by
step minfh; tg=4, by an argument similar to that used in the
proof of Theorem 2: the algorithm would prove not to be
reliable because this step occurs earlier than minff; tg=4.
The expected work in such an execution is thus of order
p · min{h, t},
which completes the
proof.
Corollary 4. If
then the adversary
Weakly-Adaptive f-Bounded can force any algorithm
for the channel without collision detection to perform
6. CONCLUSIONS
We study solutions to the Do-All problem in the context
of synchronous broadcast networks. The questions we ask
are as follows: What is the impact of the availability of
collision detection? Does randomization help? How does the
efficiency of solutions depend on various adversarial models?
We show that all these parameters have an impact.
Most of the previous research on the multiple-access channel
has concerned the issues of stability of protocols handling
dynamically generated packets. There have been quite few
static algorithmic problems considered, in which the whole
input is provided to the attached stations in advance. The
broadcast channel is ubiquitous in local area networks, and
all its algorithmic aspects deserve a study, those concerning
static problems in particular. This paper attempts to
demonstrate that this is the case for the problem Do-All .
7.
--R
A theory of competitive analysis of distributed algorithms
Spreading rumors rapidly despite an adversary
Algorithms for the certified Write-All problem
Parallel algorithms with processor failures and delays
Randomized communication in radio networks
Performing tasks on synchronous restartable message-passing processors
Randomization helps to perform tasks on processors prone to failures
Performing work efficiently in the presence of faults
Lower bounds in distributed computing
Resolving message complexity of byzantine agreement and beyond
A perspective on multiaccess channels
Contention resolution with constant expected delay
A lower bound on the time needed in the worst case to resolve conflicts deterministically in multiple access channels
An algorithm for the asynchronous write-all problem based on process collision
Combining tentative and definite executions for very fast dependable parallel computing
On the complexity of certified write-all algorithms
Stochastic contention resolution with short delays
Optimal time randomized consensus - making resilient algorithms fast in practice
--TR
Correction to "An asymptotically nonadaptive algorithm for conflict resolution i
Data networks
Log-logarithmic selection resolution protocols in a multiple access channel
Efficient robust parallel computations
Combining tentative and definite executions for very fast dependable parallel computing
Optimal time randomized consensus - making resilient algorithms fast in practice
Efficient program transformations for resilient parallel computation via randomization (preliminary version)
Work-optimal asynchronous algorithms for shared memory parallel computers
On the complexity of certified write-all algorithms
Maximum finding on a multiple access broadcast network
Time-optimal message-efficient work performance in the presence of faults
A lower bound on the time needed in the worst case to resolve conflicts deterministically in multiple access channels
Parallel algorithms with processor failures and delays
Analysis of Backoff Protocols for Mulitiple AccessChannels
Computer networks (3rd ed.)
Algorithms for the Certified Write-All Problem
An $\Omega(D\log (N/D))$ Lower Bound for Broadcast in Radio Networks
Spreading rumors rapidly despite an adversary
Performing Work Efficiently in the Presence of Faults
Stochastic Contention Resolution With Short Delays
The wakeup problem in synchronous broadcast systems (extended abstract)
Contention resolution with constant expected delay
Towards practical deteministic write-all algorithms
Fault-Tolerant Parallel Computation
Randomization Helps to Perform Tasks on Processors Prone to Failures
Lower Bounds in Distributed Computing
Resolving message complexity of Byzantine Agreement and beyond
--CTR
Dariusz R. Kowalski , Alex A. Shvartsman, Performing work with asynchronous processors: message-delay-sensitive bounds, Proceedings of the twenty-second annual symposium on Principles of distributed computing, p.265-274, July 13-16, 2003, Boston, Massachusetts
Bogdan S. Chlebus , Dariusz R. Kowalski, Randomization helps to perform independent tasks reliably, Random Structures & Algorithms, v.24 n.1, p.11-41, January 2004
Dariusz R. Kowalski , Alex A. Shvartsman, Performing work with asynchronous processors: message-delay-sensitive bounds, Information and Computation, v.203 n.2, p.181-210, December 15, 2005
Bogdan S. Chlebus , Dariusz R. Kowalski , Mariusz A. Rokicki, Adversarial queuing on the multiple-access channel, Proceedings of the twenty-fifth annual ACM symposium on Principles of distributed computing, July 23-26, 2006, Denver, Colorado, USA | fail-stop failures;independent tasks;multiple-access channel;lower bound;adversary;distributed algorithm |
384010 | On scalable and efficient distributed failure detectors. | Process groups in distributed applications and services rely on failure detectors to detect process failures completely, and as quickly, accurately, and scalably as possible, even in the face of unreliable message deliveries. In this paper, we look at quantifying the optimal scalability, in terms of network load, (in messages per second, with messages having a size limit) of distributed, complete failure detectors as a function of application-specified requirements. These requirements are 1) quick failure detection by some non-faulty process, and 2) accuracy of failure detection. We assume a crash-recovery (non-Byzantine) failure model, and a network model that is probabilistically unreliable (w.r.t. message deliveries and process failures). First, we characterize, under certain independence assumptions, the optimum worst-case network load imposed by any failure detector that achieves an application's requirements. We then discuss why traditional heart beating schemes are inherently unscalable according to the optimal load. We also present a randomized, distributed, failure detector algorithm that imposes an equal expected load per group member. This protocol satisfies the application defined constraints of completeness and accuracy, and speed of detection on an average. It imposes a network load that differs frown the optimal by a sub-optimality factor that is much lower than that for traditional distributed heartbeating schemes. Moreover, this sub-optimality factor does not vary with group size (for large groups). | INTRODUCTION
Failure detectors are a central component in fault-tolerant
distributed systems based on process groups running over
unreliable, asynchronous networks eg., group membership
protocols [3], supercomputers, computer clusters [13], etc.
The ability of the failure detector to detect process failures
completely and e#ciently, in the presence of unreliable messaging
as well as arbitrary process crashes and recoveries,
can have a major impact on the performance of these sys-
tems. "Completeness" is the guarantee that the failure of
a group member is eventually detected by every non-faulty
group member. "E#ciency" means that failures are detected
quickly, as well as accurately (i.e., without too many mis-
takes).
The first work to address these properties of failure detectors
was by Chandra and Toueg [5]. The authors showed
why it is impossible for a failure detector algorithm to deterministically
achieve both completeness and accuracy over
an asynchronous unreliable network. This result has lead to
a flurry of theoretical research on other ways of classifying
failure detectors, but more importantly, has served as
a guide to designers of failure detector algorithms for real
systems. For example, most distributed applications have
opted to circumvent the impossibility result by relying on
failure detector algorithms that guarantee completeness deterministically
while achieving e#ciency only probabilistically
[1, 2, 4, 6, 7, 8, 14].
The recent emergence of applications for large scale distributed
systems has created a need for failure detector algorithms
that minimize the network load (in bytes per second,
or equivalently, messages per second with a limit on maximum
message size) used, as well as the load imposed on
participating processes [7, 14]. Failure detectors for such
settings thus seek to achieve good scalability in addition to
e#ciency, while still (deterministically) guaranteeing completeness
Recently, Chen et al. [6] proposed a comprehensive set of
metrics to measure the Quality of Service (QoS) of complete
and e#cient failure detectors. This paper presented three
primary metrics to quantify the performance of a failure detector
at one process detecting crash-recovery failures of a
single other process over an unreliable network. The authors
proposed failure detection time, and recurrence time and duration
times of mistaken detection as the primary metrics for
complete and e#cient failure detectors. However, the paper
neither deal with the optimal relation among these metrics,
nor focussed on distributed or scalable failure detectors.
In this paper, we first address the question of quantifying
the optimum worst-case network load (in messages per sec-
ond, with a limit on messages sizes) needed by a complete
distributed failure detector protocol to satisfy the e#ciency
requirements as specified by the application. We are concerned
with distributed failure detectors working in a group
of uniquely identifiable processes, which are subject to failures
and recoveries, and communicate over an unreliable net-
work. We deal with complete failure detectors that satisfy
application-defined e#ciency constraints of 1) (quickness)
detection of any group member failure by some non-faulty
member within a time bound, and 2) (accuracy) probability
(within this time bound) of no other non-faulty member
detecting a given non-faulty member as having failed.
The first (quickness) requirement merits further discussion.
Many systems, such as multi-domain server farm clusters [7,
13] and virtual synchrony implementations [3] rely on a single
or a few central computers to aggregate failure detection
information from across the system. These computers are
then responsible for disseminating that information across
the entire system. In such systems, efficient detection of
a failure depends on the time the failure is first detected
by a non-faulty member. Even in the absence of a central
server, notification of a failure is typically communicated,
by the first member to detect it, to the entire group via a
(possibly unreliable) broadcast [3]. Thus, although achieving
completeness is important, efficient detection of a failure
is more often related to the time to the first detection, by
another non-faulty member, of the failure.
We derive the optimal worst-case network load (in messages
per second, with a limit on maximum message size) imposed
on the network by a complete failure detector satisfying the
above application-defined constraints. We then discuss why
the traditional and popular distributed heartbeating failure
detection schemes (eg., [7, 14]) do not achieve these optimal
scalability limits. Finally, we present a randomized
distributed failure detector that can be configured to meet
the application-defined constraints of completeness and ac-
curacy, and expected speed of detection. With reasonable
assumptions on the network unreliability (member and message
failure rates of up to 15%), the worst-case network load
imposed by this protocol has a sub-optimality factor that is
much lower than that of traditional distributed heartbeat
schemes. This sub-optimality factor does not depend on
group size (in large groups), but only on the application-
specified e#ciency constraints and the network unreliability
probabilities. Furthermore, the average load imposed per
member is independent of the group size.
In arriving at these results, we will assume that message loss
and member failures can each be characterized by probabilistic
distributions, independent across messages and failures.
While the practicality of these assumptions in real networks
will probably be subject to criticism, these assumptions are
necessary in order to take this first step towards quantifying
and achieving scalable and e#cient failure detectors. Be-
sides, we believe that these independence assumptions are
partially justified because of 1) the randomized nature of the
new failure detector algorithm, and 2) the large temporal
separation between protocol periods, typically O(seconds)
in practice (mitigating much of the correlation among message
loss probability distributions).
The rest of the paper is organized as follows. Section 2
briefly summarizes previous work in this area. In Section 3,
we formally describe the process group model assumed in
this paper. Section 4 presents a discussion of how an application
can specify efficiency requirements to a failure de-
tector, and quantifies the optimal worst-case network load
a failure detector must impose, in order to meet these re-
quirements. Section 5 presents the new randomized failure
detector protocol. We conclude in section 6.
2. PREVIOUS WORK
Chandra and Toueg [5] were the first to formally address
the completeness and accuracy properties of failure detec-
tors. Subsequent work has focused on different properties
and classifications of failure detectors. This area of literature
has treated failure detectors as oracles used to solve the
Distributed Consensus/Agreement problem [9], which is unsolvable
in the general asynchronous network model. These
classifications of failure detectors are primarily based on the
weakness of the model required to implement them, in order
to solve the Distributed Consensus/Agreement problem
[11].
Proposals for implementable failure detectors have sometimes
assumed network models with weak unreliability semantics
eg., timed-asynchronous model [8], quasi-synchronous
model [2], partial synchrony model [12], etc. These proposals
have treated failure detectors only as a tool to efficiently
reach agreement, ignoring their efficiency from an application
designer's viewpoint. For example, most failure detectors
such as [12] provide eventual guarantees, while applications
are typically concerned about real timing constraints.
In most real-life distributed systems, the failure detection
service is implemented via variants of the "Heartbeat mech-
anism" [1, 2, 4, 6, 7, 8, 14], which have been popular as
they guarantee the completeness property. However, all existing
heartbeat approaches have shortcomings. Centralized
heartbeat schemes create hot-spots that prevent them from
scaling. Distributed heartbeat schemes o#er di#erent levels
of accuracy and scalability depending on the exact heart-beat
dissemination mechanism used, but we show that they
are inherently not as e#cient and scalable as claimed.
Probabilistic network models have been used to analyze heartbeat
failure detectors in [4, 6], but only with a single process
detecting failures of a single other process. [6] was the first
paper to propose metrics for non-distributed heartbeat failure
detectors in the crash-recovery model. These metrics
were not inclusive of scalability concerns.
Our work differs from all this prior work in that it is the
first to approach the design of failure detectors from a distributed
application developer's viewpoint. We quantify the
performance of a failure detector protocol as the network
load it requires to impose on the network, in order to satisfy
the application-defined constraints of completeness, and
quick and accurate detection 1 . We also present an efficient
and scalable distributed failure detector. The new failure
detector incurs a constant expected load per process, thus
1 We will state these application-defined requirements formally
in Section 4.
avoiding the hot-spot problem of centralized heartbeating
schemes.
3. MODEL
We consider a large group of n (≫ 1) members 2 . This set of
potential group members is fixed a priori. Group members
have unique identifiers. Each group member maintains a
list, called a view, containing the identities of all other group
members (faulty or otherwise). Our protocol specification
and analysis assumes that this maximal group membership
is always the same at all members, but our results can be
extended to a model with dynamically changing membership
and members with incomplete views, using methodologies
similar to [10].
Members may suffer crash (non-Byzantine) failures, and recover
subsequently. Unlike other papers on failure detectors
(eg., [14]) that consider a member as faulty if it is perturbed
and sleeps for a time greater than some pre-specified
duration, our notion of failure considers that a member is
faulty if and only if it has really crashed. Perturbations at
members that might lead to message losses are accounted for
in the message loss rate pml (which we will define shortly).
Whenever a member recovers from a failure, it does so into
a new incarnation that is distinguishable from all its earlier
incarnations. At each member, an integer in non-volatile
storage, that is incremented every time the member recov-
ers, suffices to serve as the member's incarnation number.
The members in our group model thus have crash-recovery
semantics with incarnation numbers distinguishing di#erent
failures and recoveries. When a member M i crashes (fails),
it does so in its current incarnation (say its l'th incarnation).
We say that such a failure is "detected" at exactly the first
instant of time that some other non-faulty member detects
either 1) failure of M i in incarnation greater than or equal
to l, or 2) recovery of M i in an incarnation strictly greater
than l.
We characterize the member failure probability by a parameter
pf : pf is the probability that a random group member
is faulty at a random time. Member crashes are assumed to
be independent across members.
We assume no synchronization of clocks across group mem-
bers. We only require that each individual member's clock
drift rate (from some fixed clock rate) remains constant.
Members communicate using unicast (point-to-point) messaging
on an asynchronous, fault-prone network. Since we
are interested in characterizing the network bandwidth uti-
lized, we will assume that maximal message sizes are a con-
stant, containing at most a few bytes of data (assuming a
bound on the size of message identifiers and headers, as is
typical in IP packets).
Each message sent out on the network fails to be delivered
at its recipient (due to network congestion, buffer overflow
at the sender or receiver due to member perturbations, etc.)
with probability pml ∈ (0, 1). The worst-case message prop-
2 All of which are either processes, or servers, or network
adaptors etc.
agation delay (from sender to receiver through the network)
for any delivered message is assumed to be so small compared
to the application-specified detection time (typically
O( several seconds )) that henceforth, for all practical pur-
poses, we can assume that each message is either delivered
immediately at the recipient with probability (1 - pml ), or
never reaches the recipient. 3
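As a purely illustrative rendering of this delivery model, the short Python sketch below simulates the assumed Bernoulli behavior; the function name deliver and the use of Python's random module are our own choices, not part of the paper.

import random

def deliver(p_ml: float, rng: random.Random) -> bool:
    # A message is lost with probability p_ml; otherwise it is treated as
    # delivered immediately (propagation delay is negligible relative to T).
    return rng.random() >= p_ml

rng = random.Random(1)
lost = sum(1 for _ in range(100000) if not deliver(0.10, rng))
print(lost / 100000.0)   # close to the configured p_ml = 0.10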
This message loss distribution is also assumed to be independent
across messages. Message delivery losses could, in fact,
be correlated in such a network. However, if application-
specified failure detection times are much larger than message
propagation and congestion repair times in the network,
messages exchanged by the failure detector will have considerable
temporal separation. This reduces the correlation
among the loss distributions of di#erent messages. Randomized
selection of message destinations in the new failure
detector also weakens such message loss correlation.
In the rest of the paper, we use the shorthands q f and qml
instead of (1 - pf ) and (1 - pml ) respectively.
4. SCALABLE AND EFFICIENT FAILURE DETECTORS
The first formal characterization of the properties of failure
detectors was offered in [5], which laid down the following
properties for distributed failure detectors in process groups:
- {Strong/Weak} Completeness: crash-failure of any
group member is detected by {all/some} non-faulty
members 4 ,
- Strong Accuracy: no non-faulty group member 5 is
declared as failed by any other non-faulty group member.
[5] also showed that a perfect failure detector i.e., one which
satisfies both Strong Completeness and Strong Accuracy, is
sufficient to solve distributed Consensus, but is impossible
to implement in a fault-prone network.
Subsequent work on designing e#cient failure detectors has
attempted to trade off the Completeness and Accuracy properties
in several ways. However, the completeness properties
required by most distributed applications have led to
the popular use of failure detectors that guarantee Strong
Completeness always, even if eventually [1, 2, 4, 5, 6, 7,
8, 14]. This of course means that such failure detectors
cannot guarantee Strong Accuracy always, but only with a
probability less than 1. For example, all-to-all (distributed)
3 This assumption is made for simplicity. In fact, the optimality
results of section 4 hold if pml is assumed to be the
probability of message delivery within T time units after its
send. The randomized protocol of section 5 and its analysis
can be extended to hold if pml is the probability of message
delivery within a sixth of the protocol period.
4 Recollect that in our model, since members recover with
unique incarnations, detection of a member's failure or recovery
also implies detection of failure of all its previous
incarnations.
5 in its current incarnation
heartbeating schemes have been popular because they guarantee
Strong Completeness (since a faulty member will stop
sending heartbeats), while providing varying degrees of accuracy.
We have explained in Section 1 why in many distributed
applications, although the failure of a group member must
eventually be known to all non-faulty members, it is important
to have the failure detected quickly by some non-faulty
member (and not necessarily all non-faulty members). In
other words, the quickness of failure detectors depends on
the time from a member failure to Weak Completeness with
respect to that failure, although Strong Completeness is a
necessary property.
The requirements imposed by an application (or its designer)
on a failure detector protocol can thus be formally specified
and parameterized as follows:
1. Completeness: satisfy eventual Strong Completeness
for member failures.
2. Efficiency:
(a) Speed: every member failure is detected by some
non-faulty group member within T time units after
its occurrence (T # worst-case message round
time).
(b) Accuracy: at any time instant, for every non-faulty
member M i not yet detected as failed, the
probability that no other non-faulty group member
will (mistakenly) detect M i as faulty within
the next T time units is at least (1 - PM(T )).
T and PM(T ) are thus parameters specified by the application
(or its designer). For example, an application designer
might specify particular numerical values for T and PM(T ).
To measure the scalability of a failure detector algorithm, we
use the worst-case network load it imposes - this is denoted
as L. Since several messages may be transmitted simultaneously
even from one group member, we define:
Definition 1. The worst-case network load L of a failure
detector protocol is the maximum number of messages transmitted
by any run of the protocol within any time interval
of length T , divided by T .
We also require that the failure detector impose a uniform
expected send and receive load at each member due to this
traffic.
The goal of a near-optimal failure detector algorithm is thus
to satisfy the above requirements (Completeness, Effi-
ciency) while guaranteeing:
. Scale: the worst-case network load L imposed by the
algorithm is close to the optimal possible, with equal
expected load per member.
That brings us to the question - what is the optimal worst-case
network load, call it L*, that is needed to satisfy the
above application-defined requirements - Completeness,
Speed (T ), Accuracy (PM(T ))? We are able to answer
this question in the network model discussed earlier
when the group size n is very large (≫ 1), and PM(T ) is
very small (≪ pml ).
Theorem 1. Any distributed failure detector algorithm
for a group of size n (≫ 1) that deterministically satisfies the
Completeness, Speed, Accuracy requirements above, for
given values of T and PM(T ) (≪ pml ), imposes a minimal
worst-case network load (messages per time unit, as defined
above) of:
L* = n · log(PM(T )) / (log(pml ) · T ) .
Furthermore, there is a failure detector that achieves this
minimal worst-case bound while satisfying the Complete-
ness, Speed, Accuracy requirements.
L* is thus the optimal worst-case network load required to
satisfy the Completeness, Speed, Accuracy requirements.
Proof. We prove the first part of the theorem by showing
that each non-faulty group member could transmit up to
⌈ log(PM(T )) / log(pml ) ⌉
messages in a time interval of length T .
Consider a group member M i at a random point in time
t. Let M i not be detected as failed yet by any other group
member, and stay non-faulty until at least time t + T . Let
m be the maximum number of messages sent by M i , in the
time interval [t, t + T ], in any possible run of the failure
detector protocol starting from time t.
Now, at time t, the event that "all messages sent by M i in
the time interval [t, t+T ] are lost" happens with probability
at least pml^m . Occurrence of this event entails that it is indistinguishable
to the set of the rest of the non-faulty group
members (i.e., members other than M i ) as to whether M i is
faulty or not. By the Speed requirement, this event would
then imply that M i is detected as failed by some non-faulty
group member between t and t + T .
Thus, the probability that at time t, a given non-faulty member
M i that is not yet detected as faulty, is detected as failed
by some other non-faulty group member within the next T
time units, is at least pml^m . By the Accuracy requirement,
we have pml^m ≤ PM(T ), which implies that
m ≥ log(PM(T )) / log(pml ) .
A failure detector that satisfies the Completeness, Speed,
Accuracy requirements and meets the L # bound works as
follows. It uses a highly available, non-faulty server as a
group leader 6 . Every other group member sends ⌈ log(PM(T )) / log(pml ) ⌉
"I am alive" messages to this server every T time units. The
6 The set of central computers, that collect failure information
and disseminate it to the system, can be designated as
the server.
server declares a member as failed when it does not receive
any "I am alive" message from it for T time units 7 .
Corollary: The optimal bound of Theorem 1 applies to the
crash-stop model as well.
Proof: By exactly the same arguments as in the proof of
Theorem 1. 2
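To make the bound of Theorem 1 easy to evaluate, here is a small Python sketch (ours; the function names and the example numbers are illustrative, not from the paper) that computes L* and the per-member message count used in the proof.

import math

def optimal_load(n: int, T: float, pm_T: float, p_ml: float) -> float:
    # L* = n * log(PM(T)) / (log(p_ml) * T), in messages per time unit.
    return n * math.log(pm_T) / (math.log(p_ml) * T)

def msgs_per_member(pm_T: float, p_ml: float) -> int:
    # ceil(log(PM(T)) / log(p_ml)): "I am alive" messages per member per T
    # in the centralized scheme sketched in the proof above.
    return math.ceil(math.log(pm_T) / math.log(p_ml))

print(msgs_per_member(1e-6, 0.05), optimal_load(1000, 10.0, 1e-6, 0.05))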
Definition 2. The sub-optimality factor of a failure detector
algorithm that imposes a worst-case network load
L, while satisfying the Completeness and Efficiency re-
quirements, is defined as L/L* .
In the traditional distributed Heartbeating failure detection
algorithms, every group member periodically transmits a
"heartbeat" message (with an incremented counter) to every
other group member. A member M i is declared as failed
by a non-faulty member M j when M j does not receive heartbeats
from M i for some consecutive heartbeat periods (this
duration being the detection time T ).
Distributed heartbeating schemes have been the most popular
implementation of failure detectors because they guarantee
Completeness - a failed member will not send any
more heartbeat messages. However, the accuracy and scalability
guarantees of heartbeating algorithms differ, depending
entirely on the actual mechanism used to disseminate
heartbeats.
In the simplest implementation, each member M i transmits
a few "I am alive" messages to each group member it knows
of, every T time units. The worst-case number of messages
transmitted by each member per unit time is Θ(n), and the
worst-case total network load L is Θ(n^2). The sub-optimality
factor (i.e., L/L*) thus varies
as Θ(n), for any values of pml , pf and PM(T ).
The Gossip-style failure detection service, proposed by van
Renesse et al. [14], uses a mechanism where every tgossip
time units, each member gossips a Θ(n)-sized list of the latest
heartbeat counters (for all group members) to a few other
randomly selected group members. The authors show that
under this scheme, a new heartbeat count typically takes
an average time of Θ[log(n) · tgossip ] to reach an arbitrary
other group member. The Speed requirement thus leads
us to choose tgossip = Θ[ T / log(n) ]. The worst-case network
load imposed by the Gossip-style heartbeat scheme is thus
Θ[ n^2 · log(n) / T ]. The sub-optimality factor varies as
Θ[n · log(n)], for any values of pml , pf and PM(T ).
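The scaling difference can be seen numerically; the sketch below (ours, with all asymptotic constants deliberately set to 1, purely for illustration) contrasts how the sub-optimality factors discussed above grow with n.

import math

def suboptimality_factor(scheme: str, n: int) -> float:
    # Orders of growth only; constants are omitted.
    if scheme == "all-to-all":   # L = Theta(n^2)          -> L/L* = Theta(n)
        return float(n)
    if scheme == "gossip":       # L = Theta(n^2 log n / T) -> L/L* = Theta(n log n)
        return n * math.log(n)
    if scheme == "randomized":   # Section 5: independent of n
        return 1.0
    raise ValueError(scheme)

for n in (100, 1000, 10000):
    print(n, [round(suboptimality_factor(s, n))
              for s in ("all-to-all", "gossip", "randomized")])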
In fact, distributed heartbeating schemes do not meet the
optimality bound of Theorem 1 because they inherently attempt
to communicate a failure notification to all group
members. As we have seen above, this is an overkill for
systems that can rely on a centralized coordinated set of
7 This implementation, which is essentially a centralized
heartbeat mechanism, is undesirable as it requires a highly
available server and has bad load balancing (does not satisfy
the Scale property).
servers to disseminate failure information. These systems
require only some other non-faulty member to detect a given
failure.
Other heartbeating schemes, such as Centralized heartbeating
(as discussed in the proof of Theorem 1) and heartbeating
along a logical ring of group members [7], can be configured
to meet the optimal load L # , but have problems such
as creating hot-spots (centralized heartbeating) or unpredictable
failure detection times in the presence of multiple
simultaneous faults at larger group sizes (heartbeating in a
ring).
5. A RANDOMIZED DISTRIBUTED FAILURE DETECTOR
In the preceding sections, we have characterized the optimal
worst-case load imposed by a distributed failure detector
that satisfies the Completeness, Speed and Accuracy
requirements, for application specified values of T and
PM(T ) (Theorem 1). We have then studied why traditional
heartbeating schemes are inherently not scalable.
In this section, we relax the Speed condition to detect a failure
within an expected (rather than exact, as before) time
bound of T time units after the failure. We then present a
randomized distributed failure detector algorithm that guarantees
Completeness with probability 1, detection of any
member failure within an expected time T from the failure,
and an Accuracy probability of (1 -PM(T )). The protocol
imposes an equal expected load per group member, and
a worst-case (and average case) network load L that differs
from the optimal L* of Theorem 1 by a sub-optimality factor
(i.e., L/L*)
that is independent of group size n (≫ 1). In such
large groups, at reasonable values of member and message
delivery failure rates pf and pml , this sub-optimality factor
is much lower than the sub-optimality factors of the traditional
distributed heartbeating schemes discussed in the
previous section.
5.1 New Failure Detector Algorithm
The failure detector algorithm uses two parameters: protocol
period T' (in time units) and integer k, which is the size
of failure detection subgroups. We will show how the values
of these parameters can be configured from the required values
of T and PM(T ), and the network parameters p f , pml .
Parameters T' and k are assumed to be known a priori at
all group members. Note that this does not need clocks
to be synchronized across members, but only requires each
member to have a steady clock rate to be able to measure time intervals of length T' .
The algorithm is formally described in Figure 1. At each
non-faulty member M i , steps (1-3) are executed once every
T' time units (which we call a protocol period), while steps (4-6)
are executed whenever necessary. The data contained
in each message is shown in parentheses after the message. If
sequence numbers are allowed to wrap around, the maximal
message size is bounded from above.
Figure 2 illustrates the protocol steps initiated by a member
during one protocol period of length T' time units. At
the start of this protocol period at M i , a random member
Integer pr; /* Local period number */

Every T' time units at M i :
1. Select random member M j from view
   Send a ping(M i , M j , pr) message to M j
   Wait for the worst-case message round-trip time for
   an ack(M i , M j , pr) message
2. If have not received an ack(M i , M j , pr) message yet
   Select k members randomly from view
   Send each of them a ping-req(M i , M j , pr) message
   Wait for an ack(M i , M j , pr) message until
   the end of period pr
3. If have not received an ack(M i , M j , pr) message yet
   Declare M j as failed

Anytime at M i :
4. On receipt of a ping-req(Mm , M j , pr) message from member Mm
   Send a ping(M i , M j , Mm , pr) message to M j
   On receipt of an ack(M i , M j , Mm , pr) message from M j
   Send an ack(Mm , M j , pr) message to Mm

Anytime at M i :
5. On receipt of a ping(Mm , M i , M l , pr) message from
   member Mm
   Reply with an ack(Mm , M i , M l , pr) message to Mm

Anytime at M i :
6. On receipt of a ping(Mm , M i , pr) message from member Mm
   Reply with an ack(Mm , M i , pr) message to Mm

Figure 1: Protocol steps at a group member M i .
Data in each message is shown in parentheses after
the message. Each message also contains the current
incarnation number of the sender.
is selected, in this case M j , and a ping message sent to it.
If M i does not receive a replying ack from M j within some
time-out (determined by the message round-trip time, which
is ≪ T ), it selects k members at random and sends to each
a ping-req message. Each of the non-faulty members among
these k which receives the ping-req message subsequently
pings M j and forwards the ack received from M j , if any, back
to M i . In the example of Figure 2, one of the k members
manages to complete this cycle of events as M j is up, and
does not suspect M j as faulty at the end of this protocol
period.
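The following Python sketch (ours) captures the control flow of one protocol period at M i , i.e., steps (1-3) of Figure 1. The blocking helpers send_ping and send_ping_req are assumptions of the sketch: each returns True iff an ack(M i , M j , pr) comes back within the relevant timeout; a real implementation would issue the k ping-req messages in parallel rather than sequentially.

import random

def protocol_period(view, k, send_ping, send_ping_req):
    mj = random.choice(view)                          # step 1: random ping target
    if send_ping(mj):                                 # direct ack received in time
        return None
    helpers = random.sample(view, min(k, len(view)))  # step 2: k random members
    if any(send_ping_req(mw, mj) for mw in helpers):  # some indirect ack relayed back
        return None
    return mj                                         # step 3: declare mj as failed

# Toy run over a lossy channel: a direct ping needs 2 message successes,
# an indirect ping-req path needs 4, plus a non-faulty intermediary.
q_ml, q_f = 0.9, 0.9
view = ["M%d" % i for i in range(50)]
print(protocol_period(view, 3,
                      send_ping=lambda mj: random.random() < q_ml ** 2,
                      send_ping_req=lambda mw, mj: random.random() < q_f * q_ml ** 4))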
In the above protocol, member M i uses a randomly selected
subgroup of k members to out-source ping-req messages,
rather than sending out k repeat ping messages to the target
M j . The effect of using the randomly selected subgroup is
to distribute the decision on failure detection across a sub-group
of members. Although we do not analyze it
in this paper, it can be shown that the new protocol's properties
are preserved even in the presence of some degree of
variation of message delivery loss probabilities across group
members. Sending k repeat ping messages may not satisfy
this property. Our analysis in Section 5.2 shows that the
cost (in terms of sub-optimality factor of network load) of
using a k-sized subgroup is not too significant.
5.2 Analysis
In this section, we calculate, for the above protocol, the
expected detection time of a member failure, as well as
the probability of an inaccurate detection of a non-faulty
Figure 2: Example protocol period at M i . This
shows all the possible messages that a protocol period
may initiate (choose a random member M j and ping it; on timeout,
choose k random members and send each a ping-req; acks are relayed back).
Some message contents excluded for simplicity.
member by some other (at least one) non-faulty member.
This will lead to calculation of the values of T' and k, for
the above protocol, as a function of parameters specifying
application-specified requirements and network unreliabil-
ity, i.e., T , PM(T ), pf , pml .
For any group member M j , faulty or otherwise,
Pr [at least one non-faulty member chooses to
ping M j (directly) in a time interval T' ] ≃ 1 - e^(-qf)     (for n ≫ 1).
Thus, the expected time between a failure of member M j
and its detection by some non-faulty member is
E[T ] = T' · e^qf / (e^qf - 1)     (1)
This gives us a configurable value for T' as a function of E[T ].
At any given time instant, a non-faulty member M j will be
detected as faulty by another non-faulty member M l within
the next T time units if M l chooses to ping M j within the
next T time units and does not receive any acks, directly
or indirectly from transitive ping-req's, from M j . Then,
PM(T ), the probability of inaccurate failure detection of
member M j within the next T time units, is simply the
probability that there is at least one such member M l in the
group.
A random group member M l is non-faulty with probability
qf , and the probability of such a member choosing to ping
M j within the next T time units is (1/n) · e^qf / (e^qf - 1). Given this, the
probability that such a M l receives back no acks, direct or
indirect, according to the protocol of section 5.1 equals
(1 - qml^2) · (1 - qf · qml^4)^k . Therefore,
PM(T ) ≃ qf · (e^qf / (e^qf - 1)) · (1 - qml^2) · (1 - qf · qml^4)^k     (for n ≫ 1).
This gives us
k = ⌈ log[ PM(T ) · (e^qf - 1) / (qf · e^qf · (1 - qml^2)) ] / log(1 - qf · qml^4) ⌉     (2)
Thus, the new randomized failure detector protocol can be
configured using equations (1) and (2) to satisfy the Speed
and Accuracy requirements with parameters E[T ], PM(T ).
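A sketch (ours) of how equations (1) and (2), as reconstructed above, might be turned into a configuration routine; the function name and the ceiling applied to k are our own choices.

import math

def configure(expected_T: float, pm_T: float, p_f: float, p_ml: float):
    q_f, q_ml = 1.0 - p_f, 1.0 - p_ml
    a = math.exp(q_f) / (math.exp(q_f) - 1.0)   # expected protocol periods per detection
    t_prime = expected_T / a                    # equation (1): E[T] = T' * a
    k = math.ceil(math.log(pm_T / (a * q_f * (1.0 - q_ml ** 2)))
                  / math.log(1.0 - q_f * q_ml ** 4))   # equation (2)
    return t_prime, k

print(configure(expected_T=10.0, pm_T=1e-6, p_f=0.15, p_ml=0.15))   # roughly (5.73, 23)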
Moreover, given a member M j that has failed (and stays
failed), every other non-faulty member M i will eventually
choose to ping M j in some protocol period, and discover M j
as having failed. Hence,
Theorem 2. This randomized failure detector protocol:
(a) satisfies eventual Strong Completeness, i.e., the Completeness
requirement,
(b) can be configured via equations (1) and (2) to meet the
requirements of (expected) Speed, and Accuracy, and
(c) has a uniform expected send/receive load at all group
members.
Proof. From the above discussion and equations (1),
(2).
Finally, we upper-bound the worst-case and expected network
load (L, E[L] respectively) imposed by this failure detector
protocol.
The worst-case network load occurs when, every T' time
units, each member initiates steps (1-6) in the algorithm
of Figure 1. Steps (1,6) involve at most 2 messages, while
steps (2-5) involve at most 4 messages per ping-req target
member. Therefore, the worst-case network load imposed
by this protocol (in messages/time unit) is
L ≤ n · (4k + 2) / T' .
Then, from Theorem 1 and equations (1),(2),
L/L* ≤ (e^qf / (e^qf - 1)) · [ 4 · ⌈ log[ PM(T ) · (e^qf - 1) / (qf · e^qf · (1 - qml^2)) ] / log(1 - qf · qml^4) ⌉ + 2 ] · log(pml ) / log(PM(T ))     (3)
L thus differs from the optimal L* by a factor that is independent
of the group size n. Furthermore, (3) can be written
as a linear function of 1/log(1/PM(T )) as:
L/L* ≤ g(pf , pml ) + f(pf , pml ) · [ 1/log(1/PM(T )) ]     (4a)
where g(pf , pml ) is:
g(pf , pml ) = 4 · (e^qf / (e^qf - 1)) · log(pml ) / log(1 - qf · qml^4)     (4b)
and f(pf , pml ) is:
f(pf , pml ) = (e^qf / (e^qf - 1)) · log(1/pml ) · [ 2 + 4 · log( (e^qf - 1) / (qf · e^qf · (1 - qml^2)) ) / log(1 - qf · qml^4) ]     (4c)
Theorem 3. The sub-optimality factor L/L* of the protocol
of Figure 1 is independent of group size n (≫ 1). Furthermore,
1. if f(pf , pml ) < 0,
(a) L/L* is monotonically increasing with -log(PM(T )), and
(b) as PM(T ) → 0, L/L* → g(pf , pml );
2. if f(pf , pml ) > 0,
(a) L/L* is monotonically decreasing with -log(PM(T )), and
(b) as PM(T ) → 0, L/L* → g(pf , pml ).
Proof. From equations (4a) through (4c).
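A numerical check (ours) of Theorem 3 under the reconstruction of equations (4a)-(4c) given above: for loss/failure rates up to 15%, f comes out negative and the bound approaches g from below.

import math

def g_and_f(p_f: float, p_ml: float):
    q_f, q_ml = 1.0 - p_f, 1.0 - p_ml
    a = math.exp(q_f) / (math.exp(q_f) - 1.0)
    b = math.log(1.0 - q_f * q_ml ** 4)
    g = 4.0 * a * math.log(p_ml) / b                       # equation (4b)
    c = math.log((1.0 / a) / (q_f * (1.0 - q_ml ** 2)))
    f = a * math.log(1.0 / p_ml) * (2.0 + 4.0 * c / b)     # equation (4c)
    return g, f

for p in (0.05, 0.10, 0.15):
    g, f = g_and_f(p, p)
    bound = g + f / math.log(1.0 / 1e-6)    # equation (4a) at PM(T) = 10^-6
    print(p, round(g, 1), round(f, 1), round(bound, 1))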
We next calculate the average network load imposed by the
new failure detector algorithm. Every T' time units, each
non-faulty member (numbering n · qf on average) executes
steps (1-3) in the algorithm of Figure 1. Steps (1,6)
involve at most 2 messages, while steps (2-5) (which are executed
only if no ack is received from the target of the ping
of step (1) - this happens with probability (1 - qf · qml^2))
involve at most 4 messages per non-faulty ping-req target
member. Therefore, the average network load imposed by
this protocol (in messages/time unit) is
E[L] ≤ n · qf · [ 2 + 4 · k · qf · (1 - qf · qml^2) ] / T' .
Then, from Theorem 1 and equations (1),(2),
E[L]/L* ≤ qf · (e^qf / (e^qf - 1)) · [ 2 + 4 · k · qf · (1 - qf · qml^2) ] · log(pml ) / log(PM(T ))     (5)
(with k as given by equation (2)).
Even E[L] can be upper-bounded from the optimal L* by a
factor that is independent of the group size n.
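For the average-case bound, the following self-contained sketch (ours, based on the same reading of equations (1), (2) and (5); note that the group size n cancels out of the ratio) evaluates E[L]/L*.

import math

def expected_load_ratio(expected_T: float, pm_T: float, p_f: float, p_ml: float) -> float:
    q_f, q_ml = 1.0 - p_f, 1.0 - p_ml
    a = math.exp(q_f) / (math.exp(q_f) - 1.0)
    t_prime = expected_T / a                                     # equation (1)
    k = math.ceil(math.log(pm_T / (a * q_f * (1.0 - q_ml ** 2)))
                  / math.log(1.0 - q_f * q_ml ** 4))             # equation (2)
    e_load = q_f * (2.0 + 4.0 * k * q_f * (1.0 - q_f * q_ml ** 2)) / t_prime  # E[L]/n
    l_star = math.log(pm_T) / (math.log(p_ml) * expected_T)                   # L*/n
    return e_load / l_star

print(round(expected_load_ratio(10.0, 1e-6, 0.15, 0.15), 2))   # stays below 8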
Do the values of L/L* and E[L]/L* get very high compared to
the ideal value of 1.0 ? The answer is a 'No' when values
of pf , pml are low, yet reasonable. Figure 3(a) shows
the variation of L/L* as in equation (3), at low but reasonable
values of pf , pml , and PM(T ). This plot shows that
the sub-optimality factor of the network load imposed by
the new failure detector rises as pml and pf increase, or as
PM(T ) decreases, but is bounded above by the function
g(pf , pml ), at all values of PM(T ). This happens because
f(pf , pml ) < 0 at such low values of pf and pml , as seen
from Figure 3(b) - Theorem 3.1 thus applies here. From
figure 3(a), the function g(pf , pml ) (bottom-most surface),
does not attain too high values (staying below 26 for the
values shown). Thus the performance of the new failure detector
algorithm is good for reasonable assumptions on the
network unreliability.
Figure 3(c) shows that the upper bound on E[L]/L* is very
low (below 8) for values of pf and pml up to 15%. More-
over, as PM(T ) is decreased, the bound on E[L]/L* actually
decreases. This curve reveals the advantage of using randomization
in the failure detector. Unlike traditional distributed
heartbeating algorithms, the average case network
load behavior of the new protocol is much lower than the
worst-case network load behavior.
Figure 3 reveals that for values of pf and pml below 15%,
the L/L* for the new randomized failure detector stays below
26, and E[L]/L* stays below 8. Further, as is evident from
equations (3) and (5), the variation of these sub-optimality
factors does not depend on the group size (at large group
sizes). Compare this with the sub-optimality factors of distributed
heartbeating schemes discussed in Section 4, which
are typically at least Θ[n].
In reality, message loss rates and process failure rates could
vary from time to time. The parameters p f and pml , needed
to configure protocol parameters T' and k, may be diffi-
cult to estimate. However, Figure 3 shows that assuming
reasonable bounds on these message loss rates/failure rates
and using these bounds to configure the failure detector suf-
fices. In other words, configuring protocol parameters with
pml = pf = 15% will ensure that the failure detector preserves
the application specified constraints (T , PM(T )), while imposing
a network load that differs from the optimal worst-case
load L* by a factor of at most 26 in the worst-case, and
8 in the average case, as long as the message loss/process
failure rates do not exceed 15% (this load is lower when loss
or failure rates are lower).
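The configuration discipline suggested here can be expressed directly; in the sketch below (ours), the protocol is parameterized with an assumed upper bound on both rates rather than with measured values.

import math

def configure_with_bound(expected_T: float, pm_T: float, rate_bound: float = 0.15):
    # Pessimistic configuration: assume p_f = p_ml = rate_bound, so the
    # (T, PM(T)) guarantees hold whenever the true rates stay below the bound.
    q = 1.0 - rate_bound
    a = math.exp(q) / (math.exp(q) - 1.0)
    t_prime = expected_T / a
    k = math.ceil(math.log(pm_T / (a * q * (1.0 - q * q))) / math.log(1.0 - q ** 5))
    return t_prime, k

print(configure_with_bound(10.0, 1e-6))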
Figure 3: Performance of the new failure detector algorithm.
(a) Variation of L/L* (according to equation (3)) versus pml , pf , at different values of PM(T ); for low values of pml and pf , g(pf , pml ) is an upper bound on L/L* .
(b) Values of pf , pml for which f(pf , pml ) is positive or negative.
(c) Variation of the upper bound on E[L]/L* (according to equation (5)) versus pml , pf .
5.3 Future Work and Optimizations
At Cornell University, we are currently testing performance
of a scalable distributed membership service that uses the
new randomized failure detection algorithm.
Extending the above protocol to the crash-stop model inherent
to dynamic groups involves several protocol extensions.
Every group member join, leave or failure detection entails a
broadcast to the non-faulty group members in order to up-date
their view. Further, this broadcast may not be reliable.
Implementing this protocol over a group spanning several
subnets requires that the load on the connecting routers or
gateways be low. The protocol currently imposes an O(n)
load (in bytes per second) on such routers during every protocol
period. Reducing this load inevitably leads to compromising
some of the Efficiency properties of the protocol,
as pings are sent less frequently across subnets.
The protocol can also be optimized to trade off worse Scale
properties for better Accuracy properties. One such optimization
is to follow a failure detection (by an individual
non-faulty member through the described protocol) by
multicast of a suspicion of that failure, waiting for some
time before turning this suspicion into a declaration of a
member failure. With such a suspicion multicast in place,
protocol periods at di#erent non-faulty group members, targeting
this suspected member, can be correlated to improve
the Accuracy properties. This would also reduce the effect
of correlated message failures on the frequency of mistaken
failure declarations.
A disadvantage of the protocol is that since messages are restricted
to contain at most a few bytes of data, large message
headers mean higher overheads per message. The protocol
also precludes optimizations involving piggy-backed mes-
sages, primarily due to the random selection of ping targets.
The discussion in this paper also points us to several new
and interesting questions.
Is it possible to design a failure detector algorithm that,
for an asynchronous network setting, satisfies Complete-
ness, Efficiency, Scale requirements, and the Speed requirement
(Section 4) with a deterministic bound on time to
detection of a failure (T ), rather than as an average case as
we have done in this paper ? 8 Notice that this is not difficult
to achieve in a synchronous network setting (by modifying
the new failure detector algorithm to choose ping targets
in a deterministic and globally known manner during every
protocol period).
We also leave as an open problem the specification and realization
of optimality load conditions for a failure detector
with the Speed timing parameter T set as the time to
achieve Strong Completeness for any group member failure
(rather than just Weak Completeness).
8 Heartbeating along a logical ring among group members
(eg., [7]) seems to provide a solution to this question. How-
ever, as pointed out before, ring heartbeating has unpredictable
failure detection times in the presence of multiple
simultaneous failures.
Of course, it would be ideal to extend all such results to
models that assume some degree of correlation among message
losses, and perhaps even member failures.
6. CONCLUDING COMMENTS
In this paper, we have looked at designing complete, scal-
able, distributed failure detectors from timing and accuracy
parameters specified by the distributed application. We
have restricted ourselves to a simple, probabilistically lossy,
network model. Under certain independence assumptions,
we have first quantified the optimal worst-case network load
(messages per second, with a limit on maximal message size)
required by a complete failure detector algorithm in a process
group over such a network, derived from application-
specified constraints of 1) detection time of a group member
failure by some non-faulty group member, and 2) probability
(within the detection time period) of no other non-faulty
member detecting a given non-faulty member as having
failed. We have then shown why the popular distributed
heartbeating failure detection schemes inherently do not satisfy
this optimal scalability limit.
Finally, we have proposed a randomized failure detector algorithm
that imposes an equal expected load on all group
members. This failure detector can be configured to satisfy
the application-specified requirements of completeness
and accuracy, and speed of failure detection (on average).
Our analysis of the protocol shows that it imposes a worst-case
network load that differs from the optimal by a sub-optimality
factor greater than 1. For very stringent accuracy
requirements (PM(T ) as low as e^-30 ), reasonable message
loss probabilities and process failure rates in the network
(up to 15% each), the sub-optimality factor is not as large as
that of traditional distributed heartbeating protocols. Fur-
ther, this sub-optimality factor does not vary with group
size, when groups are large.
We are currently involved in implementing and testing the
behavior of this protocol in dynamic group membership sce-
narios. This involves several extensions and optimizations
to the described protocol.
Acknowledgments
We thank all the members of the Oceano group for their
feedback. We are also immensely grateful to the anonymous
reviewers and Michael Kalantar for their suggestions
towards improving the quality of the paper.
7.
--R
Heartbeat: a timeout-free failure detector for quiescent reliable communication
Timing failure detection and real-time group communication in real-time systems
The process group approach to reliable distributed computing.
Probabilistic analysis of a group failure detection protocol.
Unreliable failure detectors for reliable distributed systems.
On the quality of service of failure detectors.
Impossibility of distributed Consensus with one faulty process.
A probabilistically correct leader election protocol for large groups.
Solving Agreement problems with failure detectors
Optimal implementation of the weakest failure detector for solving Consensus.
In search of Clusters
A gossip-style failure detection service
--TR
The process group approach to reliable distributed computing
Impossibility of distributed consensus with one faulty process
Unreliable failure detectors for reliable distributed systems
Fail-awareness in timed asynchronous systems
In search of clusters (2nd ed.)
Optimal implementation of the weakest failure detector for solving consensus (brief announcement)
Heartbeat
A Probabilistically Correct Leader Election Protocol for Large Groups
On the Quality of Service of Failure Detectors
Probabilistic Analysis of a Group Failure Detection Protocol
--CTR
Y. Horita , K. Taura , T. Chikayama, A Scalable and Efficient Self-Organizing Failure Detector for Grid Applications, Proceedings of the 6th IEEE/ACM International Workshop on Grid Computing, p.202-210, November 13-14, 2005
Jin Yang , Jiannong Cao , Weigang Wu , Corentin Travers, The notification based approach to implementing failure detectors in distributed systems, Proceedings of the 1st international conference on Scalable information systems, p.14-es, May 30-June 01, 2006, Hong Kong
Tiejun Ma , Jane Hillston , Stuart Anderson, Evaluation of the QoS of crash-recovery failure detection, Proceedings of the 2007 ACM symposium on Applied computing, March 11-15, 2007, Seoul, Korea
Greg Bronevetsky , Daniel Marques , Keshav Pingali , Paul Stodghill, Automated application-level checkpointing of MPI programs, ACM SIGPLAN Notices, v.38 n.10, October
Wei Xu , Jiannong Cao , Beihong Jin , Jing Li , Liang Zhang, GCS-MA: A group communication system for mobile agents, Journal of Network and Computer Applications, v.30 n.3, p.1153-1172, August, 2007
On the Implementation of Unreliable Failure Detectors in Partially Synchronous Systems, IEEE Transactions on Computers, v.53 n.7, p.815-828, July 2004
Andrei Korostelev , Johan Lukkien , Jan Nesvadba , Yuechen Qian, QoS management in distributed service oriented systems, Proceedings of the 25th conference on Proceedings of the 25th IASTED International Multi-Conference: parallel and distributed computing and networks, p.345-352, February 13-15, 2007, Innsbruck, Austria
Kelvin C. W. So , Emin Gn Sirer, Latency and bandwidth-minimizing failure detectors, ACM SIGOPS Operating Systems Review, v.41 n.3, June 2007
Michel Reynal, A short introduction to failure detectors for asynchronous distributed systems, ACM SIGACT News, v.36 n.1, March 2005 | scalability;failure detectors;efficiency;distributed systems;accuracy |
384052 | A framework for semantic reasoning about Byzantine quorum systems. | We have defined a class of shared variables called TS-variables that includes those implemented by the various Byzantine quorum system constructions of Malkhi and Reiter, and developed a number of definitions and theorems enabling us to reason about these variables abstractly. Using these tools, we have reduced the problem of Lamport's atomic semantics for such variables to the simpler problem of regular semantics. We discuss the fact that both these problems have remained stubbornly difficult to solve for some types of Byzantine quorum system variables (notably masking quorum system variables) by showing that they are not solvable by traditional approaches in an asynchronous environment. Finally, for such variables we define the notion of pseudoregular and pseudoatomic semantics, and state briefly that a similar reduction holds for these concepts. | Introduction
Byzantine quorum systems [MR98a] are a promising approach to the problem of efficiently implementing
Byzantine fault-tolerant data services. There are several variations on this approach [Baz97, MRWr97,
MRW97, MR98a], but the basic concept is the same for all of them: data are maintained simultaneously
at multiple sites, and each read or write operation is processed at a subset (called a quorum) of those
sites. Quorums are defined in such a way that the intersection of any two quorums contains enough servers
to allow a query to determine and return accurate and up-to-date information even in the presence of a
limited set of arbitrarily faulty servers. Furthermore, because only a subset of the servers is concerned with
any given operation, such a system can also remain available in spite of limited server crashes or network
partitions. Finally, the fact that the service is designed to tolerate out-of-date servers (e.g., those which
were not part of the most recent write quorum) greatly simplifies the task of recovering from failures; as
long as a quorum of servers is up to date, others may be brought back online without any need to recover
their most recent state.
Analyzing the semantics of shared variables implemented by these quorum systems can be quite chal-
lenging. Heretofore, such analysis has been limited to individual protocols; there has been no framework for
reasoning about the semantics of quorum variables as a family. For example, while there exist compelling
arguments to the effect that fully serializable operations have been achieved for some types of quorum
systems (notably the dissemination quorum systems of [MR98b]) and remain an open problem for others
(e.g., masking quorum systems, [MR98a]), these arguments do not tell us why these discrepancies exist, or
the degree to which individual solutions can be generalized.
One of the primary contributions of this paper is to address this need. We present a set of definitions
and theorems that allow us to reason about the class of shared variables implemented by quorum systems,
including the various Byzantine quorum systems; we call such variables TS-variables because of the important
role of timestamps in their protocols. 1 Further, we give an adapted version of Lamport's formal
definitions of the concepts of safe, regular, and atomic semantics [Lam86]. These concepts have traditionally
been used to describe the semantics of Byzantine quorum systems, but their use has necessarily had
to be somewhat informal, as Lamport's formal definitions and theorems were based on the assumption
that variable writes were never concurrent with one another. Our adaptation is not dependent on this
assumption, and so can be applied directly to the variables of interest in a fully calculational proof style.
As far as we know, this is the first paper to apply calculational proofs to quorum system variables.
We use these formalisms to prove that the atomicity result of [MR98b] generalizes to an important
theorem about TS-variables: the writeback mechanism used in that particular protocol in fact reduces the
problem of atomic variable semantics for any TS-variable to the simpler problem of regular semantics. The
correctness of the atomic protocol of [MR98b] can in fact be viewed as a corollary of this result, as the
cryptographic framework of dissemination quorum systems (sans writeback) enforces regular semantics.
As a follow-up, we show why the problem of atomic semantics (fully serializable operations) has been
straightforwardly solved for some types of quorum system while remaining unsolved for others. Specically,
we show that for a signicant subclass of TS-variables, traditional approaches to protocol design will always
have some danger of failed read queries (aborted, retried or incorrect) in an asynchronous environment.
(In fact, the masking quorum systems of [MR98a], for which atomic semantics have proved stubbornly
elusive, fall into this category.) Finally, we propose and brie
y discuss the somewhat weaker notions of
pseudoregular and pseudoatomic semantics for such systems.
The structure of this paper is as follows. In Section 2, we define TS-variables and a number of related
concepts and theorems, including our adapted version of Lamport's semantic categories. In Section 3 we use
In fact, our definition of TS-variables is not specific to quorum system variables; it simply captures those properties that
are common to such variables and are relevant to our analysis. Our theorems therefore also hold for any other variable types
that may share these properties.
these formalisms to give a fully calculational proof that any regular read/write protocol that satisfies the
definition of a TS-variable protocol can be used to implement a corresponding atomic read/write protocol.
In Section 4 we show that for an important class of possible protocols, traditional approaches to protocol
design always result in a danger of unresolvable queries in an asynchronous system; we then define the
weaker notions of pseudoregular and pseudoatomic semantics, which can be implemented in spite of such
queries. We conclude in Section 5. (An example of a pseudoregular protocol for masking quorum systems
is included in the appendix.)
2 Preliminaries
2.1 Formalizing masking quorum system variables: TS-variables
In order to reason formally about Byzantine quorum system variables as a class, we need an abstraction
that defines the important features of such variables independently of operational details. To this end, in
this section we introduce the concept of TS-variables. We begin by defining the more general concept of
"timestamped variables" as well as a number of useful functions on such variables:
Definition 1 A timestamped variable is a variable of any type whose value is read and updated in conjunction
with an associated timestamp, where timestamps are drawn from some unbounded totally ordered
set.
Let RW be a set of read and write operations on some timestamped variable with a given read/write
protocol; let R ⊆ RW be the set of reads, and let W ⊆ RW be the set of writes. Then the following
function definitions hold (R and B represent the set of reals and the set of booleans, respectively):
value: For op ∈ RW , if op is a read, then value(op) is the value returned by the read; if op is a write, then
value(op) is the value written.
ts: For op ∈ RW , if op is a read, then ts(op) is the timestamp of the value returned by the read; if op is a
write, then ts(op) is the timestamp assigned to the value written.
readsfrom: For r ∈ R, w ∈ W , readsfrom(r, w) holds if and only if r reads the result of write w. For timestamped
variables, we define this to be equivalent to:
readsfrom(r, w) ≡ (value(r) = value(w)) ∧ (ts(r) = ts(w))
universe in milliseconds) that provides an absolute timescale for system events. As the systems we discuss
are asynchronous, individual processes do not have access to global clock values or to these functions,
which are used only for reasoning purposes.
start: The start time of the operation in global time.
end: The end time of the operation in global time.
The purpose of these functions is to give us a convenient shorthand for reasoning about the possibility
of concurrency between operations without being specific about the actual (nondeterministic, in an
asynchronous environment) order in which servers process requests. Essentially, if end(op1 ) < start(op2 ) or end(op2 ) < start(op1 ),
then op2 is not concurrent with op1, whereas if ¬(end(op1 ) < start(op2 )) ∧ ¬(end(op2 ) < start(op1 )),
such concurrency may exist and thus needs to be resolved in any proposed serialization of op1 and op2.
For simplicity, we will therefore treat the latter expression as our definition of concurrency hereafter. 2
In keeping with their hypothetical meaning, we stipulate that the start and end functions meet the
following restriction:
2.1.1 TS-variables
A variable consists of a type, a memory address, and a specification of the operations that may be performed
on it, including at least read and write. 3 We refer to such a specification as a variable protocol. Read and
write activity on a variable is described in terms of a run of its protocol:
Definition 2 A run of a variable v is a set of operations performed on v, all of which meet the specification
of v's protocol. We call a run RW complete if, for all read operations r ∈ RW , there exists a write operation
w ∈ RW such that readsfrom(r, w).
It is useful to have a separate term for the run consisting of all operations performed on a variable during
its lifetime:
Definition 3 The history of a variable is the run consisting of all operations performed on that variable
during its lifetime.
In this chapter we will continue to use the label RW to represent a variable run; subscripts will be used
to distinguish between runs when the context is not otherwise clear. The projection of a run RW onto its
read operations will be denoted R; the corresponding projection onto write operations will be denoted W .
Although some researchers use the terms "run" and "execution" interchangeably, in this work we find
it useful to follow the example of [Lam86], which gives them distinct technical meanings. Specifically, an
execution associates a run with a precedence relation on the operations of that run, i.e.:
Definition 4 An execution of a variable v is a pair ⟨RW, ≺⟩, where RW is a run of v and ≺ is a
precedence relation (irreflexive partial order) on the operations in RW .
We now define two specific types of execution that are of special importance to this work:
Definition 5 An execution ⟨RW, ≺⟩ is said to be real-time consistent if:
∀op1 , op2 ∈ RW : end(op1 ) < start(op2 ) ⇒ op1 ≺ op2 .
Definition 6 An execution ⟨RW, ≺⟩ is said to be write-ordered if it satisfies the following:
1. the write operations in W are totally ordered by ≺ , and
2. ⟨W, ≺⟩ is real-time consistent.
2 A more literal definition would be that two operations are concurrent if and only if there exist two servers that process
them in different order. However, it will be readily seen that if end(op1) < start(op2), then every server processes op1 before
op2; hence they are not concurrent.
3 We do not concern ourselves with read-only variables in the context of this work.
In other words, (1) in a write-ordered execution, the write operations are totally ordered by ≺ , and (2)
the order is consistent with the partial order of write operations in real time.
Definition 7 For all runs RW of a timestamped variable v, the relation ts≺ is defined by:
1. ∀op ∈ RW, ∀w ∈ W : op ts≺ w ⟺ ts(op) < ts(w)
2. ∀w ∈ W, ∀r ∈ R : w ts≺ r ⟺ ts(w) ≤ ts(r)
3. ∀ra , rb ∈ R : ra ts≺ rb ⟺ ts(ra ) < ts(rb )
It is easy to see that ts≺ is irreflexive, antisymmetric and transitive. It is therefore an irreflexive partial
order. (Note that operations with identical timestamps are not necessarily ordered by ts≺ .)
We now define TS-variables as follows:
Definition 8 A TS-variable is a timestamped variable v such that, for all histories RW of v, ⟨RW, ts≺⟩
is write-ordered.
Note that Definitions 7 and 8 imply that TS-variable writes are uniquely identified by timestamp; thus
for any given read, there is at most one write with the same timestamp. We can therefore make the
following observation, which provides a simplified form of the definition of readsfrom() for TS-variables:
Observation 1 For any read operation r and write operation w of a complete TS-variable run,
readsfrom(r, w) ⟺ ts(r) = ts(w).
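To fix ideas, here is a minimal Python sketch (ours; the Op record and its field names are illustrative, not notation from the paper) of timestamped operations and the simplified readsfrom test of Observation 1.

from dataclasses import dataclass

@dataclass(frozen=True)
class Op:
    kind: str      # "read" or "write"
    value: object  # value returned (read) or written (write)
    ts: int        # timestamp from an unbounded totally ordered set
    start: float   # hypothetical global-clock start time (reasoning only)
    end: float     # hypothetical global-clock end time (reasoning only)

def readsfrom(r: Op, w: Op) -> bool:
    # In a complete TS-variable run, writes are uniquely identified by
    # timestamp, so timestamp equality suffices (Observation 1).
    return r.kind == "read" and w.kind == "write" and r.ts == w.ts

w = Op("write", "x", ts=3, start=0.0, end=1.0)
r = Op("read", "x", ts=3, start=2.0, end=2.5)
print(readsfrom(r, w))   # True; note also end(w) < start(r), so not concurrent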
2.2 Formalizing data semantics for TS-variables
We now define what it means for a write-ordered execution to be safe, regular or atomic. The definitions
of safe and regular are based on the idea that once a write to a variable has completed, previous values of
that variable should not be read. This concept is expressed in [Lam86] in terms of the set of writes that
a given read "sees": 4
Definition 9 For a write-ordered execution ⟨RW, ≺⟩, let w1 , w2 , . . . be the ordered list of write operations
from RW as defined by ≺ . Furthermore, for a given read operation r, let i be the index of the last write
that precedes r, i.e., i = max{x : end(wx ) < start(r)}. Then we say that r sees W ′ ⊆ W , where:
W ′ = {wi } ∪ {w ∈ W : ¬(end(w) < start(r)) ∧ ¬(end(r) < start(w))}.
We express this relationship in predicate form as sees(r, W ′ ).
Thus the values that a read sees are those that might be legitimately returned by that read, i.e., the
value of the most recently completed write w i and the values of any concurrent writes.
The above definition is difficult to use directly. Fortunately, the fact that ⟨RW, ≺⟩ is write-ordered
implies that all writes seen by r fall within a well-defined range - no such write is earlier than the last
(in terms of the write order) to precede r or later than the last write that is concurrent with r:
4 [Lam86] defined this concept for a single-writer register, whose write operations are thus necessarily serial. We relax this
requirement, defining our version of "sees" in terms of serializable, rather than serial, writes. Thus our definition can be
applied to variables with multiple writers.
Observation 2 For a given read r, let i be defined as in Definition 9, and let j = max{x : start(wx ) < end(r)}.
Then: 5 sees(r, W ′ ) ⇒ W ′ ⊆ {wi , wi+1 , . . . , wj }.
We are now ready to give our definitions of safe, regular, and atomic executions.
2.2.1 Safe executions
In informal terms, an execution is safe if any read that sees only one write returns the value of that write.
Operationally, this means that a read that is concurrent with no writes returns the result of the "most
recent" write according to the serialization defined by the write-ordering. Formally, continuing to use wi
to denote the i th write in the order defined by ≺ , we say:
Definition 10 An execution ⟨RW, ≺⟩ is safe if:
it is write-ordered, and
every read r ∈ R that sees only the single write wi reads from it, i.e., sees(r, {wi }) ⇒ readsfrom(r, wi ).
2.2.2 Regular executions
A write-ordered execution is regular if every read returns some value that it sees, i.e., the result of the
most recently completed write or a concurrent one. Formally:
Definition 11 An execution ⟨RW, ≺⟩ is regular if:
it is write-ordered, and
every read r ∈ R reads from some write that it sees, i.e., readsfrom(r, w) for some w ∈ W ′ with sees(r, W ′ ).
Note that a regular execution is necessarily safe.
For TS-variables, this definition has a useful consequence: the timestamp of any given read is at least the
timestamp of the most recently completed write. Formally:
Lemma 1 Let ⟨RW, ts≺⟩ be regular. Then ∀w ∈ W, ∀r ∈ R : end(w) < start(r) ⇒ ts(w) ≤ ts(r).
A calculational proof of this lemma is given in Figure 1; it consists of showing that any arbitrary write
that precedes a given read has a timestamp less than or equal to that of the read.
5 Note that the reverse is not true. It is possible for a write to fall within the given range without being seen if the "invisible"
write occurs after read r, but concurrently with wj .
Figure 1: Proof of Lemma 1
2.2.3 Atomic executions
Finally, we define an atomic execution as an execution that behaves as though the operations were totally
ordered in a real-time consistent way, i.e., (A) they are totally ordered, (B) they behave as though performed
serially in that order, and (C) the order is consistent with the partial order of the operations in real time:
Definition 12 An execution ⟨RW, ≺⟩ is atomic if:
≺ is a total order on RW ,
every read r ∈ R reads from the last write that precedes it in ≺ , and
⟨RW, ≺⟩ is real-time consistent.
Note that the second and third bullets of the definition above imply that any atomic execution is also
regular, while the reverse is not necessarily true.
The safeness, regularity or atomicity of a variable protocol is a property of the set of possible histories (see
Definition 3) of a variable implemented with that protocol:
Definition 13 A variable protocol is safe (regular, atomic) with respect to a precedence relation ≺ if,
for all possible histories RW consistent with the protocol, the execution ⟨RW, ≺⟩ is safe (regular, atomic).
A protocol is safe (regular, atomic) if it is safe (regular, atomic) with respect to some precedence relation.
A variable is safe (regular, atomic) if its protocol is safe (regular, atomic).
Observation 3 Denitions 11 and 12 (specically, the second bullet of each) imply that every read in a
regular or atomic execution reads from some write; thus all possible histories of a regular or atomic variable
are complete runs.
3 Reducing the Atomic Semantics Problem
In the previous section, we developed tools for reasoning about TS-variables, a class of shared variables
that includes those implemented by various types of benign and Byzantine quorum systems. We now
demonstrate the power of these tools by using them to prove that the writeback mechanism of [MR98b]
does not apply only to dissemination quorum systems; it can be used to promote a regular protocol to an
atomic one for any type of TS-variable. Specifically, we show how to construct a protocol for a TS-variable
v_atom given a protocol for a regular TS-variable v_reg, and prove that the result is atomic. We accomplish
this by means of the following steps:
1. Add a new operation to the protocol for v_reg, define the operations of v_atom in terms of this expanded
regular protocol, and show that the resulting v_atom is a TS-variable.
2. Define a total order ≺_ts′ on operations of v_atom that extends ≺_ts, i.e., op_a ≺_ts op_b ⇒ op_a ≺_ts′ op_b.
3. Use Definition 13 to prove that v_atom is atomic with respect to ≺_ts′.
3.1 Defining the atomic protocol
Let v_reg be a regular TS-variable. We expand the protocol of v_reg by defining a third operation in addition
to read and write: writeback. The writeback operation is similar to the write operation of v_reg except that
whereas write operations calculate their own timestamps, a writeback takes its timestamp as an argument;
thus writebacks are not necessarily ordered by ≺_ts. We stipulate, however, that all runs RW_exp of the
expanded protocol continue to satisfy Lemma 1, as well as the following additional property: 6
Property 1 For all read operations r, write operations w and writeback operations b in RW_exp,
We now define our proposed atomic variable protocol v_atom as follows, where read_reg and write_reg are
the read and write protocols of v_reg, and val, ts are the value and timestamp respectively of the read_reg
operation: Write_atom(val) consists of write_reg(val), and Read_atom() consists of read_reg() followed by
writeback(val, ts), returning (val, ts).
In other words, a write operation of v_atom consists of a single write operation of v_reg, while a read operation
of v_atom consists of a read operation of v_reg followed by a writeback of the resulting value and timestamp.
The timestamp of each Read_atom or Write_atom operation is identical to the timestamp of the underlying
read_reg or write_reg operation. Because each write operation of v_atom consists exactly of one write operation
of v_reg, it follows that v_atom is also a TS-variable. (For clarity, we will hereafter follow the convention that
operations of v_atom are represented in boldface, while operations of v_reg are represented in italics.)
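The construction can be summarized in code. The following sketch assumes an abstract interface to the regular protocol (read_reg, write_reg and writeback as operations provided elsewhere); the names and types are illustrative only and not part of the formal model above.

```c
/* Sketch of the v_atom construction, assuming an abstract regular TS-variable
 * interface.  The identifiers and types here are illustrative assumptions. */
#include <stdint.h>

typedef struct {
    int64_t  value;
    uint64_t ts;        /* timestamp attached by the regular protocol */
} reg_result_t;

/* Operations assumed to be provided by the regular protocol v_reg. */
extern reg_result_t read_reg(void);
extern uint64_t     write_reg(int64_t value);              /* picks its own timestamp     */
extern void         writeback(int64_t value, uint64_t ts); /* takes timestamp as argument */

/* Write_atom: a single write of v_reg. */
uint64_t write_atom(int64_t value) {
    return write_reg(value);
}

/* Read_atom: a read of v_reg followed by a writeback of the result, so that
 * later reads cannot observe an older value/timestamp. */
reg_result_t read_atom(void) {
    reg_result_t r = read_reg();
    writeback(r.value, r.ts);
    return r;
}
```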
6 In masking quorum systems, as in dissemination quorum systems, both Lemma 1 and Property 1 are implemented by
having a write/writeback perform a null operation at any server whose current timestamp for the variable is higher than that
of the write/writeback; thus monotonicity of timestamps is enforced at each server.
3.2 A total order over operations on v_atom
In preparation for proving v_atom atomic, we specify a precedence relation that totally orders all runs
RW_atom of v_atom. The ≺_ts relation that we have already defined is not sufficient, as it does not order read
operations that share the same timestamp. We therefore propose to define an extension ≺_ts′ of ≺_ts using
the following additional function that maps all operations of any run RW to some totally ordered set:
gtf: An arbitrary function with the following three properties:
Uniqueness: ∀op_a, op_b ∈ RW with op_a ≠ op_b, gtf(op_a) ≠ gtf(op_b).
Sequentiality: if op_a completes before op_b begins, then gtf(op_a) < gtf(op_b).
Read Promotion: each read has a higher gtf value than any write that might affect it.
An example of such a function is a mapping from op ∈ RW to the pair (time(op), id), where id is a unique
real-valued operation identifier, and time(op) is a suitably chosen point in real time.
The purpose of the gtf function is to act as a supplement to timestamps when we define a serialization
of the operations. Sequentiality ensures that the order imposed by gtf is compatible with the partial
order of the operations in real-time, Uniqueness ensures that the function can act as a "tie-breaker" for
operations with the same timestamp, and Read Promotion ensures that each read operation has a higher
gtf than any write that might affect it. 7
We now define ≺_ts′ as follows: for any given run RW_atom of v_atom, ∀op_a, op_b ∈ RW_atom:
op_a ≺_ts′ op_b iff ts(op_a) < ts(op_b), or ts(op_a) = ts(op_b) and gtf(op_a) < gtf(op_b).
In other words, ≺_ts′ is the lexicographic ordering on the pair (ts(op), gtf(op)). It is therefore a total order by
virtue of the Uniqueness property of gtf() and the fact that ts() and gtf() have totally ordered codomains.
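For illustration, the lexicographic comparison underlying ≺_ts′ can be written as a small comparator. The operation record below (a timestamp plus a real-valued gtf value) is an assumed concrete representation; the definition only requires totally ordered codomains.

```c
/* Illustrative comparator for the lexicographic order on (ts(op), gtf(op)). */
#include <stdbool.h>
#include <stdint.h>

typedef struct {
    uint64_t ts;    /* timestamp of the operation                 */
    double   gtf;   /* value of the tie-breaking function gtf()   */
} op_t;

/* Returns true iff a precedes b under the extended order described above. */
bool precedes_ts_prime(const op_t *a, const op_t *b) {
    if (a->ts != b->ts)
        return a->ts < b->ts;    /* primary key: timestamp                    */
    return a->gtf < b->gtf;      /* tie-breaker: gtf, unique per operation    */
}
```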
As a consequence of this definition, we have the following lemma and corollaries, which allow us to use
Definition 12 to prove atomicity:
Lemma 2 op_a ≺_ts op_b ⇒ op_a ≺_ts′ op_b.
Corollary 1 All executions ⟨RW_atom, ≺_ts′⟩ of v_atom are write-ordered.
Corollary 2 For every write w and read r, w ≺_ts′ r ⇒ w ≺_ts r.
The proofs of this lemma and corollaries are straightforward, and are omitted for reasons of space.
3.3 Proving v_atom atomic
Our remaining goal is to prove that ⟨RW_atom, ≺_ts′⟩ is atomic for all runs RW_atom of v_atom, thus proving
that v_atom is an atomic variable:
Theorem 1 For all possible histories RW_atom of v_atom, the execution ⟨RW_atom, ≺_ts′⟩ is atomic.
As we have already shown that ≺_ts′ totally orders RW_atom, our remaining obligations are to prove:
7 In fact, these properties are sufficient to allow us to define a total order strictly in terms of gtf. However, gtf alone
does not specify the behavior of timestamps, and so does not allow us to reason directly about the behavior of reads via the
readsfrom function. We will therefore use gtf as indicated above.
Figure 2: Proof that readsfrom(r, w) implies that no other write falls between w and r under ≺_ts′
that every read reads from the ≺_ts′-greatest write that precedes it, and
that ⟨RW, ≺_ts′⟩ is real-time consistent.
The proofs appear in Figures 2 and 3. In the latter case, our obligation is to prove that whenever op_a
completes before op_b begins, op_a ≺_ts′ op_b.
We prove this separately for each of the four possible cases: two writes, a write followed by a read, a read
followed by a write, and two reads. For simplicity, we will use the convention that r and w (with possible
subscripts) refer to operations of RW atom , while r, w, and b denote the corresponding read, write and
writeback operations of the expanded regular protocol (Figure 3).
Thus we have reduced the problem of atomic semantics for all TS-variables to that of regular semantics.
In the next section we show that, while the latter problem is readily solvable for some types of TS-variables
(e.g., dissemination quorum systems), there is a significant class of TS-variables (including, unfortunately,
masking quorum systems) for which regular semantics cannot be achieved in an asynchronous environment
using the type of approach that has heretofore been standard. At the end of the section we briefly discuss
how regular and atomic semantics may be approximated for such variables.
4 On Regular and Pseudoregular Semantics
With a few exceptions (e.g., [Baz97]), most Byzantine quorum system protocols have been designed for
asynchronous systems with few restrictions. Typically:
Any client may send a write request to a quorum of servers at any time, using its choice of timestamp;
i.e., writes are always enabled.
No ordering or scheduling is imposed on read and write requests.
Read and write requests are processed by servers in the order received.
8 According to the convention we adopted earlier, R_atom is the set of read operations from RW_atom.
Figure 3: Proof that ⟨RW, ≺_ts′⟩ is real-time consistent
Hereafter, we describe a system with these characteristics as a nonrestricted system.
Although there are several variations on quorum system protocols, most or all of those currently in the
literature share the following characteristics:
1. Each server maintains a single version of the variable image at any given time.
2. A read generates a single query, and returns a non-⊥ value only if some appropriately defined voucher
set of servers responds to its query with identical images; otherwise it returns ⊥ for both value and
timestamp. (In this context, ⊥ is the signal for an aborted operation, and is never written to the variable.)
3. The processing of a write request with a sufficiently high timestamp changes the state of the variable
image at the server, and the processing of a read request consists simply of returning the requested
data.
For the remainder of this discussion, we refer to such a protocol as a classic quorum protocol. More
specifically, if the smallest voucher set accepted by the protocol is of size m, we refer to it as a classic m-set
quorum protocol.
In this section, we show that, for m > 1, any classic m-set quorum protocol in a nonrestricted system
may return ⊥ in response to a given query. For a read r that returns this value, there exists no write w
such that readsfrom(r, w); thus any history that contains such a read is not regular (cf. Observation 3).
In short:
Theorem 2 For m > 1, no classic m-set protocol is regular in a nonrestricted system.
We prove this theorem by showing that certain possible server responses to a read query in such a
system are unresolvable for m > 1, and result in a read value of ⊥. As a corollary to the theorem, we
also show that the same is true even if each server maintains a bounded list of the variable images it has
received.
4.1 Definitions
We begin with a number of useful definitions. Let P be a classic m-set quorum protocol, and let r and w
be operations under P such that r is a read operation and w is the most recently completed (as determined
by timestamp) write operation as of the beginning of r. Let Q_r and Q_w be the quorums on which r and w
respectively are performed. Let F ⊆ Q_r be the set of servers that return faulty responses during read r.
Definition 14 The informed set for r is the set (Q_r ∩ Q_w) \ F.
Note that if there are no writes concurrent with r, so that no servers in Q_r ∩ Q_w have been overwritten
since w, then the informed set for r is the voucher set for r. In any case, all servers in the informed set
return the results of writes that r sees (Definition 9), and in the worst case these are the only servers that
do so. We can therefore observe:
Observation 4 Protocol P is regular iff all possible sets of responses to a read by informed sets contain
identical responses from at least one voucher set.
4.2 Nonregularity argument: the smallest informed set
Let Q be a quorum system with classic m-set quorum protocol P. Let mininf be the size of the smallest
possible informed set for Q. 9
The smallest informed set represents the worst-case scenario for a successful read. Suppose that for a
given history RW under P, every read operation is concurrent with at most k write operations. Then:
Theorem 3 RW is regular iff ⌈mininf/(k+1)⌉ ≥ m.
Proof: For an arbitrary read operation r let I_r = {I_0, ..., I_k} be the partitioning of the informed set such
that I_0 contains the servers that return the result of the most recently completed write operation and each
I_i contains the servers that return the result of the i-th write that is concurrent with r. In a nonrestricted
system, any or all of the sets I_i may be nonempty, depending on the order in which concurrent operation
requests are received at individual servers. We prove the "if" and "only if" portions of the theorem
separately.
9 As every quorum system is based on a well-defined set of possible failure scenarios, this value is well-defined.
If: If ⌈mininf/(k+1)⌉ ≥ m, then for any read r, some I_i ∈ I_r contains a voucher set by the Extended
Pigeonhole Principle, which states that at least one member of a partition contains at least the average
number of elements for the partition.
Only if: Suppose ⌈mininf/(k+1)⌉ < m. Let r be a read operation with an informed set of size mininf,
and suppose that r is concurrent with exactly k writes. Furthermore, let I_r be an even partition, i.e., a
partition in which every set contains either the ceiling or the floor of the average number of elements. Then I_r
does not contain a voucher set, so r returns ⊥, implying that RW is not regular. Since this history is
possible in a system such as that described above, the protocol is not regular. □
Since by definition a nonrestricted system allows arbitrary values of k, we have:
Lemma 3 A classic m-set protocol P is regular in a nonrestricted system only if m = 1.
Theorem 2 follows directly from this result.
Thus, if m > 1 for a classic m-set read protocol, i.e., the protocol requires agreement between multiple
servers in order to determine a correct result, then it is not regular in an unrestricted system. While
this category includes the masking quorum systems of [MR98a], it is worth noting that ordinary quorum
systems and dissemination quorum systems, which are classic m-set protocols for m = 1, are already
known to be regular for their appropriate failure models (benign and Byzantine-limited-by-authenticated-data,
respectively).
4.2.1 Non-regularity of classic protocols with bounded image list
We define a classic m-set protocol with bounded image list as an enhanced m-set protocol with the following
characteristics:
1. Each server maintains a bounded list of the images it has received for a given variable, i.e., a list of
the last hsize images written to the server.
2. Again, a read returns a non-⊥ value only if it receives identical images from at least b servers,
for a specified b > 0.
Even if each server responds to every query with its entire list of hsize images, it remains possible for a
read query to be unresolvable in a nonrestricted system, i.e.:
Lemma 4 For m > 1, no classic m-set quorum protocol with bounded image list is regular in a nonrestricted
system.
Proof: In a nonrestricted system, any given read operation may be concurrent with an unbounded
number of writes. Suppose some read operation r is concurrent with hsize·s write operations, where hsize
is the size of the bounded image list and s is the size of the informed set for r. For 1 ≤ i ≤ s, server S_i
may receive the first hsize·i write requests before receiving the request for r. Then the image list of S_1
will contain the images of the first hsize writes, the image list of S_2 will contain the images of the next
hsize writes (which displace the first hsize because the list is bounded), and so forth. In response to its
query, r therefore receives hsize·s different variable images, each from exactly one server. It is therefore
unable to resolve the query. □
4.3 Ignoring aborts: pseudoregular semantics
Certain applications, however, may be able to tolerate occasional aborted reads. For such applications it
is worthwhile to reason about a somewhat weaker version of regular (and atomic) semantics for variables,
which we will refer to as pseudoregular (respectively pseudoatomic) semantics. We define these concepts in
terms of variable pseudohistories, as follows:
Definition 15 The pseudohistory of a variable is the run consisting of all writes in the variable history
and all reads that do not return ⊥ (i.e., the set of all non-aborted operations).
We now define our new semantic concepts as follows:
Definition 16 A variable protocol is pseudoregular (pseudoatomic) with respect to a precedence relation
≺ if, for all possible pseudohistories RW consistent with the protocol, the execution ⟨RW, ≺⟩ is regular
(atomic). A protocol is pseudoregular (pseudoatomic) if it is pseudoregular (pseudoatomic) with respect to
some precedence relation.
4.3.1 Reducing pseudoatomicity to pseudoregularity
Suppose we have a pseudoregular protocol for a TS-variable. (An example of such a protocol for masking
quorum systems appears in the appendix to this paper, and was initially sketched in [AMPRW00].) For
any possible pseudohistory RW, the execution ⟨RW, ≺_ts⟩ satisfies Definition 11, so RW is a complete run.
Therefore the arguments of Section 3 apply to these pseudohistories as well. The same writeback mechanism
we demonstrated before can thus be used to produce a pseudoatomic protocol from the pseudoregular one.
5 Conclusion
In this paper we have presented a set of definitions and theorems that allow us to reason about the semantics
of shared variables implemented by various types of quorum systems, including the often-problematic
Byzantine quorum systems. This framework allows us to develop theorems about such variables (which
we call TS-variables) as a class, without reference to the details of individual implementations. We have
also used the resulting tools to prove that any regular protocol for a TS-variable can be straightforwardly
enhanced into an atomic protocol.
As a subsidiary result, we showed that there is a significant subclass of TS-variables (including masking
quorum systems) for which traditional design approaches cannot produce a regular protocol for an
asynchronous environment. For such variables, we introduce the idea of pseudoregular and pseudoatomic
semantics, which are similar to the original concepts except that they allow occasional aborted operations.
Acknowledgements
We would like to express our sincere thanks to Dahlia Malkhi and Michael Reiter
for numerous thought-provoking discussions, and to Jennifer Welch, HyunYoung Lee, Pete Manolios and
Phoebe Weidmann for their helpful comments on earlier drafts of this paper.
--R
Dynamic Byzantine Quorum Systems.
Synchronous Byzantine quorum systems.
Atomic Multireader Register.
On Interprocess Communications.
Quorum systems.
Byzantine Quorum Systems.
Secure and Scalable Replication in Phalanx.
Optimal Byzantine Quorum Systems.
Probabilistic Quorum Systems
Extensions of the UNITY Methodology
The Elusive Atomic Register.
--TR
Dynamic Byzantine Quorum Systems
Secure and Scalable Replication in Phalanx
--CTR
Ittai Abraham , Gregory Chockler , Idit Keidar , Dahlia Malkhi, Wait-free regular storage from Byzantine components, Information Processing Letters, v.101 n.2, p.60-65, January, 2007 | distributed data services;atomic variable semantics;byzantine fault tolerance;quorum systems |
384178 | Combined tuning of RF power and medium access control for WLANS. | Mobile communications, such as handhelds and laptops, still suffer from short operation time due to limited battery capacity. We exploit the approach of protocol harmonization to extend the time between battery charges in mobile devices using an IEEE 802.11 network interface. Many known energy saving mechanisms only concentrate on a single protocol layer while others only optimize the receiving phase by on/off switching. We show, that energy saving is also possible during the sending process. This is achieved by a combined tuning of the data link control and physical layer. In particular, harmonized operation of power control and medium access control will lead to reduction of energy consumption. We show a RF power and medium access control trade-off. Furthermore we discuss applications of the results in IEEE 802.11 networks. | Introduction
Reduction of energy consumption for mobile devices
is an emerging field of research and engineering. The
driving factors are the weight and time in operation of
mobile devices, which should be small and should allow
for a long operation time, respectively. The weight
is determined to a large extent by the batteries. Besides
the display, CPU and hard disk, one of the main
power sinks is the wireless network interface card, which
requires power for transmitting radio signals and protocol
processing (see [1]). In this paper, we concentrate
on the wireless network interface of a mobile device.
In particular we investigate the dependencies between
MAC protocol processing and the physical layer of an
IEEE 802.11 network interface.
This work has been supported partially by a grant from the
BMBF (German Ministry of Science and Technology) within
the Priority Program ATMmobil.
Various options of power saving on the protocol level
have been published in the literature. In [2] it is reported
that contention protocols result in high energy consumption,
while reservation and polling may reduce it.
Furthermore, in [3] it is shown that solving the hidden
terminal problem by means of a busy tone channel
substantially reduces the energy consumption in ad hoc
networks. In [4] it is shown that powering off the mobile's
network interface during idle times is an important option
to save energy.
The aforementioned mechanisms try to minimize energy
consumption on the MAC/DLC level (Medium Access
Control/Data Link Control). There are also several
options on the physical layer for instance by choosing
appropriate modulation and coding schemes with
respect to the assumed channel characteristics as well
as the use of low power ICs and algorithms with low
computational complexity. Another important option
in the physical layer is power control. In [5] and [6]
it is stated that not only is cochannel interference reduced,
but also the system capacity and the time interval
between battery charges are increased. The main parameter
for power control is the required level of link
reliability, which is often expressed in terms of the bit
error rate (BER). Power control mechanisms adapt the
radio transmit power to a minimum level required to
achieve a certain link reliability. In this paper we show
that minimizing the transmit energy does not necessarily
lead to energy savings.
We exploit a novel approach to reduce energy consumption:
Protocol Harmonization. In contrast to the
methods mentioned above, which try to optimize a certain
protocol or layer with respect to energy consumption,
protocol harmonization strives to balance the protocols
and mechanisms of different layers. The need for
protocol harmonization was realized at the start of the
nineties, where the poor Transmission Control Protocol
(TCP) performance over wireless received a great deal of
attention. For instance, in [7,8] it is reported that link
level retransmissions competing with transport protocol
retransmissions are not only redundant but can degrade
the performance, especially in the case of a higher bit
error rate. This approach was first used for the reduction
of energy consumption in [9] and [10], where error
control schemes are proposed, which perform optimally
with respect to the channel characteristics. We adopt
this approach for the reduction of power drain of an
IEEE 802.11 (see [11]) 2Mbit/s DSSS network interface
using the Distributed Coordination Function. In par-
ticular, we propose a combined tuning of the physical
and MAC layer. The system under study is shown in
Figure
1.
The idea is to reduce energy consumption by reducing
the RF transmission power. But reduction of
RF transmission power causes a higher bit error rate
and results in a higher packet error rate. The IEEE
802.11 MAC reacts with retransmissions of corrupted
packets leading to a higher power drain because of multiple
transmissions of the same packet. By reversing
this idea, it is possible to increase RF power and decrease
the bit error rate and therefore the probability
of retransmissions. But increasing RF power increases
energy consumption. These two ideas lead to a MAC retransmission
and RF transmission power trade-off. We
analyze this trade-off and investigate the optimal operating
points to minimize energy consumption. The
next three sections, Link Budget Analysis, Gilbert-Elliot
Channel Model, and IEEE 802.11, present the basics
necessary to analyze the trade-off. In the sections Energy
Consumption and Investigation of the RF transmit
power influence we show that there is an optimal
value of RF transmission power minimizing the negative
effects of retransmission and in turn energy consumption.
We conclude the paper in Protocol Design
Recommendations with a possible application of the results
to IEEE 802.11 and summarize the paper in the
last section.
2. IEEE 802.11 Link Budget Analysis
We briefly present the basics of top level link budget
analysis (LBA, see [12,13]). As one of the main results
RF power can be calculated for a given set of parameters
and requirements (e.g. level of link reliability).
In our case we assume the IEEE 802.11 2 MBit/s Direct
Sequence Spread Spectrum (DSSS) physical layer,
which uses a DQPSK modulation scheme, and a single
ad hoc network.
Shannon's capacity theorem gives the system capacity
in an ideal environment. The real world system capacity
can approach very closely the theoretical value by
means of modulation. As we can see from equation (2.1)
the channel capacity depends on bandwidth, noise, and
signal strength. The channel capacity C is defined by
C = B · log2(1 + S/N), (2.1)
where B is the bandwidth (Hz), S is the signal strength (watt), and N is the noise (watt). The
thermal channel noise N is defined by N = k · T · B, where k is Boltzmann's constant, T is the
temperature (K), and B is the bandwidth (Hz). An important LBA factor is the range. In free
space the power of the radio signal decreases with
the square of range. The path loss L (dB) for line of
sight (LOS) wave propagation is defined by
L = 20 · log10(4 · π · d / λ),
where d is the distance between transmitter and receiver (meter) and λ is the wavelength (meter). λ is defined
by c/f, where c is the speed of light (3 · 10^8 m/s)
and f is the frequency (Hz). The formula has to be
modified for indoor scenarios, since the path loss is usually
higher and location dependent. As a rule of thumb,
LOS path loss is valid for the first 7 meters. Beyond 7
meters, the degradation is up to 30 dB every meter
(see [13]).
RF indoor propagation very likely results in multi-path
fading. Multi-path causes signal cancellation.
Fading due to multi-path can result in signal reduction
of more than 30 dB. However, signal cancellation
is never complete. Therefore one can add a priori a certain
amount of power to the sender signal, referred to
as fade margin (L_fade), to minimize the effects of signal
cancellation.
Another important factor of LBA is the Signal-to-Noise
Ratio (SNR in dB), defined by
SNR = (E_b / N_0) · (R / B),
where E_b is the energy required per information bit
(watts), N_0 is the thermal noise in 1 Hz of bandwidth
(watts), R is the data rate (bit/s) and B is the
bandwidth (Hz). E_b/N_0 is the required energy per
bit relative to the noise power to achieve a given BER.
It depends on the modulation scheme. In Figure 2 we
show the influence of E_b/N_0 on the bit error rate for
the DQPSK modulation. The SNR gives the required
difference between the radio signal and noise power to
achieve a certain level of link reliability.
Given the equation described above we can compute
the required signal strength at the receiver. In addition
to the channel noise we assume some noise of the
Table 1
Assumed parameters used in Figure 3
Parameter Value
Frequency 2.4 GHz
Channel Noise -111 dBm
Fade Margin 30 dB
Receiver noise figure 7 dB
Antenna gain
Range loss
Modulation DQPSK
Data rate 2 Mbps
Bandwidth (de-spread) 2 MHz
receiver circuits (N_rx in dB). The receiver sensitivity
(P_rx in dBm) is defined by P_rx = N + N_rx + SNR.
Given P_rx we can further compute the required RF
power P_tx (dBm) at the sender as P_tx = P_rx + L + L_fade - G_tx - G_rx,
where G_tx and G_rx are the transmitter and receiver antenna
gain, respectively. In Figure 3 we show for the
IEEE 802.11 2Mbit/s DSSS physical layer the computed
radio transmission power required to achieve a given bit
error rate. The assumed parameters are given in Table
1. It is important to note that we can control the bit
error rate by controlling the transmission power. The
bit error rate has a strong impact on the medium access
control protocol performance.
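The link budget chain described in this section can be followed numerically. The sketch below uses the values of Table 1 where they are given; the E_b/N_0 value, the distance and the antenna gains are example assumptions, since in the paper they come from Figure 2 and the concrete setup.

```c
/* Rough link-budget calculation in the spirit of Section 2. */
#include <math.h>
#include <stdio.h>

int main(void) {
    const double pi        = 3.14159265358979;
    const double freq_hz   = 2.4e9;    /* carrier frequency                  */
    const double noise_dbm = -111.0;   /* thermal channel noise N            */
    const double nf_db     = 7.0;      /* receiver noise figure N_rx         */
    const double fade_db   = 30.0;     /* fade margin L_fade                 */
    const double rate_bps  = 2.0e6;    /* data rate R                        */
    const double bw_hz     = 2.0e6;    /* de-spread bandwidth B              */
    const double ebn0_db   = 10.0;     /* assumed Eb/N0 for the target BER   */
    const double dist_m    = 10.0;     /* assumed transmitter-receiver range */
    const double gains_db  = 0.0;      /* assumed antenna gains G_tx + G_rx  */

    /* SNR = Eb/N0 * R/B, here expressed in dB. */
    double snr_db = ebn0_db + 10.0 * log10(rate_bps / bw_hz);

    /* Required signal level at the receiver input: P_rx = N + N_rx + SNR. */
    double prx_dbm = noise_dbm + nf_db + snr_db;

    /* Free-space (LOS) path loss: L = 20*log10(4*pi*d/lambda). */
    double lambda  = 3.0e8 / freq_hz;
    double loss_db = 20.0 * log10(4.0 * pi * dist_m / lambda);

    /* Transmit power including fade margin and antenna gains. */
    double ptx_dbm = prx_dbm + loss_db + fade_db - gains_db;

    printf("required transmit power: %.1f dBm\n", ptx_dbm);
    return 0;
}
```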
3. Gilbert-Elliot Channel Model
The link budget analysis provides for a given transmission
power a certain bit error rate and vice versa.
The bit errors are assumed to occur independently,
which is far from reality where error bursts are seen.
For instance, in [14] it is shown that the throughput of
a WLAN with parameters similarly chosen is dependent
on position and time. The varying throughput is caused
by varying bit error rates during the measurements. To
consider dynamic changes in the bit error rate we use a
Gilbert-Elliot channel model (see [15]).
The Gilbert-Elliot channel model is basically a two
state discrete time Markov chain (see Figure 4). One
state of the chain represents the Good-State, the other
state represents the Bad-State. In every state errors
occur with a certain bit error probability. In [16] an
analytical solution is proposed, which parameterizes
the Markov chain for DQPSK modulation assuming a
Rayleigh-fading channel and movements of mobile ter-
minals. To improve the accuracy of the model more
than two states in a Markov chain can be used. We
follow this approach in computing the channel model
parameter (see [17]). In the following investigations we
use the two state model. The state sojourn times (be-
tween 1 and 200 milliseconds) and the bit error probability
depend on the bit error rate provided by the link
budget analysis. The Gilbert-Elliot model gives periods
with higher bit error and lower bit error probabilities,
which represents the bursty nature of the bit errors sufficiently.
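A minimal version of such a two-state channel can be simulated as follows. The transition probabilities and per-state error rates below are placeholders; in the paper they are derived from the link budget analysis and the parameterization of [16,17].

```c
/* Minimal two-state Gilbert-Elliot bit-error generator. */
#include <stdio.h>
#include <stdlib.h>

typedef struct {
    double p_good_to_bad;   /* per-bit transition probability G -> B   */
    double p_bad_to_good;   /* per-bit transition probability B -> G   */
    double ber_good;        /* bit error probability in the good state */
    double ber_bad;         /* bit error probability in the bad state  */
    int    bad;             /* current state: 0 = good, 1 = bad        */
} ge_channel_t;

static double urand(void) { return (double)rand() / ((double)RAND_MAX + 1.0); }

/* Returns 1 if the transmitted bit is corrupted, 0 otherwise. */
int ge_bit_error(ge_channel_t *ch) {
    /* state transition first, then draw the error event for this bit */
    if (ch->bad)  { if (urand() < ch->p_bad_to_good) ch->bad = 0; }
    else          { if (urand() < ch->p_good_to_bad) ch->bad = 1; }
    return urand() < (ch->bad ? ch->ber_bad : ch->ber_good);
}

int main(void) {
    ge_channel_t ch = { 1e-5, 1e-4, 1e-6, 1e-3, 0 };   /* example parameters */
    long errors = 0, bits = 10000000;
    for (long i = 0; i < bits; i++) errors += ge_bit_error(&ch);
    printf("observed BER: %g\n", (double)errors / (double)bits);
    return 0;
}
```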
4. IEEE 802.11 Medium Access Control
The responsibility of a Medium Access Control
(MAC) protocol is the arbitration of accesses to a
shared medium among several terminals. In IEEE
802.11 this is done via an Ethernet-like stochastic and
distributed mechanism: Carrier Sense Multiple Access
with Collision Avoidance (CSMA/CA). Since wireless
LANs lack the capability of collision detection, the collision
avoidance mechanism tries to minimize access conflicts
a priori. In general, a MAC packet will be transmitted
immediately after a small sensing interval called
DIFS (Distributed Inter-Frame Space) as long as the
radio channel remains free. If the channel is busy or
becomes busy during sensing, the MAC packet transmission
has to be postponed until the channel becomes
free and an additional waiting time has elapsed during
which the radio channel must remain free. This additional
waiting time consists of a DIFS followed by
a Backoff interval. The Backoff interval is a uniformly
chosen random number from the interval [0,CW] times a
Backoff slot time. CW represents the physical layer dependent
Contention Window parameter. The current
CW value is doubled after every packet transmission
error, which can be caused by bit errors or collisions. In
the following we concentrate on the error control mechanism
of the IEEE 802.11 MAC protocol. For further
details on this MAC protocol the reader is referred to
[18] or [11].
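The backoff behaviour described above can be sketched as follows. The contention window limits use the common DSSS values (31 and 1023), but slot timing and the freezing of the backoff counter are omitted, so this is only a schematic illustration of the CW doubling rule.

```c
/* Schematic sketch of IEEE 802.11 contention-window handling. */
#include <stdlib.h>

#define CW_MIN 31
#define CW_MAX 1023

static int cw = CW_MIN;

/* Draw a backoff counter uniformly from [0, CW] (in units of slot times). */
int draw_backoff_slots(void) {
    return rand() % (cw + 1);
}

/* Called after an unsuccessful transmission (bit errors or collision). */
void on_transmission_error(void) {
    cw = (2 * cw + 1 > CW_MAX) ? CW_MAX : 2 * cw + 1;
}

/* Called after a successful, acknowledged transmission. */
void on_transmission_success(void) {
    cw = CW_MIN;
}
```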
The IEEE 802.11 MAC protocol uses an immediate
acknowledgment (ACK) to recover from transmission
errors. Transmission errors are caused either by bit errors
or by simultaneous channel access by two or more
mobiles (collisions). Figure 5 shows the ACK process-
ing. After a successful data packet reception, an ACK
transmission has to be started after a short interframe
space (SIFS) to indicate the correct reception. If the
reception of a packet was not successful no ACK will
be sent by the receiver. In case the sender received no
ACK, the packet will be retransmitted. The retransmission
is performed either until the data packet was received
correctly and confirmed by an ACK or the maximum
number of retransmissions is reached according
to the MAC rules. These retransmissions increase the
overall energy needed to transmit the packet. Energy
consumption can be reduced by reducing the number of
retransmissions. This in turn can be achieved by improving
the signal quality due to a higher transmission
power. But an increase of the transmit power also leads
to an increase in energy consumption which is counter-productive
to the goal of reducing the consumed energy.
Therefore the number of retransmissions and transmission
power have to be carefully balanced to reduce energy
consumption.
5. Energy consumption
Our goal is to achieve an optimal operating point
with respect to energy consumption of an IEEE 802.11
DSSS LAN. Therefore we look for a certain RF transmission
power level where the retransmission effects of
the MAC protocol are traded best. In an ideal case,
where no bit errors, no collisions, and no protocol overhead
occur, the energy E_ideal (Ws) required to transmit
data equals the duration of the data transmission
T times the mean transmitted power P_tx: 1
E_ideal = T · P_tx. (5.1)
The transmission time for the ideal case can be computed
from the bit time (T_bit) and the number of transmitted
data bits (B_succ). Hence, from equation (5.1)
we get
E_ideal = T_bit · B_succ · P_tx (5.2)
for the required energy, whereas
E_bit_ideal = T_bit · P_tx (5.3)
is the energy required to transmit one bit in the ideal
case.
In reality, the energy to transmit data will be higher
due to protocol overheads and retransmissions, taking
errors and collisions into account. Therefore we introduce
the coefficient η_pr, which we call protocol efficiency:
η_pr = B_succ / B_all, (5.4)
where B_succ is the number of successfully transmitted
data bits and B_all is the number of overall transmitted
bits. The latter includes MAC control packets, successfully
transmitted and retransmitted data bits and MAC
packet header and trailer. η_pr indicates how efficiently
the protocol works during the transmission phase. In
other words, η_pr indicates in the long run how much payload
data is contained in every transmitted bit. The
range of η_pr is between 0 and 1, whereas the value 1 will
never be achieved because of physical and MAC layer
overheads. By rewriting Eqn. (5.2) and taking (5.4)
into consideration we get
1 Note that we only consider P_tx. Additional power is required
to keep the entire or parts of the network interface card active
for transmission or reception.
E_res = T_bit · B_all · P_tx = E_ideal / η_pr, (5.5)
the resulting energy, which now considers the total
number of transmitted bits (B_all) needed to get the data bits
over the radio link. The following equation
E_bit_res = E_bit_ideal / η_pr = (T_bit · P_tx) / η_pr (5.6)
represents the resulting bit energy, which is eventually
needed to transmit one data bit successfully.
E_bit_res incorporates the fact that one has to send several
overhead bits before getting one data bit successfully
over the radio link.
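A small numerical example makes the effect of η_pr on E_bit_res tangible. The bit time corresponds to the 2 Mbit/s data rate; the transmit power value is an assumed example, and the equation references follow (5.3) and (5.6) as given above.

```c
/* Energy per successfully delivered data bit as a function of eta_pr. */
#include <math.h>
#include <stdio.h>

int main(void) {
    const double t_bit   = 1.0 / 2.0e6;           /* bit duration at 2 Mbit/s [s] */
    const double ptx_dbm = 16.0;                  /* assumed transmit power       */
    const double ptx_w   = 1e-3 * pow(10.0, ptx_dbm / 10.0);

    double e_bit_ideal = t_bit * ptx_w;           /* eq. (5.3) */

    const double etas[] = { 0.1, 0.3, 0.5, 0.7, 0.9 };
    for (int i = 0; i < 5; i++) {
        double e_bit_res = e_bit_ideal / etas[i]; /* eq. (5.6) */
        printf("eta_pr = %.1f  ->  E_bit_res = %.3e Ws\n", etas[i], e_bit_res);
    }
    return 0;
}
```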
6. Investigation of the RF transmit power influence
To investigate the RF transmission power and MAC
retransmission trade-off we performed discrete event
simulations (DES). The simulation model for the system
under investigation (see Figure 1) is composed of
three parts as described above: the link budget analysis,
the Gilbert-Elliot channel model, and the IEEE 802.11
DCF model. The simulation parameters are shown in
Table 2 (see also Table 1).
We used a relatively static simulation setup to investigate
the power control and MAC trade-off. The simulated
WLAN network operates in ad-hoc mode, that
is, there is no access point which arbitrates the channel
access. Further we consider a single ad-hoc radio
cell. Implications of other radio cells (e.g. interference)
are not taken into account. Each mobile is in transmission
range of all other mobiles. The (mean) distance
between a sending and a receiving mobile is assumed
to be meters. Mobility is covered by the bit error
model, which allows changes in the bit error rate (good/bad
state) over time. It is further assumed that for
each sender/receiver pair an independent radio channel
exists, i.e. while one station receives a packet correctly
other stations might receive the same packet incorrectly.
Every mobile has a packet ready to send at every point
in time. Therefore all mobiles are involved in every
channel access cycle. A mobile always sends a packet
to its successor, which is determined by the mobile's
identifier. A packet will be sent at a constant transmission
power to another mobile.
In the following we present the protocol efficiency
η_pr and the energy E_bit_res used to successfully transmit one
data bit, as obtained from our simulation results. To
rate these results we also present the channel access delay.
We define the channel access delay as the interval
between the time when the MAC takes a packet from
the MAC queue to transmit it and the start time of
the successful transmission attempt. Figure 6 shows
the protocol efficiency dependence on the transmission
power 2 used. The parameter of the curves is the number
of mobiles in an ad-hoc network. The graph shows
that the protocol efficiency is very small for a relatively
low transmission power of 14 dBm (a BER of 10^-4, see
Figure 3). The primary reason is corrupted packets,
which have to be retransmitted by the MAC protocol.
As a result the protocol efficiency is low. By increasing
the transmission power, the protocol efficiency increases
fast up to a certain level, which depends on the number
of stations in the ad-hoc network. An increased transmission
power is equivalent to a smaller BER, which
results in a better protocol efficiency. The reason for
the better protocol efficiency for a smaller number of
mobiles can be explained as follows: A large number
2 The transmission power is a (nonlinear) equivalent for the bit
error rate (see section 2).
Table 2
Simulation parameters
Parameter Value
Number of Mobiles 2, 4, 8, 16
Packet sizes 64 ... 2312 Byte
Traffic Load > 100%
of mobiles results in more collisions during the access
phase since all mobiles have packets to transmit, which
leads to a smaller protocol efficiency. Furthermore, it is
important to note that if the transmission power reaches
a certain level, only a marginal increase of protocol efficiency
can be reported. That indicates that the optimal
operating point is in the region where the curves
start to flatten out (approximately 16 dBm for 512 Byte
packets). This behavior is independent of the number
of mobiles. Figures 14 and 17 (see Appendix) show the
same behavior for very small (64 Byte) and very large
(2312 Byte) MAC packets. We observe that the protocol
efficiency remains smaller for 64 Byte packets and a
little bit higher using 2312 Byte packets.
Figure 7 shows E_bit_res vs. the transmission power
for 512 Byte packets. The curve parameter is the number of
mobiles. The graph clearly indicates that there is an optimal
transmission power providing the smallest E_bit_res
value, that is, when energy consumption for the transmission
phase is at its lowest level. This optimal transmission
power is nearly independent of the number of
stations. Figures 15 and 18 show the results for 64 and
2312 Byte packets, respectively. The graphs show the
same behavior as for 512 Byte packets. There is only
one important difference. With increasing packet size
the optimal transmission power leading to the smallest
E_bit_res value increases. In other words, for smaller
packets a smaller transmission power should be chosen. The shape of
the curve is affected by the protocol efficiency. Before
reaching the optimal transmission power (around 16
dBm for 512 Byte packets) a large amount of energy is
wasted for retransmissions, resulting in a low protocol efficiency.
After the optimal point of transmission power,
a large amount of energy is unnecessarily sent out because
the protocol efficiency only increases marginally in
this range.
The access delay curves (see Figure 8) reveal that
at the optimum transmission power the lowest achievable
channel access delay is nearly reached. Very small
transmission power levels for a certain packet size are
very harmful since the access delay grows fast, while for
higher power levels the access delay does not improve
significantly. In particular for very large packets it is
important that the power level is at its optimum or
higher since the channel access delay goes in the region
of seconds if the used transmission power is too low (see
Figure
19).
The figures clearly indicate that there is an optimal
transmission power for a certain packet size and that
this power is nearly independent of the number of stations.
Therefore we investigate the influence of packet
size in further detail. In Figures 9 and 10, η_pr and
E_bit_res are shown for different packet sizes. The curve
parameter is the bit error rate, which is a (nonlinear)
equivalent to the transmitted power (see Figure 3). The
number of stations is fixed to 4. In Figures 20 and 21
(see Appendix) the similar curves for 16 mobiles
are shown. The protocol efficiency graph indicates that for
low bit error rates larger packets have
the best performance. For bit error rates higher than
10^-5 an optimal packet size is visible. This is around
500 Bytes. The reasons are twofold. First, for small
packets the protocol efficiency is mainly influenced by
the MAC. The collision and protocol overheads take the
main share of bandwidth. For long packets the MAC
plays a minor role, but long packets will be corrupted
with a higher probability, resulting in retransmissions.
The graphs for E_bit_res (see Figures 10 and 21) reflect
this behavior. 500 Byte packets show the best performance
for high error conditions, while for low error conditions
packets should be as large as possible.
7. Protocol Design Recommendations
Our results clearly indicate a strong correlation between
the MAC and the physical layer. A poorly selected
transmission power may result in a waste of en-
ergy. In other words, MAC protocols need fine tuning
according to the underlying physical layer and channel
characteristics and vice versa. Therefore we will
elaborate on how we can achieve a reduction of energy
consumption in WLANs using our results.
7.1. Fixed high RF power and large sized MAC packets
Today's Internet traffic carries packets of different
size. Assuming that a WLAN network interface experiences
this kind of traffic, one way to reduce energy is to
adapt the packet size according to the used RF transmission
power level. The highest power saving gain could
be achieved if only large packets (e.g. > 1000 Bytes)
with the appropriate high transmission power would be
transmitted by the WLAN interface. This can be concluded
from the fact that the optimal energy per successfully
transmitted bit value (E_bit_res) is lowest for the
largest possible MAC packet size (2312 Byte, see Figures
7, 15 and 18). But having Internet traffic in mind,
where a large portion of the packets are smaller than or equal
to 512 Bytes, a MAC packet assembly mechanism is
required to build up large packets. MAC packet assembly
is not an easy task and might be counterproductive
with respect to energy consumption. For instance, it is
not easy to resolve which packets should be assembled
in one large packet and how long to wait to fill
up a large packet. Furthermore, an assembly of packets
which are directed to different receivers into one
large packet would require that every mobile station
is awake to receive the big packet and check whether
there is a packet for itself in the large packet. That
might lead to unnecessary awake times of mobile stations
and result in a waste of energy. Last but not least,
the IEEE 802.11 standard does not specify an assembly
mechanism, which makes this method impractical to
apply. Despite that, we believe that the energy
saving potential of a carefully designed MAC assembly
mechanism will outweigh the drawbacks.
7.2. Fixed medium RF power and medium sized MAC
packets
The following proposed opportunity to reduce energy
consumption of IEEE 802.11 network interfaces appears
to be the simplest and most realizable since it does not
require any changes to the existing IEEE 802.11 stan-
dard. The idea is to use medium sized MAC packets of
about 512 Byte and transmit them with the fixed optimal
RF power. This is based on the observation that
for our assumed conditions 512 Byte packets seem to
have a good performance except at very low bit error
rates. The E_bit_res value for 512 Byte packets is relatively
close to the E_bit_res for large packets (see Figures
7 and 18). To achieve this, large packets have to be
fragmented into 512 Byte chunks. Small packets should
be left as they are, since MAC packet assembly is a difficult
task as we explained above. They are transmitted
with the same RF power as the 512 Byte packets. MAC
packet fragmentation is specified in the IEEE 802.11
standard and supported in nearly every commercially
available WLAN product.
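A fragmentation step of this kind is easy to sketch. The code below only shows the splitting of a packet into roughly 500 Byte fragments; fragment numbering, the more-fragments flag and per-fragment acknowledgments of real IEEE 802.11 fragmentation are left out, and the callback interface is an assumption of the sketch.

```c
/* Splitting an MSDU into ~500-byte fragments before transmission. */
#include <stddef.h>

#define FRAG_SIZE 500   /* fragment payload size used in the simulations */

typedef void (*send_fn)(const unsigned char *data, size_t len, int more_fragments);

/* Splits a packet into FRAG_SIZE chunks and hands them to the sender. */
void fragment_and_send(const unsigned char *pkt, size_t len, send_fn send) {
    size_t offset = 0;
    while (offset < len) {
        size_t chunk = len - offset;
        if (chunk > FRAG_SIZE)
            chunk = FRAG_SIZE;
        int more = (offset + chunk < len);   /* more fragments follow? */
        send(pkt + offset, chunk, more);
        offset += chunk;
    }
}
```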
We simulated this approach. For that purpose we
analyzed a half hour traffic trace file of a 10 MBit/s
Ethernet segment connecting the main campus of Harvard
University (USA) with the Internet in the year
1997 (see [19]). We extracted a packet size distribution
of the TCP (Transmission Control Protocol) traffic from
the trace file as shown in Figure 11 and incorporated the
distribution in our traffic generation model. As stated
in [20], the TCP traffic makes up a great share (up to
90%) of the overall network traffic 3 . We did not sample
the inter-arrival times of the packets from the trace file,
since it is not an easy task to scale from 10 MBit/s to
2 Mbit/s, where the latter is the transmission speed of
IEEE 802.11. Network traffic, especially Internet and
LAN traffic, is in general very bursty (see e.g. [20] and
[21]). We accomplish the burst characteristic of the
traffic by means of the Pareto distribution, which exhibits
a heavy tail characteristic. The shape parameter α of
the Pareto distribution was set to the value 1.5. The k
parameter was used to control the traffic intensity.
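Pareto-distributed inter-arrival times of this kind can be drawn by inverse transform sampling, using F(x) = 1 - (k/x)^α for x ≥ k. The α value follows the text; the k value below is only an example scale, since the actual value used to set the traffic intensity is not given here.

```c
/* Drawing Pareto-distributed inter-arrival times by inversion. */
#include <math.h>
#include <stdio.h>
#include <stdlib.h>

double pareto_sample(double alpha, double k) {
    double u;
    do { u = (double)rand() / ((double)RAND_MAX + 1.0); } while (u == 0.0);
    return k / pow(u, 1.0 / alpha);   /* inverse transform sampling */
}

int main(void) {
    const double alpha = 1.5;        /* shape parameter, as in the text */
    const double k     = 0.002;      /* example scale parameter [s]     */
    double sum = 0.0;
    for (int i = 0; i < 100000; i++)
        sum += pareto_sample(alpha, k);
    printf("mean inter-arrival time: %g s\n", sum / 100000.0);
    return 0;
}
```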
In Figures 12 and 13 we show E_bit_res over the
normalized network load for 4 and 16 mobiles, where
all mobiles use a source model as described above. We
simulated with a bit error rate of 10^-4 and 10^-5, respec-
3 Traffic shares of protocols are recently changing mainly due to
the availability of multimedia software and services, which rely
on UDP (User Datagram Protocol).
tively 4 . Furthermore, we used the ability of the MAC to
fragment packets into smaller packets. On the one hand
MAC level fragmentation adds overhead due to protocol
headers and necessarily more channel accesses. On the
other hand smaller packets are less likely to be erroneous
due to bit errors. In our simulations we fragmented
packets whereas the fragment size was set to
500 Byte according to the previously achieved results.
The figures show that MAC level fragmentation has
its advantages when the bit error rate is higher than
10^-5. The improvement is relatively high, taking into
account that mobiles very rarely send large packets (i.e.
backbone access traffic) which can be fragmented. Assuming
networks with more local traffic (department,
office LANs) where the mean packet size is larger, an
even higher improvement can be anticipated when using
MAC level fragmentation. The curves also show that
fragmentation should not be used if the radio channel
quality is good (BER < 10^-5). Fragmentation adds unnecessary
overhead in that case. The slight ascent in
the graph is a result of increased collision probability
due to increased load. In addition, the more mobiles
are located in a radio cell, the higher E_bit_res. Again,
this is a result of higher collision probability.
7.3. Variation of transmit power
In contrast to the two proposals we made above, it is
also possible to adapt the RF transmission power according
to the packet size, assuming that a WLAN experiences
some kind of Internet traffic with varying packet
sizes. This can be done by power control. From the
simulation results (see Figures 7, 15 and 18) we can conclude
that small packets should be sent with a lower
RF transmit power while larger packets should be sent
with a higher RF transmit power 5 . Of course the ac-
4 The packets are sent at the current optimal transmit power for
500 Byte packets regardless of the actual packet size.
5 So far, the main objectives of power control are minimizing the
interferences in multi-radio cell configurations and maximizing
the system capacity. The algorithms used for these goals should
also be taken into account when choosing a power level to minimize
the energy consumption.
tual values of transmit power depend on the WLAN
setup like range, transmission speed and environmental
circumstances.
Although a power control algorithm is not specified
in the standard, IEEE 802.11 provides two means of
power control support. First, it defines different power
levels, whereby up to 4 and 8 power levels are allowed
for DSSS and FHSS, respectively. The values for these
power levels are not defined and therefore implementation
dependent. The approach presented here might
be used for a meaningful setting of the power levels
with respect to energy consumption. For instance, the
power level range should be set from about 15 to 17
dBm for the assumptions we have made. The choice of
a transmit power for sending a packet with a certain
packet size should in general tend to a higher transmission
power since this is less harmful with respect to
energy consumption and channel access delays. Second,
IEEE 802.11 defines a Received Signal Strength Indicator
(RSSI). This indicates the received energy of
a signal and can have a value from 0 up to 256 and
for DSSS and FHSS, respectively. A power control
mechanism should exploit this value to obtain information
about the current channel state. By means of this
information, additional or less RF transmission power
can be chosen relative to the packet-size-dependent
value of transmission power. That of course
requires that the receiver passes this information to the
sender. This information could be obtained by means
of the immediate acknowledgment which follows a successful
packet reception. Even if the packet or the acknowledgment
gets lost, the packet sender can assess
the channel state and might in turn stop transmission
for a while or resend the packet with more energy. Such
an approach and the quantification of the gain are the subject
of our current research.
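The packet-size-dependent choice of a power level could look like the following sketch. The thresholds and the 15 to 17 dBm range follow the discussion above, while the RSSI-based correction is a hypothetical refinement of the kind outlined in the text, not something defined by the standard.

```c
/* Illustrative mapping from packet size (plus an optional channel estimate)
 * to one of a few discrete transmit power levels. */
typedef enum { PWR_15_DBM, PWR_16_DBM, PWR_17_DBM } power_level_t;

power_level_t select_tx_power(unsigned packet_bytes, int channel_is_bad) {
    power_level_t level;

    if (packet_bytes <= 128)
        level = PWR_15_DBM;          /* small packets: lower power suffices        */
    else if (packet_bytes <= 512)
        level = PWR_16_DBM;          /* around the optimum for 512-byte packets    */
    else
        level = PWR_17_DBM;          /* large packets need the higher optimum      */

    /* When feedback (e.g. derived from RSSI reports) indicates a bad channel,
     * prefer the next higher level; the text argues this is less harmful than
     * choosing a level that is too low. */
    if (channel_is_bad && level != PWR_17_DBM)
        level = (power_level_t)(level + 1);

    return level;
}
```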
8. Summary
In this paper we study the mutual influences of the
medium access protocol and the physical layer with respect
to energy consumption for an IEEE 802.11 LAN.
We showed that by harmonizing these different protocol
levels, that is to say sending MAC packets with their
optimal transmit power and exploiting various MAC
level mechanisms, a substantial reduction in energy consumption
is achievable. For the upcoming ultra low
powered micro radios the power saving gain will be even
higher since the share of power consumed by signal and
protocol processing will be smaller. The approach used
here is general and may be used for any wireless sys-
tem. This approach might also be extended to higher
protocol layers such as link error control, transport or
application layer.
Acknowledgements
We would like to thank Andreas Kopsel for his contribution
to the simulation model. We would also like
to thank the reviewers for their detailed and fruitful
comments.
--R
Reducing Power Consumption of Network Interfaces in Hand-Held Devices
Energy Consumption Performance of a class of Access Protocols for Mobile Data Networks
PAMAS: Power Aware Multi-Access protocol with Signalling for Ad Hoc Networks
Digitale Mobilfunksysteme.
Throughput Performance of Transport-Layer Protocols over Wireless LANs
Low Power Error Control for Wireless Links.
Digital Communications - Fundamentals and Applications
Tutorial on Basic Link Budget Analysis.
Capacity of a burst-noise channel
Finite state Markov channel - a useful model for radio communication channels
A Gilbert-Elliot Bit Error Model and the Efficient Use in Packet Level Simulation
Harvard Network Traces and Analyses.
Patterns and Characteristics
On the Self-Similar Nature of Ethernet Traffic
--TR
--CTR
Z. Sun , X. Jia, Energy Efficient Hybrid ARQ Scheme under Error Constraints, Wireless Personal Communications: An International Journal, v.25 n.4, p.307-320, July
Daji Qiao , Sunghyun Choi , Kang G. Shin, Interference analysis and transmit power control in IEEE 802.11a/h wireless LANs, IEEE/ACM Transactions on Networking (TON), v.15 n.5, p.1007-1020, October 2007
Daji Qiao , Sunghyun Choi , Amit Jain , Kang G. Shin, MiSer: an optimal low-energy transmission strategy for IEEE 802.11a/h, Proceedings of the 9th annual international conference on Mobile computing and networking, September 14-19, 2003, San Diego, CA, USA | power saving;MAC;IEEE 80211;retransmission;protocol harmonization;energy saving;power control;vertical optimization;WLAN |
384218 | Compiler Design for an Industrial Network Processor. | One important problem in code generation for embedded processors is the design of efficient compilers for ASIPs with application specific architectures. This paper outlines the design of a C compiler for an industrial ASIP for telecom applications. The target ASIP is a network processor with special instructions for bit-level access to data registers, which is required for packet-oriented communication protocol processing. From a practical viewpoint, we describe the main challenges in exploiting these application specific features in a C compiler, and we show how a compiler backend has been designed that accomodates these features by means of compiler intrinsics and a dedicated register allocator. The compiler is fully operational, and first experimental results indicate that C-level programming of the ASIP leads to good code quality without the need for time-consuming assembly programming. | INTRODUCTION
The use of application specific instruction set processors
(ASIPs) in embedded system design has become quite common.
ASIPs are located between standard "off-the-shelf"
programmable processors and custom ASICs. Hence, ASIPs
represent the frequently needed compromise between high
efficiency of ASICs and low development effort associated
with standard processors or cores. While being tailored towards
certain application areas, ASIPs still offer programmability
and hence high flexibility for debugging or upgrading.
Industrial examples for ASIPs are Tensilica's configurable
Xtensa RISC processor [1] and the configurable Gepard DSP
core from Austria Micro Systems [2].
Like in the case of standard processors, compiler support
for ASIPs is very desirable. Compilers are urgently required
to avoid time-consuming and error-prone assembly programming
of embedded software, so that fast time-to-market and
dependability requirements for embedded systems can be
met. However, due to the specialized architectures of ASIPs,
classical compiler technology is often insufficient, and fully
exploiting the processor capabilities demands more dedicated
code generation and optimization techniques.
A number of such code generation techniques, intended to
meet the high code quality demands of embedded systems,
have already been developed. These include code generation
for irregular data paths [3, 4, 5, 6, 7], address code
optimization for DSPs [8, 9, 10, 11], and exploitation of
multimedia instruction sets [12, 13, 14]. It has been shown
experimentally that such highly machine-specific techniques
are a promising approach to generate high-quality machine
code, whose quality often comes close to hand-written assembly
code. Naturally, this has to be paid for with increased
compilation times in many cases.
While partially impressive results have been achieved in code
optimization for ASIPs in the DSP area, less emphasis has
been so far on a new and important class of ASIPs for bit-serial
protocol processing, which are called Network Processors
(NPs). The design of NPs has been motivated by
the growing need for new high bandwidth communication
equipment in networks (e.g. Internet routers and Ethernet
adapters) as well as in telecommunication (e.g. ISDN and
xDSL). The corresponding communication protocols mostly
employ bitstream-oriented data formats. The bitstreams
consist of packets of different length, i.e. there are variable
length header packets and (typically longer) payload pack-
ets. Typical packet processing requirements include decod-
ing, compression, encryption, or routing.
A major system design problem in this area is that the required
high bandwidth leaves only a very short time frame
(as low as a few nanoseconds) for processing each bit packet
arriving at a network node [15]. Even contemporary high-end
programmable processors can hardly keep pace with the
required real-time performance, not to mention the issue
of computational efficiency, e.g. with respect to power consumption.
There are several approaches in ASIC design that deal with
efficient bit-level processing. In [16] it is shown that narrow
bitwidth operations are detectable in hardware at runtime.
The processor uses the knowledge about the bitwidth of an
operation either to reduce power consumption or to increase
performance. Furthermore it is possible to identify a significant
number of unused bits at compile time. According to
[17] up to 38% of all computed most significant bits in the
SpecINT95 benchmarks are discarded. Therefore, for efficient
use of hardware operations, reduced bitwidth application
specific processors might achieve a reasonable saving on
hardware resources. The approaches in [18, 19, 20] use information
on the bitwidth of operands for reducing the size of the
datapath and functional units in a reconfigurable processor.
All these solutions require highly application specific hardware.
On the other hand, the design of hardwired ASICs is
frequently not desirable, due to the high design effort and
low flexibility. As a special class of ASIPs, NPs represent
a promising solution to this problem, since their instruction
sets are tailored towards efficient communication protocol
processing. The advantage of this is exemplified in the following
example.
Since the memories of transmitters and receivers normally
show a fixed wordlength (e.g. 8 or 16 bits), relatively expensive
processing may be required on both sides when using
standard processors (fig. 1): At the beginning of a communication,
the packets to be transmitted are typically aligned
to the word boundaries of the transmitter. For storing these
words into the send buffer, they have to be packed into the
bitstream format required by the network protocol. After
transmission over the communication channel, the packets
have to be extracted again at the receiver side, so as to
align them to the receiver wordlength, which may even be
different from the transmitter wordlength.
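As an illustration of the kind of conversion a standard processor has to perform on the transmitter side, the following C sketch appends a packet of arbitrary bit width to a byte-aligned send buffer using explicit shift and mask operations (the function and variable names are illustrative assumptions, not taken from any particular protocol stack):

#include <stdint.h>

/* Append a "width"-bit packet (MSB first) to a byte-aligned buffer.
   "*bitpos" tracks the current write position in bits. */
static void put_packet(uint8_t *buf, unsigned *bitpos,
                       uint32_t value, unsigned width)
{
    for (unsigned i = 0; i < width; i++) {
        unsigned bit  = (value >> (width - 1 - i)) & 1u;
        unsigned byte = (*bitpos + i) / 8;
        unsigned off  = 7 - ((*bitpos + i) % 8);
        buf[byte] = (uint8_t)((buf[byte] & ~(1u << off)) | (bit << off));
    }
    *bitpos += width;
}

The receiver side has to run the inverse extraction loop, which is exactly the data conversion overhead discussed here.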
Obviously, this data conversion overhead reduces the benefits
of the bitstream-oriented protocol. In contrast, NPs may
be designed to be capable of directly processing bit packets
of variable length, i.e. in the form they are stored in the receive
buffer. This feature largely reduces the data transport
overhead.
NPs are relatively new on the semiconductor market. There
are only a few standard chips (e.g. from Intel and IBM), and
several in-house designs (see the overview in [15], which also
describes NP development efforts at STMicroelectronics).
In this paper, we focus on a specific machine, the Infineon
Technologies Network Processor [21], for which an ANSI C
compiler has been developed within an industrial project.
The design of a C compiler for the Infineon NP has been an
important goal in order to avoid time-consuming assembly
programming and to ensure a relatively compiler-friendly architecture
by means of "processor/compiler codesign". As
also observed in [15], this approach turns out to be essential
in order to avoid an expensive compiler/architecture mismatch
right from the beginning.
Figure 1: Communication via bitstream-oriented protocols (transmitter memory and send buffer, network stream, receive buffer and receiver memory)
However, efficient compiler design for NPs is at least as challenging
as for DSPs, since the dedicated bit-packet oriented
instructions are not easily generated from a high-level language
like C. In contrast to the approach taken in [15], which
is based on the retargetable FlexWare tool suite [22], we decided
to develop a nearly full custom compiler backend. This
is essentially motivated by the need to incorporate C language
extensions and a dedicated register allocator, which
will become clear later. Another approach related to our
work is the Valen-C compiler [23], a retargetable compiler
that allows the specification of arbitrary bitwidths of C variables.
However, there is no direct support for NP applications.
The purpose of this paper is to show how a complete C
compiler for an advanced NP architecture has been imple-
mented, and to describe the required new code generation
techniques. We believe that these or similar techniques can
also be used for further NP architectures, for which a growing
compiler demand may be expected in the future.
The remainder of this paper is structured as follows. In section
2, the Infineon NP architecture and its instruction set
are described in more detail. Section 3 outlines the problems
associated with modeling bit-level processing in the C language.
The next two sections describe the actual compiler
design, which is coarsely subdivided into frontend (section 4)
and backend (section 5) components. Experimental results
are presented in section 6. Finally we give conclusions
and mention directions for future work.
2. TARGET ARCHITECTURE
Fig. 2 shows the overall architecture of our target machine,
the Infineon NP [21]. The NP core shows a 16-bit RISC-like
basic architecture with special extensions for bit-level data
access. This principle is illustrated in fig. 3.
Figure 2: Infineon NP architecture (NP core with code memory and buffer I/O)
Figure 3: Processing of variable length bit packets: packets of different lengths within machine words are operated on by the ALU
The NP instruction set permits performing ALU computations
on bit packets which are not aligned to the processor
wordlength.² A packet may be stored in any bit index
subrange of a register, and a packet may even span up to
two different registers. In this way, protocol processing can
be aligned to the required variable packet lengths instead
of the fixed machine wordlength. However, this packet-level
addressing is only possible within registers, not within memory.
Therefore, partial bitstreams have to be loaded from
memory into registers before processing on the ALU can
take place (fig. 4).
Figure 4: Data layout in memory and registers: bit packets are not aligned to memory or register wordlengths
In order to enable packet-level addressing of unaligned data,
the NP instruction set permits the specification of offsets
and operand lengths within registers. This is shown in figs.
5 and 6: a bit packet is addressed by means of the corresponding
register number, its offset within the register, and
the packet bit width. If offset plus width are larger than
the register wordlength (16 bits), then the packet spans
over two registers (without increasing the access latency,
though). Especially this feature is very challenging from a
compiler designer's viewpoint. The width of argument and
result packets must be identical, and one of the two argument
registers is also the result location of any ALU operation.
Therefore, two offsets and one width parameter per
instruction are sufficient.
² Also some standard processors, such as the Intel i960 processor
family, support bit-oriented data access, however without
corresponding arithmetic capabilities.
Figure 5: NP assembly instruction format: CMD Reg1.Off, Reg2.Off, Width (command, register number and offset within register for each operand, and the common width of both operands)
Figure 6: Packet-level addressing within registers: a packet of a given width starts at an offset within register n and may extend into register n+1 (machine word size: 16 bits)
3. BIT PACKET PROCESSING IN C
Although possible, the description of bit packet-level addressing
in the C language is inconvenient, since it can only
be expressed by means of a rather complex shift and masking
scheme. An example is given in fig. 7, which shows a
fragment of a GSM speech compression algorithm (implementation
by the Communications and Operating Systems
Research Group at the TU Berlin). Here, some pointer c
is used to traverse an array of unaligned bit packets as has
been shown in fig. 4.
gsm_byte *c;
word xmc[8];
...
Figure 7: Bit packet processing example from GSM speech compression
Obviously, this is not a convenient programming style, and
the situation becomes even worse in case of multi-register
packets. The code readability (and thus maintainability)
is poor, and furthermore the masking constants generally
make the code highly machine-dependent.
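To give a flavor of the shift-and-mask style that fig. 7 refers to, the following sketch packs eight 3-bit samples from xmc[] into the byte stream pointed to by c. It is only an illustration, not the original GSM source; the 3-bit sample width and the bit positions are assumptions chosen for the example.

typedef unsigned char gsm_byte;
typedef short word;

/* Pack eight 3-bit samples into three consecutive bytes. */
static void pack_xmc(gsm_byte *c, const word xmc[8])
{
    *c++ = (gsm_byte)(((xmc[0] & 0x7) << 5) | ((xmc[1] & 0x7) << 2)
                     | ((xmc[2] >> 1) & 0x3));
    *c++ = (gsm_byte)(((xmc[2] & 0x1) << 7) | ((xmc[3] & 0x7) << 4)
                     | ((xmc[4] & 0x7) << 1) | ((xmc[5] >> 2) & 0x1));
    *c   = (gsm_byte)(((xmc[5] & 0x3) << 6) | ((xmc[6] & 0x7) << 3)
                     |  (xmc[7] & 0x7));
}

Every packet boundary that does not coincide with a byte boundary requires its own masking constants, which is exactly what makes such code hard to read and machine-dependent.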
3.1 Use of compiler-known functions
As outlined in section 2, the NP instruction set makes it possible to
avoid these costly shift and mask operations by means of
special instructions for packet-level addressing. In the C
compiler, bit packet manipulation is made visible to the
programmer by means of compiler-known functions (CKFs)
or compiler intrinsics. The compiler maps calls to CKFs
not into regular function calls, but into fixed instructions
or instruction sequences. Thus, CKFs can be considered as
C-level macros without any calling overhead. The CKF approach
has also been used in several C compilers for DSPs,
e.g. for the Texas Instruments C62xx.
Using CKFs, the programmer still has to have detailed knowledge
about the underlying target processor, but the readability
of the code is improved significantly. In addition, by providing
a suitable set of simulation functions for the CKFs, C
code written for the NP is no longer machine-dependent but
can also be compiled to other host machines for debugging
purposes.
We illustrate the use of CKFs with a simple example. Consider
a case where we would like to add the constant 2 to
a 7-bit wide packet stored in bits 2 to 8 of some register
denoted by the C variable a. In standard C this can only be
expressed by means of a complex assignment as follows:
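A possible form of this assignment (a sketch; it assumes the packet occupies bits 2 to 8 of a, i.e. offset 2 and width 7, and that the result wraps around within the 7-bit field) is:

a = (a & ~(0x7F << 2))                        /* clear bits 2..8          */
  | (((((a >> 2) & 0x7F) + 2) & 0x7F) << 2);  /* add 2, re-insert packet  */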
On a standard processor, this would translate into a relatively
large instruction sequence. In contrast, the NP can
implement the entire assignment within a single instruction.
For this purpose, we introduce a packet access (PA) CKF as
shown in fig. 8. The CKF directly reflects the packet-level
instructions illustrated in figs. 5 and 6. The Operator
parameter selects the operation (e.g. ADD, SUB, SHIFT,
...) to be performed on the arguments VarName1 and
VarName2. In addition, the required intra-register offsets
and the packet bitwidth are passed to function PA.
Figure 8: Format of compiler-known function PA:
PA(int Operator, int VarName1, int Off1, int VarName2, int Off2, int Width)
(Operator: operator name; VarName1: name of result and first operand; Off1: offset of the packet within the first operand; VarName2: name of the second operand; Off2: offset of the second packet; Width: packet width of both operands)
Using function PA, the above example can be expressed very
simply in C as follows:
int a, b;
PA(PA_ADD, a, 3, b, 0, W7);
Here, PA_ADD (selecting an ADD instruction) and W7
(specifying a bitwidth of 7) are defined as constants in a
C header file. The scalar variables a and b are mapped to
registers in the assembly code by the register allocator.
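For host-level debugging, the simulation functions mentioned above can be provided as ordinary C code. The following sketch shows one possible emulation of the PA CKF for single-register packets (the names pa_sim, PA_ADD and PA_SUB are assumptions for illustration, not the vendor-supplied library; packets spanning two registers are not handled):

#include <stdint.h>

enum { PA_ADD, PA_SUB };

static uint32_t pa_sim(int op, uint32_t dst, int off1,
                       uint32_t src, int off2, int width)
{
    uint32_t mask = (width >= 32) ? 0xFFFFFFFFu : ((1u << width) - 1u);
    uint32_t x = (dst >> off1) & mask;          /* first operand packet  */
    uint32_t y = (src >> off2) & mask;          /* second operand packet */
    uint32_t r = (op == PA_ADD) ? (x + y) : (x - y);
    return (dst & ~(mask << off1)) | ((r & mask) << off1);
}

/* The CKF writes its result back into the first operand: */
#define PA(op, v1, off1, v2, off2, w) \
    ((v1) = (int)pa_sim((op), (uint32_t)(v1), (off1), (uint32_t)(v2), (off2), (w)))

With such a header in place, code using PA compiles and runs unchanged on a workstation, while the NP compiler replaces the call by a single machine instruction.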
3.2 Bit packet access in loops
As exemplified above, CKFs provide an elegant way to reduce
the description effort of bit packet processing in C.
Most urgently, CKFs are required in the case of bit packet arrays
that are indirectly accessed within loops, so as to avoid
unacceptable quality of compiler-generated code.
Figure 9: Array of bit packets
Consider the example in fig. 9, where we have to process an
array of 8 packets, each 10 bits long. The bit packet
array in turn is stored in an array of 5 16-bit registers (R0
to R4). As a consequence, the bit packets are not aligned to
the register wordlength, and some packets even cross register
boundaries.
Suppose we want to compute the sum over the bit packets
within a loop. In standard C, this would require code as
shown in fig. 10. Due to the unaligned bit packets, the
register file pointer elem must not be incremented in every
loop iteration, but only when a bit packet crosses a register
boundary. Therefore, control code is required within the
loop, which is obviously highly undesirable with respect to
code quality.
int A[5];
int sum = 0, elem = 0, offset = 0, i;
for (i = 0; i < 8; i++)
{
  if (offset <= 6)
    sum += (A[elem] >> offset) & 0x03ff;
  else
    sum += ((A[elem] >> offset)
         | (A[elem+1] << (16 - offset))) & 0x03ff;
  offset += 10;
  if (offset > 15) { elem++; offset -= 16; }
}
Figure 10: Bit packet array access in a loop
In order to avoid such overhead, the NP instruction set architecture
provides means for indirect access to unaligned
bit packet arrays via a bit packet pointer register. In the
compiler, we again exploit this feature by CKFs. The modified
C code with CKFs for the above sum computation example
is given in fig. 11. The array A is declared with the
attribute register. This attribute instructs our compiler
backend to assign the whole array to the register file.
In contrast, a regular C compiler would store the array (like
other complex data structures) in memory. This concept of
register arrays is required, since the NP machine operations
using packet-level addressing only work on registers, not on
memory.
The variables PR1 and PR2 are pointers. By being operands
of the CKF INIT they are introduced to the backend as
pointers to bit packets. The backend uses the knowledge
about which pointer belongs to which element/array for lifetime
analysis in the register allocation. In the example, PR2
is used to traverse the different bit packets of array A, while
PR1 constantly points to the sum variable.
If the number of physical registers is lower than the number
of simultaneously required register variables, spill code will
be inserted. The register allocator uses the names of the
pointer registers in such a case to identify the register arrays
which have to be loaded into the register file for a given
indirect bit packet operation. The INIT CKF translates to
assembly code as the load of a constant into a register. In
this constant the name of the register, the offset, and the
width of the bit packet that the pointer points to are encoded.
In the example from fig. 11 the pointer PR2 points
to the first element of the bit packet array. The name of
the register where the bit packet is located is not known
before register allocation. Therefore the backend works on
symbolic addresses before register allocation. The symbolic
addresses are automatically replaced after the register allocation
phase. PAI(ADD, ...) is the CKF for a single indirect
addition, like in the C expression "*p += *q". The backend
creates a single machine operation for this CKF. In order to
keep the number of CKFs low, we specify the arithmetic operation
as the first parameter of the CKF instead of having
a dedicated CKF for each operation.
The CKF INC denotes the increment of a bit packet pointer.
Like the increment of a pointer in ANSI C, the pointer will
point to the next array element after the call to INC. Because
the NP supports bit packet pointer arithmetic in hardware,
this again requires only a single machine instruction,
independent of whether or not advancing the pointer requires
crossing a register boundary.
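The semantics of INC can be modeled on the host with a small sketch like the following (the struct and function names are assumptions for illustration; the real pointer encoding described above packs the same fields into a constant):

/* A bit packet pointer: (register index, bit offset, packet width). */
typedef struct {
    int reg;    /* index into the register file / register array */
    int off;    /* bit offset within that register, 0..15        */
    int width;  /* packet width in bits                          */
} bp_ptr;

/* Advance the pointer by one packet; the NP wordlength is 16 bits,
   so the offset may wrap into the next register. */
static void INC_sim(bp_ptr *p)
{
    p->off += p->width;
    while (p->off >= 16) {
        p->off -= 16;
        p->reg += 1;
    }
}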
Obviously, the source code in the example is specified in a
very low level programming style. However, the programmer
still gets a significant benefit from use of the compiler. First
of all, it permits the use of high level language constructs,
e.g. for control code and loops. In addition, address generation
and register allocation are performed by the compiler,
which keeps the code reusable and saves a lot of development
time.
register int A[5];
int *PR1, *PR2;
int sum = 0, i;
INIT(PR1, sum, 10);
INIT(PR2, A, 10);
for (i = 0; i < 8; i++)
{
  PAI(ADD, PR1, PR2);
  INC(PR2);
}
Figure 11: Bit packet array access with CKFs
4. FRONTEND DESIGN
Like most other compilers, the NP C compiler is subdivided
into a frontend and a backend part. The frontend is responsible
for source code analysis, generation of an intermediate
representation (IR), and machine-independent optimizations,
while the backend maps the machine-independent IR
into machine-specific assembly code (see section 5).
As a frontend, we use the LANCE compiler system developed
at the University of Dortmund [28]. We give a brief
description here for the sake of completeness. LANCE is a
machine-independent, optimizing C frontend system that
includes a backend interface for retargeting to different processors.
There is, however, no support for automatic retargeting.
The current version LANCE V2.0 comprises the
following main components:
ANSI C frontend: The C frontend analyzes the C source
code and generates a low-level, three address code IR.
In case of syntax or semantic errors in the C code,
error messages similar to those of GNU's gcc are emitted.
The IR is almost machine-independent; only the
bit width of the C data types and their memory alignment
have to be specified in the form of a configuration
file.
IR library: LANCE comprises a C++ class library for IR
access, analysis, and manipulation. This includes file
I/O, control and data flow analysis, symbol table maintenance,
and modification of IR statements. In addition,
there are auxiliary classes frequently required for
compiler tools, such as lists, sets, stacks, and graphs.
optimization tools: Based on the IR library, LANCE
contains a set of common "Dragon Book" [29] machine-independent
code optimizations, such as constant fold-
ing, dead code elimination, as well as jump and loop
optimizations. Dependent on the required optimization
level, the optimization tools can be called separately
or can be iterated via a shell script. Since all
optimization tools operate on the same IR format, new
optimizations can be plugged in at any time.
Backend interface: The backend interface transforms the
three address code IR into data flow trees (DFTs) of
maximum size. Each DFT represents a piece of computation
in the C code and comprises arguments, operations,
storage locations, as well as data dependencies
between those. The generated DFT format is compatible
with code generator generator tools like iburg and
olive. This feature strongly facilitates retargeting to
new processors.
What mainly distinguishes LANCE from other C frontends,
such as those provided with gcc [30] or lcc [32], is the executable
IR. The basic IR structure is three address code,
which consists of assignments, jumps, branches, labels, and
return statements, together with the corresponding symbol
table information for identifiers. The three address code
mainly serves to facilitate the implementation of IR
optimization tools, since the IR structure is much simpler
than that of the original C source language.
Executability is achieved by defining the IR itself as a very
low level, assembly-like subset of the C language. In the
IR generated by the C frontend, all high-level constructions
(e.g. loops, switch, and if-then-else statements) are replaced
by equivalent branch statement constructs. In addition, all
implicit loads and stores, address arithmetic for array and
structure accesses, as well as type casts are made explicit
in the IR. The SUIF compiler system [31] has similar C
export facilities for the IR, but it does not generate pure
three address code.
int main()
{
  static int A[16], B[16], C[16], D[16];
  register int ... ;
  register int ... ;
  register int ... ;
  for ( ... )
    ...
}
Figure 12: Example C source code
An example of a C source code (taken from DSPStone [33])
and a fragment of its corresponding IR are given in figs. 12 and
13. The IR contains auxiliary variables and labels inserted
by the frontend. All local identifiers have been assigned
a unique numerical suffix. As can be seen, the IR is still
valid low-level C code, which can be compiled, linked, and
executed on the compiler host machine.
The most significant advantage of this executable C-based IR
(in particular in the context of industrial compiler projects,
where correctness is more important than optimizations) is
that a validation methodology as sketched in fig. 14 can be
applied to check the correctness of the frontend part of the compiler.
The key idea is that both the original C program and
its IR are compiled with a native C compiler on the host
machine. The equivalence of the two executables is checked
by means of a comparison between their outputs for some
test input data. Any difference in the outputs indicates an
implementation error. For regression tests, this validation
process can be easily automated.
Although this approach naturally cannot provide a correctness
proof, it ensures a good fault coverage in practice, when
using a representative suite of C programs and test inputs.
In our case, we used a large and heterogeneous set of C applications
(including complex program packages like MP3,
JPEG, GSM, a BDD package, GNU
flex, bison, and gzip,
a VHDL parser, and a 6502 C compiler) to validate the C
frontend, all IR optimizations, as well as the backend interface
of LANCE. The latter is achieved by exporting the
generated DFTs in C syntax again.
static int _static_A_3[16], _static_B_4[16];
static int _static_C_5[16], _static_D_6[16];
int main()
{ char *t1, *t3;
  int t2, *t4;
  /* register int ... */
  /* register int ... */
  /* for ... */
  ...
L3:
  ...
  return 0;
}
Figure 13: Partial IR code for the C code from fig. 12
This has been very helpful from a practical point of view,
since non-executable IR formats cannot be validated at all
without processor-specific backends, instruction-set simulators,
and (frequently very slow) cross-simulation runs. Naturally,
validation of machine-specific backends still requires
instruction-set simulation. However, as the frontend part
typically contributes the largest part of the total compiler
source code, most compiler bugs may be expected to be fixed
already before that phase.
Figure 14: Frontend validation methodology: the original C source and the IR generated from it are both compiled into executables, which are run on the same test input data; their outputs are compared.
5. BACKEND DESIGN
The C compiler backend is subdivided into code selection
and register allocation modules. An instruction scheduler
has not been implemented so far. The code selector maps
data flow trees as generated by the LANCE frontend into
assembly instructions. As in many compilers for RISCs,
during this phase an infinite number of virtual registers is
assumed; these are later folded onto the available number of
physical registers by the register allocator.
5.1 Code selection
The code selector uses the widespread technique of tree pattern
matching with dynamic programming [24] for mapping
data flow trees (DFTs) into assembly code. The basic idea
in this approach is to represent the target machine instruction
set in the form of a cost-attributed tree grammar, and to
parse each given DFT with respect to that grammar. As a
result, an optimum derivation for the given cost metric, and
hence an optimal code selection, is obtained. The runtime
complexity is only linear in the DFT size. The tree parsing
process can be visualized as covering a DFT by a minimum
set of instruction pattern instances (fig. 15).
Figure 15: Visualization of DFT-based code selection: a) data flow tree, b) instruction patterns (ADD, MAC), c) optimal tree cover
For the implementation, we used the olive tool (an extension
of iburg [25] contained in the SPAM compiler [26]), which
generates code selector C source code for a given instruction
set, or tree grammar, respectively. Specifying the instruction
set with olive is convenient, since the tool permits attaching
action functions to the instruction patterns, which
facilitates book-keeping and assembly code emission.
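The following self-contained C sketch illustrates the idea of cost-based tree covering, independent of the olive tool. The node types, patterns, and costs are assumptions chosen to mirror fig. 15; the real code selector is generated from the full tree grammar.

#include <stdio.h>

enum op { LEAF, MUL, PLUS };

struct node {
    enum op op;
    struct node *l, *r;
    int cost;          /* minimal cost of computing this node into a reg */
    const char *rule;  /* pattern chosen for this node                   */
};

/* Bottom-up dynamic programming: label each node with its cheapest cover. */
static void label(struct node *n)
{
    if (n->op == LEAF) { n->cost = 0; n->rule = "reg"; return; }
    label(n->l);
    label(n->r);
    if (n->op == MUL) {
        n->cost = n->l->cost + n->r->cost + 1;
        n->rule = "MUL(reg,reg)";
    } else {                       /* PLUS */
        int add = n->l->cost + n->r->cost + 1;
        n->cost = add;
        n->rule = "ADD(reg,reg)";
        if (n->r->op == MUL) {     /* MAC covers the ADD and the MUL at once */
            int mac = n->l->cost + n->r->l->cost + n->r->r->cost + 1;
            if (mac < add) { n->cost = mac; n->rule = "MAC(reg,MUL(reg,reg))"; }
        }
    }
}

int main(void)
{
    struct node a = {LEAF}, b = {LEAF}, c = {LEAF};
    struct node m = {MUL, &a, &b};
    struct node p = {PLUS, &c, &m};
    label(&p);
    printf("root covered by %s, total cost %d\n", p.rule, p.cost);
    return 0;
}

For the tree c + (a * b), the bottom-up labeling selects the MAC pattern with total cost 1 instead of separate MUL and ADD patterns with cost 2.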
The LANCE frontend splits each C function into a set of basic
blocks, each of which contains an ordered list of DFTs.
The DFTs, which are directly generated in the format required
for olive, are passed to the generated code selector
and are translated into assembly code sequences one after
another. During this phase, also the calls to compiler-known
functions (CKFs) are detected and are directly transformed
into the corresponding NP assembly instructions. This step
is rather straightforward, since CKFs are simply identied
by their name. However, the code selector, in cooperation
with the register allocator, is still responsible for a correct
register mapping, since CKFs are called with symbolic C
variables instead of register names. The result of the code
selection phase is symbolic assembly code with references to
virtual registers. This code is passed to the register allocator
described in the following.
5.2 Register allocation
Although the NP shows a RISC-like basic architecture, the
classical graph coloring approach to global register allocation
[27] cannot be directly used. The reason is the need to
handle register arrays. As explained in section 3 (see also
figs. 9 and 11), register arrays arise from indirect addressing
in C programs, where unaligned bit packets are traversed
within loops. As a consequence, virtual registers containing
(fragments of) bit packet arrays have to be assigned to
contiguous windows in the physical register file.
In order to achieve this, the register allocator maintains two
sets of virtual registers: one for scalar values and one for
register arrays. All virtual registers are indexed by a unique
number, where each register array gets a dedicated, unique,
and contiguous index range. As usual, register allocation
starts with a lifetime analysis of virtual registers. Potential
conflicts in the form of overlapping live ranges are represented
in an interference graph, where each node represents
a virtual register, and each edge denotes a lifetime overlap.
The lifetime analysis is based on a def-use analysis of virtual
registers.
During lifetime analysis, special attention has to be paid to
bit packets indirectly addressed via register pointers, whose
values might not be known at compile time. In order to
ensure program correctness, all register array elements potentially
pointed to by some register pointer p are assumed
to be live while p is in use. Liveness of p is determined by
inspecting the pointer initializations in the calls to the compiler-known
function INIT (see fig. 11).
Due to the allocation constraints imposed by register ar-
rays, the mapping of virtual registers to physical registers
is based on a special multi-level graph coloring algorithm.
Physical registers are first assigned to those virtual registers
that belong to register arrays. This is necessary, since register
arrays put higher pressure on the register allocator
than scalar registers.
First, any node set in the original interference graph that
belongs to a certain register array is merged into a super-
node. Then, the interference graph is transformed into a
super-interference graph (SIG), while deleting all edges internal
to each supernode and all scalar virtual register nodes
and their incident edges (fig. 16).
Next, a weight is assigned to each supernode n, which is
equal to the number of internal virtual registers of n plus the
maximum number of internal virtual registers of n's neighbors
in the SIG. The supernodes are mapped to physical
registers according to descending weights. This heuristic is
motivated by the fact that supernodes of a lower weight are
generally easier to allocate, because they cause fewer lifetime
conflicts. Furthermore, in case of a conflict, it is cheaper to
spill/reload a smaller array instead of a larger one. For any
Figure 16: Construction of the SIG: In this example, the
virtual register sets {R1, R2} and {R3, R4, R5} are supposed
to represent two register arrays, while R6 refers to a scalar
variable.
supernode n with r internal virtual registers, a contiguous
range in the register file is assigned. Since there may be
multiple such windows available at a certain point in time,
the selection of this range is based on a best fit strategy
in order to ensure a tight packing of register arrays in the
register file, i.e. in order to avoid too many spills.
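A minimal sketch of such a best-fit window search is shown below (NREGS, the is_free[] occupancy array, and the function name are illustrative assumptions; the actual allocator works on the interference information described above):

#define NREGS 32

/* Find the start index of the smallest free contiguous window that
   still holds r registers, or return -1 if no such window exists. */
static int best_fit_window(const int is_free[NREGS], int r)
{
    int best_start = -1, best_len = NREGS + 1;
    int i = 0;
    while (i < NREGS) {
        if (!is_free[i]) { i++; continue; }
        int start = i, len = 0;
        while (i < NREGS && is_free[i]) { len++; i++; }
        if (len >= r && len < best_len) { best_len = len; best_start = start; }
    }
    return best_start;
}

Choosing the smallest sufficient gap keeps larger gaps available for register arrays allocated later, which is the tight-packing effect mentioned above.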
In our approach any element of a register array can be accessed
in two different ways: either directly (e.g. A[3]) or
indirectly by the use of a bit packet pointer. If there are
insufficient physical registers when indirect access is used,
spill code is generated for all virtual registers within a register
array; otherwise only the particular virtual register is
spilled. After register allocation all symbolic addresses for
bit packets have to be recalculated, because now they specify
a physical register within the register file instead of the
name of a virtual register. The size of arrays of bit packets
is restricted by the size of the register file. Therefore the
compiler needs to reject code for which no register allocation
can be done. Note that for such code it would be impossible
to find equivalent assembly code even manually. Such a
case can be encountered for too large register arrays or for
a control flow which gives multiple definitions of a pointer
variable to multiple register arrays. In such a case all possibly
accessed register arrays must be assumed to be live at
the same time. Therefore the register file has to be large
enough to hold all of them simultaneously.
After register allocation for the supernodes, all remaining
virtual registers in the original interference graph are mapped
to physical registers by traditional graph coloring, while inserting
spill code whenever required.
6. RESULTS
The C compiler for the NP described in the previous sections
is fully operational. The performance of the generated code
has been measured by means of a cycle-true NP instruction
set simulator for a set of test programs. These mainly include
arithmetic operations and checksum computation on
bitstreams. The test programs are relatively small, due to
the effort required to rewrite the source code with compiler-known
functions (CKFs).
As may be expected, the quality of compiler-generated code
largely depends on the clever use of the CKFs and the underlying
register array concept. When using CKFs without
specific knowledge of the application, the performance
overhead of compiled code may be several hundred percent,
which is clearly not acceptable for the intended application
domain. This is due to a massive increase in register pressure
when too many register arrays are simultaneously live,
which naturally implies a huge amount of spill code. In this
case, even using standard C programs (without the use of
CKFs) might well result in more efficient code.
On the other hand, a careful use of CKFs, as derived from
detailed application knowledge, generally leads to a small
performance overhead on the order of only 10%. We observed
that this overhead can be reduced even further by
means of instruction scheduling techniques that reduce register
lifetimes (and thereby spill code), as well as by peephole
optimizations, which so far have not been implemented.
It is also interesting to consider the improvement offered by
CKFs as compared to regular C code. Table 1 shows the
performance for six test routines. These have been specified
in C and have been compiled into NP machine code.
Columns 2 and 3 give the performance (clock cycles) without
and with CKFs and register arrays enabled, respectively.
Column 4 shows the performance gain in percent.
        without CKFs   with CKFs   gain %
...     ...            ...         ...
...     43             29          33
prg4    853            639         25
prg5    103            81          21
prg6    156            106         32
Table 1: Experimental performance results
The use of packet-level addressing resulted in an average
performance gain of 28 % over the standard C reference im-
plementations. Naturally, this mainly has to be attributed
to the NP hardware itself. However, from a system-level
perspective it has been very important to prove that this
performance gain can also be achieved by means of compiled
C programs instead of hand-written assembly code.
As a result of our evaluation, we believe that the introduction
of CKFs and register arrays represents a reasonable
compromise between programming effort and code quality.
CKFs give the programmer direct access to dedicated instructions
which are important for optimizing the "hot spots"
in a C application program, while the compiler still performs
the otherwise time-consuming task of register allocation.
For non-critical program parts, where high performance bit-level
operations are hardly an issue, the productivity gain
oered by the compiler versus assembly programming clearly
compensates the potential loss in code quality.
7. CONCLUSIONS AND FUTURE WORK
Modern embedded systems are frequently designed on the
basis of programmable ASIPs, which allow for high flexibility
and IP reuse. Compiler support for ASIP software development
is urgently required in order to avoid time-intensive
assembly programming. However, special compiler backend
techniques have to be developed in order to make optimal
use of the dedicated architectural features of ASIPs.
In this contribution, we have outlined compiler challenges
encountered for Network Processors, a new class of ASIPs
that allow for efficient protocol processing by means of packet-level
addressing. We have described the implementation
of a C compiler for a representative, industrial NP. The
main concepts in this compiler, in order to make packet-level
addressing accessible at the C language level, are the
use of compiler-known functions and a special register allocation
technique. Experimental results indicate that these
techniques work in practice, so that the processor features
are well exploited. Although the detailed implementation is
rather machine-specific, we believe that the main techniques
can be easily ported to similar forthcoming NPs.
Improved versions of the NP C compiler are already planned.
Ongoing work deals with gradually replacing the pragmatic
approach of compiler-known functions with more sophisticated
code selection techniques, capable of directly mapping
complex bit masking operations into single machine instruc-
tions. This will be enabled by the use of special tree grammars
that model the instruction set for the code selector.
In addition, we plan to include a technique similar to register
pipelining [34] in order to reduce the register-memory
traffic for multi-register bit packets, and several peephole
optimizations are being developed in order to further close
the quality gap between compiled code and hand-written
assembly code.
Acknowledgments
The C compiler described in this paper has been developed
at Informatik Centrum Dortmund (ICD) for Infineon Technologies
AG (Munich), whose project funding is gratefully
acknowledged. The required assembler and simulator tools
have been provided by Frank Engel, TU Dresden. The authors
would also like to thank Yue Zhang, who contributed
to the test of the compiler backend.
8. REFERENCES
--R
Tensilica Inc.
Austria Mikro Systeme International: asic.
Automatic Instruction Code Generation based on Trellis Diagrams
Optimal Code Generation for Embedded Memory Non-Homogeneous Register Architectures
Code Optimization Techniques for Embedded DSP Microprocessors
Conflict Modeling and Instruction Scheduling in Code Generation for In-House DSP Cores
Constraint Driven Code Selection for Fixed-Point DSPs
Optimizing Stack Frame Accesses for Processors with Restricted Addressing Modes
Storage Assignment to Decrease Code Size
A Uniform Optimization Technique for Offset Assignment Problems
Minimizing Cost of Local Variables Access for DSP Processors
Compiling for SIMD Within a Register
Code Selection for Media Processors with SIMD Instructions
Exploiting Superword Level Parallelism with Multimedia Instruction Sets
A Perspective on Market Requirements
Dynamically Exploiting Narrow Width Operands to Improve Processor Power and Performance HPCA-5
BitValue Inference: Detecting and Exploiting Narrow Bitwidth Computations
Bitwidth Analysis with Application to Silicon Compilation
A Coprocessor for Streaming multimedia Acceleration
A New Network Processor Architecture for High-Speed Communications
Retargetable Compilers for Embedded Core Processors
Language and Compiler for Optimizing Datapath Widths of Embedded Systems
Engineering a Simple, Efficient Code-Generator Generator
Code Optimization Libraries for Retargetable Compilation for Embedded Digital Signal Processors
Register Allocation via Graph Coloring
Code Optimization Techniques for Embedded Processors
The Stanford Compiler Group: suif.
A Retargetable C Compiler: Design And Implementation
Improving Register Allocation for Subscripted Variables
--TR
Compilers: principles, techniques, and tools
Code generation using tree matching and dynamic programming
Improving register allocation for subscripted variables
Optimizing stack frame accesses for processors with restricted addressing modes
Engineering a simple, efficient code-generator generator
Register allocation via graph coloring
Storage assignment to decrease code size
Conflict modelling and instruction scheduling in code generation for in-house DSP cores
Code optimization techniques for embedded DSP microprocessors
Optimal code generation for embedded memory non-homogeneous register architectures
A uniform optimization technique for offset assignment problems
PipeRench
Constraint driven code selection for fixed-point DSPs
Minimizing cost of local variables access for DSP-processors
Code selection for media processors with SIMD instructions
Bitwidth analysis with application to silicon compilation
Exploiting superword level parallelism with multimedia instruction sets
Network processors
Code Optimization Techniques for Embedded Processors
Retargetable Compilers for Embedded Core Processors
A Retargetable C Compiler
PipeRench
Compiling for SIMD Within a Register
BitValue Inference
Dynamically Exploiting Narrow Width Operands to Improve Processor Power and Performance
Code optimization libraries for retargetable compilation for embedded digital signal processors
--CTR
Xiaotong Zhuang , Santosh Pande, Effective thread management on network processors with compiler analysis, ACM SIGPLAN Notices, v.41 n.7, July 2006
Xiaotong Zhuang , Santosh Pande, Balancing register allocation across threads for a multithreaded network processor, ACM SIGPLAN Notices, v.39 n.6, May 2004
Jinhwan Kim , Yunheung Paek , Gangryung Uh, Code optimizations for a VLIW-style network processing unit, SoftwarePractice & Experience, v.34 n.9, p.847-874, 25 July 2004
Jinhwan Kim , Sungjoon Jung , Yunheung Paek , Gang-Ryung Uh, Experience with a retargetable compiler for a commercial network processor, Proceedings of the 2002 international conference on Compilers, architecture, and synthesis for embedded systems, October 08-11, 2002, Grenoble, France
Sriraman Tallam , Rajiv Gupta, Bitwidth aware global register allocation, ACM SIGPLAN Notices, v.38 n.1, p.85-96, January
section instruction set extension of ARM for embedded applications, Proceedings of the 2002 international conference on Compilers, architecture, and synthesis for embedded systems, October 08-11, 2002, Grenoble, France
Bengu Li , Rajiv Gupta, Simple offset assignment in presence of subword data, Proceedings of the international conference on Compilers, architecture and synthesis for embedded systems, October 30-November 01, 2003, San Jose, California, USA
V. Krishna Nandivada , Jens Palsberg, Efficient spill code for SDRAM, Proceedings of the international conference on Compilers, architecture and synthesis for embedded systems, October 30-November 01, 2003, San Jose, California, USA
Chidamber Kulkarni , Matthias Gries , Christian Sauer , Kurt Keutzer, Programming challenges in network processor deployment, Proceedings of the international conference on Compilers, architecture and synthesis for embedded systems, October 30-November 01, 2003, San Jose, California, USA | compilers;embedded processors;network processors |
384245 | Improving memory performance of sorting algorithms. | Memory hierarchy considerations during sorting algorithm design and implementation play an important role in significantly improving execution performance. Existing algorithms mainly attempt to reduce capacity misses on direct-mapped caches. To reduce other types of cache misses that occur in the more common set-associative caches and the TLB, we restructure the mergesort and quicksort algorithms further by integrating tiling, padding, and buffering techniques and by repartitioning the data set. Our study shows that substantial performance improvements can be obtained using our new methods. | INTRODUCTION
Sorting operations are fundamental in many large scale scientific and commercial
applications. Sorting algorithms are highly sensitive to the memory hierarchy of
the computer architecture on which the algorithms are executed, as well as sensitive
to the types of data sets. Restructuring standard and algorithmically efficient
sorting algorithms (such as mergesort and quicksort) to exploit cache locality is
an effective approach for improving performance on high-end systems. Such existing
restructured algorithms (e.g., [4]) mainly attempt to reduce capacity misses
on direct-mapped caches. (This work is supported in part by the National Science
Foundation under grants CCR-9400719, CCR-9812187, and EIA-9977030, by the Air Force
Office of Scientific Research under grant AFOSR-95-1-0215, and by Sun Microsystems
under grant EDUE-NAFO-980405.) In this paper, we report substantial performance improvement
obtained by further exploiting memory locality to reduce other types of
cache misses, such as conflict misses and TLB misses. We present several restructured
mergesort and quicksort algorithms and their implementations by fully using
existing processor hardware facilities (such as cache associativity and TLB), by integrating
tiling and padding techniques, and by properly partitioning the data set
for cache optimizations. Sorting as a fundamental subroutine, is often repeatedly
used for many application programs. Thus, in order to gain the best performance,
cache-effective algorithms and their implementations should be done carefully and
precisely at the algorithm design and programming level.
We focus on restructuring mergesort and quicksort algorithms for cache opti-
mizations. Our results and contributions are summarized as follows:
-Applying padding techniques, we are able to effectively reduce cache conflict
misses and TLB misses, which are not fully considered in the algorithm design
of the tiled mergesort and the multi-mergesort [4]. For our two mergesort alter-
natives, optimizations improve both cache and overall performance. Our experiments
on different high-end workstations show that some algorithms achieve up
to 70% execution time reductions compared with the base mergesort, and up to
54% reductions versus the fastest of the tiled and multi-mergesort algorithms.
-Partitioning the data set based on data ranges, we are able to exploit cache
locality of quicksort on unbalanced data sets. Our two quicksort alternatives
significantly outperform the memory-tuned quicksort [4] and the flashsort [6] on
unbalanced data sets.
-Cache-effective sorting algorithm design is both architecture and data set depen-
dent. The algorithm design should include parameters such as the data cache size
and its associativity, TLB size and its associativity, the ratio between the data
set size and the cache size, as well as others. Our measurements and simulations
demonstrate the interactions between the algorithms and the machines.
-The essential issue to be considered in sorting algorithm design is the trade-off
between the reduction of cache misses and the increase in instruction count.
We give an execution timing model to quantitatively predict the trade-offs. We
also give analytical predictions of the number of cache misses for the sorting
algorithms before and after the cache optimizations. We show that an increase
in instruction count due to an effective cache optimization can be much cheaper
than cycles lost from different types of cache misses.
2.
A data set consists of a number of elements. One element may be a 4-byte integer,
an 8-byte integer, a 4-byte floating point number, or an 8-byte double floating point
number. We use the same unit, element, to specify the cache capacity. Because the
sizes of caches and cache lines are always a multiple of an element in practice, this
identical unit is practically meaningful to both architects and application program-
mers, and makes the discussions straight-forward. Here are the algorithmic and
architectural parameters we will use to describe cache-effective sorting algorithms:
the size of the data set, C: data cache size, L: the size of a cache line, K:
Improving Memory Performance of Sorting Algorithms \Delta 3
cache associativity, number of set entries in the TLB cache, K
associativity, and P s : a memory page size.
Besides algorithm analysis and performance measurements on different high-end
workstations, we have also conducted simulations to provide performance insights.
The SimpleScalar tool set [1] is a family of simulators for studying interactions between
application programs and computer architectures. The simulation tools take
an application program's binaries compiled for the SimpleScalar Instruction Set
Architecture (a close derivative of the MIPS instruction set) and generate statistics
concerning the program in relation with the simulated architecture. The statistics
generated include many detailed execution traces which are not available from
measurements on a computer, such as cache misses on L1, L2 and TLB.
We run sorting algorithms on different simulated architectures with memory hierarchies
similar to that of high-end workstations to observe the following performance
factors:
-L1 or L2 cache misses per element: to compare the data cache misses.
-TLB misses per element: to compare the TLB misses.
-Instruction count per element: to compare the algorithmic complexities.
-Reduction rate of total execution cycles: to compare the cycles saved in percentage
against the base mergesort or the memory-tuned quicksort.
The algorithms are compared and evaluated experimentally and analytically. We
tested the sorting algorithms on a variety of data sets, each of which uses 8-byte
integer elements. Here are the 9 data sets we have used (Some probability density
functions of number generators are described in [7].):
(1) Random: the data set is obtained by calling the random number generator random()
in the C library, which returns integers in the range 0 to 2^31 - 1.
(2) Equilikely: function Equilikely(a,b) returns integers in the range a to b.
(3) Bernoulli: function Bernoulli(p) returns integers 0 or 1.
(4) ... : returns integers 0, 1, 2, ...
(5) Pascal: function Pascal(N,p) returns integers 0, 1, 2, ...
(6) ... : returns integers 0, 1, 2, ..., N.
(7) ... : returns integers 0, 1, 2, ...
(8) ... in the data set.
(9) Unbalanced: the function returns integers in the range of 0 to 2^... by
calling rand() from the C library, where N is the data set size; and returns
integers MAX/100 + i for ...
3. CACHE-EFFECTIVE MERGESORT ALGORITHMS
In this section, we first briefly overview the two existing mergesort algorithms for
their cache locality, as well as their merits and limits. We present two new mergesort
alternatives to address these limits. The experimental performance evaluation by
measurements will be presented in section 5.
3.1 Tiled mergesort and multi-mergesort
LaMarca and Ladner [4] present two mergesort algorithms to effectively use caches.
The first one is called tiled mergesort. The basic idea is to partition the data set
into subarrays to sort individually mainly for two purposes: to avoid the capacity
misses, and to fully use the data loaded in the cache before its replacement. The
algorithm is divided into two phases. In the first phase, subarrays of length C/2
(half the cache size) are sorted by the base mergesort algorithm to exploit temporal
locality. The algorithm returns to the base mergesort without considering cache
locality in the second phase to complete the sorting of the entire data set.
The second mergesort, called multi-mergesort, addresses the limits of the tiled
mergesort. In this algorithm, the first phase is the same as the first phase of the
tiled mergesort. In the second phase, a multi-way merge method is used to merge
all the sorted subarrays together in a single pass. A priority queue is used to hold
the heads of the lists to be merged. This algorithm exploits cache locality well
when the number of subarrays in the second phase is less than C/2. However, the
instruction count is significantly increased in this algorithm.
Conducting experiments and analysis of the two mergesort algorithms, we show
that the sorting performance can be further improved for two reasons. First, both
algorithms significantly reduce capacity misses, but do not sufficiently reduce conflict
misses. In mergesort, a basic operation is to merge two sorted subarrays to a
destination array. In a cache with low associativity, conflict mapping occurs frequently
among the elements in the three subarrays. Second, reducing TLB misses
is not considered in the algorithms. Even when the data set is moderately large, the
TLB misses may severely degrade execution performance in addition to the effect
of normal data cache misses. Our experiments show that the performance improvement
of the multi-merge algorithm on several machines is modest - although it
decreases the data cache misses, the heap structure significantly increases the TLB
misses.
3.2 New mergesort alternatives
With the aim of reducing conflict misses and TLB misses while minimizing the
instruction count increase, we present two new alternatives to further restructure
the mergesort for cache locality: tiled mergesort with padding and multi-mergesort
with TLB padding.
3.2.1 Tiled mergesort with padding. Padding is a technique that modifies the
data layout of a program so that conflict misses are reduced or eliminated. The
data layout modification can be done at run-time by system software [2; 10] or at
compile-time by compiler optimization [8]. Padding at the algorithm level with a
full understanding of data structures is expected to significantly outperform optimization
from the above system methods [11].
In the second phase of the tiled mergesort, pairs of sorted subarrays are sorted
and merged into a destination array. One element at a time from each of the two
subarrays is selected for a sorting comparison in sequence. These data elements in
the two different subarrays and the destination array are potentially in conflicting
cache blocks because they may be mapped to the same block in a direct-mapped
cache and in a 2-way associative cache.
On a direct-mapped cache, the total number of conflict misses of the tiled mergesort
in the worst case is approximately
N \lceil \log_2(N/C) \rceil \left(1 + \frac{1}{L}\right),    (1)
where \lceil \log_2(N/C) \rceil is the number of passes in the second phase of the sorting, and
1 + 1/L represents 1 conflict miss per comparison and 1/L conflict misses per element
placement into the destination array after the comparison, respectively.
In order to change the base addresses of these potentially conflicting cache blocks,
we insert L elements (or a cache line space) to separate every section of C elements
in the data set in the second phase of the tiled mergesort. These padding elements
can significantly reduce the cache conflicts in the second phase of the mergesort.
Compared with the data size, the number of padding elements is insignificant. In
addition, the instruction count increment (resulting from moving each element in a
subarray to its new position after the padding) is also trivial. We call this method
tiled mergesort with padding.
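A minimal sketch of this layout change is shown below (the element type, the function name, and the choice to build a padded copy rather than pad in place are assumptions for illustration; CSIZE and LSIZE stand for C and L measured in elements):

#include <stdlib.h>
#include <string.h>

typedef long long element;   /* 8-byte element */

/* Copy the data into a layout with one cache line (LSIZE elements) of
   padding after every CSIZE elements. */
static element *make_padded_copy(const element *a, size_t n,
                                 size_t CSIZE, size_t LSIZE)
{
    size_t sections = (n + CSIZE - 1) / CSIZE;
    element *p = malloc((n + sections * LSIZE) * sizeof *p);
    if (p == NULL) return NULL;
    for (size_t s = 0; s < sections; s++) {
        size_t len = (s + 1) * CSIZE <= n ? CSIZE : n - s * CSIZE;
        memcpy(p + s * (CSIZE + LSIZE), a + s * CSIZE, len * sizeof *p);
    }
    return p;
}

After this transformation, element i of the original array is found at index i + (i / CSIZE) * LSIZE, so the merge loops of the second phase only need an adjusted index computation.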
On a direct-mapped cache, the total number of conflict misses for the tiled mergesort
with padding is at most
\frac{3}{4} N \lceil \log_2(N/C) \rceil,    (2)
where \lceil \log_2(N/C) \rceil is the number of passes in the second phase of the sorting and
3/4 represents the number of conflict misses per element. After the padding is added,
the one conflict miss per comparison is reduced to 3/4, and the 1/L conflict misses
from the placement in (1) are eliminated. Comparing the above two approximations
misses of the tiled mergesort by about 25%. (Our experimental results show the
execution times of the tiled mergesort on the Sun Ultra 5, a workstation with a
direct-mapped cache, were reduced 23% to 68% by the tiled mergesort with padding.
The execution time reductions mainly come from the reductions of conflict misses.)
Figure
1 shows an example of how the data layout of two subarrays in the second
phase of tiled mergesort is modified by padding so that conflict misses are reduced.
In this example, a direct-mapped cache holds 4 elements. In the figure, the same
type of lines represent a pair comparison and the action to store the selected element
in the destination array. The letter "m" in the figure represents a cache miss.
Without padding, there are 8 conflict misses when merging the two sorted subarrays
into the destination array; there are only 4 after padding is added.
Figure
2 shows the L1 (left figure) and the L2 (right figure) misses of the base
mergesort, the tiled mergesort, and the tiled mergesort with padding on a simulated
Sun Ultra 5 machine by the SimpleScalar. On this machine, L1 is a direct-mapped
cache of 16 KBytes, and L2 is a 2-way associative cache of 256 KBytes. The
experiments show that the padding reduces the L1 cache misses by about 23%
compared with the base mergesort and the tiled mergesort. These misses are conflict
misses, which cannot be reduced by the tiling. The L2 cache miss reduction by the
tiled mergesort with padding is almost the same as that by the tiled mergesort,
which means that the padding is not very effective in reducing conflict misses in L2
on this machine. This is because the conflict misses are significantly reduced in L2
by the 2-way associative cache.
Fig. 1. Data layout of subarrays is modified by padding to reduce the conflict misses (before padding vs. after padding: the two sorted subarrays, the cache, and the destination array; conflicting mappings are removed by the padding).
Fig. 2. Simulation comparisons of the L1 cache misses (left figure) and L2 misses (right figure) of the mergesort algorithms on the Random data set on the simulated Sun Ultra 5 (misses per element vs. data set size, 1K to 4M elements, for the base mergesort, the tiled mergesort, and the tiled mergesort with padding). The L1 cache miss curves (left figure) of the base mergesort and the tiled mergesort are overlapped.
The capacity misses in the second phase of the tiled mergesort are unavoidable
without a complex data structure, because the size of the working set (two subarrays
and a destination array) is normally larger than the cache size. As we have shown,
the potential conflict misses could be reduced by padding in this phase. However,
the padding may not completely eliminate the conflict misses due to the randomness
of the order in the data sets. Despite this, our experimental results presented
in section 5 and the appendix using the 9 different data sets consistently show the
effectiveness of the padding on the Sun Ultra 5.
3.2.2 Multi-mergesort with TLB padding. In the second phase of the multi-mergesort
algorithm, multiple subarrays are used only once to complete the sorting of the entire
data set to effectively use the cache. This single pass makes use of a heap
to hold the heads of the multiple lists. However, since the heads come from all
the lists being multi-merged, the practical working set is much larger than that
of the base mergesort (where only three subarrays are involved at a time). This
large working set causes TLB misses which degrade performance. (We will explain
the TLB structure following this paragraph). Our experiments indicate that the
multi-mergesort significantly decreases the number of data cache misses. However,
it also increases the TLB misses, which offsets the performance gain. Although a
rise in the instruction count leads to additional CPU cycles in the multi-mergesort,
the performance degradation of the algorithm comes mainly from the high number
of TLB misses since memory accesses are much more expensive than CPU cycles.
The TLB (Translation-Lookaside Buffer) is a special cache that stores the most
recently used virtual-physical page translations for memory accesses. The TLB is
generally a small fully associative or set-associative cache. Each entry points to a
memory page of 4K to 64KBytes. A TLB cache miss forces the system to retrieve
the missing translation from the page table in the memory, and then to select a TLB
entry to replace. When the data to be accessed is larger than the amount of data
that all the memory pages in the TLB can hold, TLB misses occur. For example,
the TLB cache of the Sun UltraSparc-IIi processor holds 64 fully associative entries,
each of which points to a page of 8 KBytes (P_s = 1K elements).
The 64 pages in the TLB of the Sun UltraSparc-IIi processor hold 64 × 1K = 64K
elements, which represents a moderately-sized data set for sorting. In practice, we
have more than one data array being operated on at a time. Thus, the TLB can
hold a limited amount of data in sorting.
Some processors' TLBs are not fully associative, but set-associative. For example,
the TLB in the Pentium II and Pentium III processors is 4-way associative (K_TLB =
4). A simple blocking based on the number of TLB entries does not work well
because multiple pages within a TLB space range may map to the same TLB set
entry and cause TLB cache conflict misses.
In the second phase of the multi-mergesort, we insert P_s elements (or a page
space) to separate every sorted subarray in the data set in order to reduce or
eliminate the TLB cache conflict misses. The padding changes the base addresses
of these lists in page units to avoid potential TLB conflict misses.
Figure 3 gives an example of the padding for TLB, where the TLB is a direct-mapped
cache of 8 entries, and the number of elements of each list is a multiple of
8 pages of elements. Before padding, each of the lists in the data set is mapped to the
same TLB entry. After padding, these lists are mapped to different TLB entries.
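The effect of the TLB padding on the data layout can be sketched as follows (SUBLEN, PS, and the function name are illustrative assumptions; PS stands for P_s measured in elements):

typedef long long element;   /* 8-byte element */

/* With one page of padding inserted after every sorted subarray of
   SUBLEN elements, subarray k starts at index k * (SUBLEN + PS) of the
   padded array, so consecutive subarrays no longer collide on the same
   TLB set. */
static element *subarray_base(element *padded, long k, long SUBLEN, long PS)
{
    return padded + (long long)k * (SUBLEN + PS);
}

Because each inserted gap is a whole page, the pages of successive subarrays are shifted to different TLB sets, which removes the conflicts illustrated in fig. 3.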
Fig. 3. Padding for TLB: the data layout is modified by inserting a page space (P_s elements) at multiple locations in the data array (before padding vs. after padding), where K_TLB = 1.
When the multi-mergesort operates on a large data set, if the size of each list is
a multiple of T_s, the number of TLB misses per element is close to 1. After the
TLB padding, the average TLB miss per element of the multi-mergesort algorithm
becomes approximately
\frac{A}{A + K_{TLB}},    (3)
where A, which depends on T_s, is the average number of misses for each TLB set entry. The above
approximation is further derived to
...    (4)
Figure 4 shows the L2 misses and TLB misses of the 5 mergesort algorithms on the
simulated Pentium II by the SimpleScalar, where L1 is a 4-way set associative cache
of KBytes, L2 is a 4-way associative cache of 256 KBytes, and TLB is a 4-way
set associative cache of 64 entries. The simulation shows that the multi-mergesort
and the multi-mergesort with TLB padding had the lowest L2 cache misses (see the
left figure in Figure 4). The multi-mergesort had the highest TLB misses; these
are significantly reduced by the TLB padding (see the right figure in Figure 4).
Here is an example verifying approximation (4) for the TLB misses of the
multi-mergesort. Substituting the parameters of the Pentium II into the approximation
gives a predicted number of TLB misses per element for the
multi-mergesort with TLB padding that is very close to our experimental result,
0.47 (in the right figure of Figure 4). We will show in section 5 that the multi-
mergesort with TLB padding significantly reduces the TLB misses and improves
overall execution performance.
3.3 Trade-offs between instruction count increase and performance gain
Figure 5 shows the instruction counts and the total cycles saved (in percentage) of the
5 mergesort algorithms compared with the base mergesort on the simulated Pentium
II. The simulation shows that the multi-mergesort had the highest instruction count,
while the tiled mergesort had the lowest instruction counts. Taking advantage of
low L2 misses of the multi-mergesort and significantly reducing the TLB misses
by padding, the multi-mergesort with TLB padding saved cycles by about 40% on
large data sets compared to the base mergesort even though it has a relatively high
instruction count. We also show that the tiled-mergesort with padding did not gain
performance improvement on the Pentium II. This is because this machine has a
4-way set associative cache where conflict misses are not major concerns.
Fig. 4. Simulation comparisons of the L2 cache misses (left figure) and TLB misses (right figure)
of the mergesort algorithms on the Random data set on the simulated Pentium II.
Fig. 5. Simulation comparisons of the instruction counts (left figure) and saved cycles in percentage
(right figure) of the mergesort algorithms on the Random data set on the simulated Pentium
II.
4. CACHE-EFFECTIVE QUICKSORT
We again first briefly evaluate the two existing quicksort algorithms concerning
their merits and limits, including their cache locality. We present two new quicksort
alternatives for further memory performance improvement. Experimental results
will be reported in the next section.
4.1 Memory-tuned quicksort and multi-quicksort
LaMarca and Ladner in the same paper [4] present two quicksort algorithms for
cache optimization. The first one is called memory-tuned quicksort, which is a
modification of the base quicksort [9]. Instead of saving small subarrays to sort
in the end, the memory-tuned quicksort sorts these subarrays when they are first
encountered in order to reuse the data elements in the cache.
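The following C sketch illustrates the idea; it is a simplified illustration, not LaMarca and Ladner's code. A plain recursive quicksort switches to insertion sort as soon as a subarray becomes small, so the small subarray is sorted while its elements are still cached; the cutoff value is arbitrary.

#define SMALL 32                         /* cutoff below which we switch; arbitrary */

static void insertion_sort(int *a, int lo, int hi) {
    for (int i = lo + 1; i <= hi; i++) {
        int key = a[i], j = i - 1;
        while (j >= lo && a[j] > key) { a[j + 1] = a[j]; j--; }
        a[j + 1] = key;
    }
}

void memory_tuned_quicksort(int *a, int lo, int hi) {
    if (hi - lo + 1 <= SMALL) {
        insertion_sort(a, lo, hi);       /* sort the small subarray now, while it is cached */
        return;
    }
    int pivot = a[(lo + hi) / 2], i = lo, j = hi;
    while (i <= j) {                     /* standard two-way partition */
        while (a[i] < pivot) i++;
        while (a[j] > pivot) j--;
        if (i <= j) { int t = a[i]; a[i] = a[j]; a[j] = t; i++; j--; }
    }
    memory_tuned_quicksort(a, lo, j);
    memory_tuned_quicksort(a, i, hi);
}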
The second algorithm is called multi-quicksort. This algorithm applies a single
pass to divide the full data set into multiple subarrays, each of which is hoped to
be smaller than the cache capacity.
The performance gain of these two algorithms from experiments reported in [4] is
modest. We implemented the two algorithms on simulated machines and on various
high-end workstations, and obtained consistent performance. We also found that
the quicksort and its alternatives for cache optimizations are highly sensitive to the
types of data sets. These algorithms did not work well on unbalanced data sets.
4.2 New quicksort alternatives
In practice, the quicksort algorithms exploit cache locality well on balanced data.
A challenge is to make the quicksort perform well on unbalanced data sets. We
present two quicksort alternatives for cache optimizations which work well on both
balanced and unbalanced data sets.
4.2.1 Flash Quicksort. Flashsort [6] is extremely fast for sorting balanced data
sets. The maximum and minimum values are first identified in the data set to
identify the data range. The data range is then evenly divided into classes to form
subarrays. The algorithm consists of three steps: "classification" to determine the
size of each class, "permutation" to move each element into its class by using a single
temporary variable to hold the replaced element, and "straight insertion" to sort
elements in each class by using Sedgewick's insertion sort [9]. The reason this algorithm
works well on balanced data sets is because the numbers of elements stored in
the subarrays after the first two steps are quite similar and are sufficiently small to
fit the cache capacity. This makes the flashsort highly effective (O(N)). However,
when the data set is not balanced, unbalanced numbers of elements among the
subarrays are generated, causing ineffective cache usage and making the flashsort
as slow as the insertion sort (O(N^2)) in the worst case.
Compared with the pivoting process of the quicksort, the classification step of
the flashsort is more likely to generate balanced subarrays, which favors
cache optimization. On the other hand, the quicksort outperforms the insertion
sort on unbalanced subarrays. Taking advantage of both the flashsort and the
quicksort, we present a new quicksort alternative called flash quicksort, where
the first two steps are the same as the ones in the flashsort, and the last step uses
the quicksort to sort elements in each class.
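A compact C sketch of the flash quicksort follows. It is an illustration only: the number of classes m is left as a parameter, and the permutation step is written with an auxiliary array for brevity rather than the in-place cycle-following of the original flashsort.

#include <stdlib.h>
#include <string.h>

static int cmp_int(const void *p, const void *q) {
    int a = *(const int *)p, b = *(const int *)q;
    return (a > b) - (a < b);
}

/* Flash quicksort sketch: flashsort-style classification and permutation,
 * with each class finished by quicksort instead of insertion sort. */
void flash_quicksort(int *a, long n, int m) {
    if (n < 2) return;
    int lo = a[0], hi = a[0];
    for (long i = 1; i < n; i++) {       /* find the data range */
        if (a[i] < lo) lo = a[i];
        if (a[i] > hi) hi = a[i];
    }
    if (lo == hi) return;                /* all elements equal */

    long *count = calloc(m + 1, sizeof *count);
    long *next  = malloc((m + 1) * sizeof *next);
    int  *tmp   = malloc(n * sizeof *tmp);
    double scale = (double)(m - 1) / ((double)hi - (double)lo);

    for (long i = 0; i < n; i++) {       /* classification: count class sizes */
        int k = (int)(scale * ((double)a[i] - lo));
        count[k + 1]++;
    }
    for (int c = 1; c <= m; c++) count[c] += count[c - 1];   /* class start offsets */
    memcpy(next, count, (m + 1) * sizeof *next);
    for (long i = 0; i < n; i++) {       /* permutation, via an auxiliary array */
        int k = (int)(scale * ((double)a[i] - lo));
        tmp[next[k]++] = a[i];
    }
    memcpy(a, tmp, n * sizeof *a);

    for (int c = 0; c < m; c++)          /* finish each class with quicksort */
        qsort(a + count[c], count[c + 1] - count[c], sizeof *a, cmp_int);

    free(count); free(next); free(tmp);
}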
4.2.2 Inplaced Flash Quicksort. We employ another cache optimization to improve
temporal locality in the flash quicksort, hoping to further improve overall
performance. This alternative is called inplaced flash quicksort. In this algorithm,
the first and third steps are the same as the ones in the flash quicksort. In the
second step, an additional array is used to hold the permuted elements. In the
original flashsort, a single temporary variable is used to hold the replaced element.
A cache line normally holds more than one element, so the use of a single temporary
variable minimizes the chance of data reuse. Using the additional array, we
attempt to reuse the elements in a cache line before they are replaced, and to reduce
the instruction count for copying data elements. Although this approach increases
the required memory space, it improves both cache and overall performance.
4.3 Simulation results
Figure 6 shows the instruction counts (left figure) and the L1 misses (right figure)
of the memory-tuned quicksort, the flashsort, the flash quicksort, and the inplaced
flash quicksort, on the Unbalanced data set on the simulated Pentium III which has
a faster processor (500 MHz) and a larger L2 cache (512 KBytes) than the Pentium
II. The instruction count curve of the flashsort was too high to be presented in
the left figure of Figure 6. The same figure shows that the instruction count of
the memory-tuned quicksort also began to increase rapidly as the data set size
grew. In contrast, the instruction counts of the flash quicksort and the inplaced
flash quicksort had little change as the data set size increased. The simulation
also shows that the L1 misses of the memory-tuned quicksort and the flashsort
increased much more rapidly than those of the flash quicksort and the inplaced flash
quicksort algorithms. The simulation results are consistent with our algorithm analysis, and
show the effectiveness of our new quicksort alternatives on unbalanced data sets.
Fig. 6. Simulation comparisons of the instruction counts (left figure) and the L1 misses (right
figure) of the quicksort algorithms on the Unbalanced data set on the simulated Pentium III. (The
instruction count curve of the flashsort was too high to be presented in the left figure).
5. MEASUREMENT RESULTS AND PERFORMANCE EVALUATION
We have implemented and tested all the sorting algorithms discussed in the previous
sections on all the data sets described in section 2 on a SGI O2 workstation, a Sun
Ultra-5 workstation, a Pentium II PC, and a Pentium III PC. The data sizes we
Workstations              SGI O2    Sun Ultra 5      Pentium II        Pentium III
Processor type            R10000    UltraSparc-IIi   Pentium II 400    Pentium III Xeon 500
Clock rate (MHz)          150       270              400               500
L2 cache (KBytes)         64        256              256               512
Memory latency (cycles)   208       76               68                67
Table 1. Architectural parameters of the 4 machines we have used for the experiments.
used for experiments are limited by the memory size because we focus on cache-
effective methods. We used "lmbench" [5] to measure the latencies of the memory
hierarchy at its different levels on each machine. The architectural parameters of
the 4 machines are listed in Table 1, where all specifications on the L1 cache refer
to the L1 data cache, and all L2 caches are unified. The hit times of the L1, the L2 and the
main memory are measured by lmbench [5], and their units are converted from
nanoseconds (ns) to CPU cycles.
We compared all our algorithms with the algorithms in [4] and [6]. The execution
times were collected by "gettimeofday()", a standard Unix timing function. The
reported time unit is cycles per element (CPE):
CPE = (execution time × clock rate) / N,
where execution time is the measured time in seconds, clock rate is the CPU speed
(cycles/second) of the machine where the program is run, and N is the number of
elements in the data set.
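For reference, the measurement itself can be sketched as below; CLOCK_RATE_HZ is a placeholder for the clock rate of the machine in use and is not queried automatically.

#include <sys/time.h>

#define CLOCK_RATE_HZ 400e6                /* placeholder: e.g. a 400 MHz Pentium II */

/* Run a sort on n elements and return its cost in cycles per element (CPE). */
double cycles_per_element(void (*sort)(int *, long), int *a, long n) {
    struct timeval t0, t1;
    gettimeofday(&t0, NULL);
    sort(a, n);
    gettimeofday(&t1, NULL);
    double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_usec - t0.tv_usec) / 1e6;
    return secs * CLOCK_RATE_HZ / (double)n;       /* (execution time x clock rate) / N */
}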
The performance results on all the data sets are quite consistent with our analysis.
Since the performance of the sorting algorithms using different data sets on different
machines is consistent in principle, we only present the performance results of the
mergesort algorithms using the Random data set on the 4 machines (plus performance
results of the other data sets on the Ultra 5 to show the effectiveness of the
tiled mergesort with padding), and performance results of the quicksort algorithms
using the Random and the Unbalanced data sets on the 4 machines.
5.1 Mergesort performance comparisons
We compared 5 mergesort algorithms: the base mergesort, the tiled mergesort,
the multi-mergesort, the tiled mergesort with padding, and the multi-mergesort
with TLB padding. Proportional to each machine's memory capacity, we scaled
the mergesort algorithms from N=1K up to N=16M elements. All our algorithms
showed their effectiveness on large data sets. Figure 7 shows the comparisons of
cycles per element among the 5 algorithms on the SGI O2 and the Sun Ultra 5.
The measurements on the O2 show that the multi-mergesort with TLB padding
performed the best, with execution times reduced 55% compared with the base
compared with the tiled mergesort, and 31% compared with the multi-
mergesort on 2M elements. On the other hand, the tiled mergesort with padding
performed the best on the Ultra 5, reducing execution times 45% compared with the
multi-mergesort, 26% compared with the base mergesort, and 23% compared with
the tiled mergesort on 4M elements. The multi-mergesort with TLB padding on
Ultra 5 also did well, with a 35% improvement over the multi-mergesort, 13% over
the base mergesort, and 9% over the tiled mergesort on 4M elements. The reason for
the large performance improvement on the O2 is its long memory latency
(208 cycles). This makes the cache miss reduction techniques highly effective in
improving the overall performance of the sorting algorithms. The L2 cache size of
the SGI is relatively small (64 KBytes), and the TLB is frequently used for memory
accesses. Thus, the TLB padding is very effective. In addition, both L1 and L2
caches are 2-way associative, where the data cache padding is not as effective as the
padding on a direct-mapped cache. In contrast, the Ultra 5's L1 cache is direct-
mapped, and L2 is 4 times larger than that of the O2. Thus, the data cache padding
is more effective than the TLB padding.
In order to further show the effectiveness of the tiled-mergesort with padding on
a low-associativity cache system, such as the Sun Ultra 5, we plot the performance
curves of the 5 mergesort algorithms using the other 8 data sets on the Ultra 5 in the
Appendix. Our experiments show that the tiled-mergesort with padding consistently
and significantly outperforms the other mergesort algorithms on the Ultra
5. For example, the tiled mergesort with padding achieved 70%, 68%, and 54%
execution time reductions on the Zero data set compared with the base mergesort,
the tiled mergesort, and the multi-mergesort, respectively. Using other data sets,
we also show that the tiled mergesort with padding achieved 24% to 53% execution
time reductions compared with the base mergesort, 23% to 52% reductions
compared with the tiled mergesort, and 23% to 44% reductions compared with the
multi-mergesort.
Figure 8 shows the comparisons of cycles per element among the 5 mergesort algorithms
on the Pentium II 400 and the Pentium III 500. The measurements on both
machines show that the multi-mergesort with TLB padding performed the best,
reducing execution times 41% compared with the multi-mergesort, 40% compared
with the base mergesort, and 26% compared with the tiled mergesort on 16M elements.
The L1 and L2 caches of both machines are 4-way set associative, thus, the issue of
data cache conflict misses is not a concern (as we discussed in section 3.1). Since
the TLB misses degraded performance the most in the multi-mergesort algorithm,
the padding for TLB becomes very effective in improving the performance.
In summary, the tiled mergesort with padding on machines with direct-mapped
caches is highly effective in reducing conflict misses, while the multi-mergesort with
padding performs very well on all the machines.
5.2 Quicksort performance comparisons
We used the Random data set and the Unbalanced data set to test the quicksort
algorithms on the 4 machines. The 4 quicksort algorithms are: the memory-tuned
quicksort, the flashsort, the flash quicksort, and the inplaced flash quicksort.
Fig. 7. Execution comparisons of the mergesort algorithms on SGI O2 and on Sun Ultra 5.
Fig. 8. Execution comparisons of the mergesort algorithms on Pentium II and on Pentium III.
Figure 9 shows the comparisons of cycles per element among the 4 quicksort
algorithms on the Random data set (left figure) and the Unbalanced data set (right
figure) on the SGI O2 machine. The performance results of the 4 quicksort algorithms
using the Random data set are comparable, with the memory-tuned
quicksort slightly outperforming the others. In contrast, the performance results
using the Unbalanced data set are significantly different. As we expected, the execution
times of the flash quicksort and the inplaced flash quicksort are stable, but
the memory-tuned quicksort and the flashsort performed much worse as data set
sizes increased. The timing curves of the flashsort are even too high to be presented
in the right figure in Figure 9.
Figure 10 shows the comparisons of cycles per element among the 4 quicksort
Fig. 9. Execution comparisons of the quicksort algorithms on the Random data set (left figure)
and on the Unbalanced data set (right figure) on the SGI O2. (The timing curve of the flashsort
is too high to be presented in the right figure).
algorithms on the Random data set (left figure) and the Unbalanced data set (right
figure) on the Sun Ultra 5 machine. On the Ultra 5, all 4 algorithms showed
little difference in their execution times. The flash quicksort and the inplaced flash
quicksort show their strong effectiveness on the Unbalanced data set. For example,
when the data set increased to 128K elements, the execution time of the flashsort
is more than 10 times higher than that of the other three algorithms (the curve is
too high to be plotted in the figure). When the data set increased to 4M elements,
the execution time of the memory-tuned quicksort is more than 3 times higher than
the flash quicksort and the inplaced flash quicksort, and the execution time of the
flashsort was more than 100 times higher than that of the others.
Figure 11 and Figure 12 show the comparisons of cycles per element among the
4 quicksort algorithms on the Random data set (left figure) and the Unbalanced
data set (right figure) on the Pentium II and the Pentium III machine respec-
tively. The measurements on both Pentiums on the Random data set showed that
the flashsort, the flash quicksort, and the inplaced flash quicksort had similar execution
performance and reduced execution times by around 20% compared with the memory-
tuned quicksort. Again, the flash quicksort and the inplaced flash quicksort significantly
outperformed the memory-tuned quicksort algorithm on the Unbalanced data sets
on the two Pentium machines.
6. A PREDICTION MODEL OF PERFORMANCE TRADE-OFFS
The essential issue to be considered in sorting algorithms design and other algorithms
design for memory optimization is the trade-off between the optimization
achievement-the reduction of cache misses, and the optimization effort-the increment
of instruction count. The optimization objective is to improve overall
performance-to reduce the execution time of a base algorithm. This trade-off and
the objective can be quantitatively predicted through an execution timing model.
Fig. 10. Execution comparisons of the quicksort algorithms on the Random data set (left figure)
and on the Unbalanced data set (right figure) on the Ultra 5. (The timing curve of the flashsort
is too high to be presented in the right figure).
Fig. 11. Execution comparisons of the quicksort algorithms on the Random data set (left figure)
and on the Unbalanced data set on the Pentium II. (The timing curve of the flashsort is too high
to be presented in the right figure).
The execution time of an algorithm on a computer system based on Amdahl's Law
[3] is expressed as
T = IC × CPI + CA × MR × MP, (5)
where IC is the instruction count of the algorithm, CPI is the number of cycles
per instruction of the CPU for the algorithm, CA is the number of cache accesses
of the algorithm in the execution, MR is the cache miss rate of the algorithm in
Fig. 12. Execution comparisons of the quicksort algorithms on the Random data set (left figure)
and on the Unbalanced data set on the Pentium III. (The timing curve of the flashsort is too high
to be presented in the right figure).
the execution, and MP is the miss penalty in cycles of the system. The execution
time for a base algorithm, T base , is expressed as
T_base = IC_base × CPI + CA_base × MR_base × MP, (6)
and the execution time for an optimized algorithm, T_opt, is expressed as
T_opt = IC_opt × CPI + CA_opt × MR_opt × MP, (7)
where IC base and IC opt are the instruction counts for the base algorithm and the
optimized algorithm, CA base and CA opt are the numbers of cache accesses of the
base algorithm and the optimized algorithm, and MR base and MR opt are the cache
miss rates of the base algorithm and the optimized algorithm, respectively.
In some optimized algorithms, such as the tiled mergesort and the tiled mergesort
with padding, the numbers of cache accesses are kept almost the same as that of
the base algorithm. For this type of algorithms, we combine equations (6) and (7)
with CA_base = CA_opt = CA to predict the execution time reduction rate of an
optimized algorithm as follows:
(T_base − T_opt) / T_base = (ΔMR × CA × MP − ΔIC × CPI) / (IC_base × CPI + CA × MR_base × MP),
where ΔMR = MR_base − MR_opt represents the miss rate reduction, and
ΔIC = IC_opt − IC_base represents the instruction count increment. In order to obtain a
positive execution time reduction rate, we must have
ΔMR × CA × MP > ΔIC × CPI.
This model describes the quantitative trade-off between the instruction count increase
and the miss rate reduction, and gives the condition for an optimized algo-
rithm to improve the performance of a base algorithm as follows:
ΔIC / ΔMR < (CA × MP) / CPI.
For multi-phase optimized algorithms which have different cache access patterns in
each phase, such as the multi-mergesort and the multi-mergesort with TLB padding,
we combine equations (6) and (7) with CA_base ≠ CA_opt, and obtain the condition
for an optimized algorithm to improve the performance of a base algorithm as
follows:
ΔIC / Δ(MR × CA) < MP / CPI,
where Δ(MR × CA) = MR_base × CA_base − MR_opt × CA_opt.
There are architecture related and algorithm related parameters in this prediction
model. The architecture-related parameters are CPI and MP, which are machine
dependent and can be easily obtained. The algorithm-related parameters are IC,
CA, and MR, which can be either predicted from algorithm analysis or obtained
from running the program on a simulated architecture, such as SimpleScalar. The
algorithm-related parameters can also be predicted by running the algorithms on
relatively small data sets that exceed the cache capacity on a target machine.
Using the prediction model and the parameters from the SimpleScalar simulation,
we are able to predict the execution time reduction rate of optimized algorithms.
Our study shows that the predicted results using the model are close to the measurement
results, with a 6.8% error rate.
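The single-phase form of the model (CA_base = CA_opt = CA) is easy to evaluate directly; the small C helper below is a sketch of the formula above, not of any tool used in our experiments.

/* Predicted execution time reduction rate of an optimized algorithm over a
 * base algorithm for the case CA_base = CA_opt = CA. */
double predicted_reduction(double ic_base, double ic_opt,   /* instruction counts   */
                           double cpi,                      /* cycles / instruction */
                           double ca,                       /* cache accesses       */
                           double mr_base, double mr_opt,   /* cache miss rates     */
                           double mp)                       /* miss penalty, cycles */
{
    double t_base = ic_base * cpi + ca * mr_base * mp;      /* equation (6) */
    double t_opt  = ic_opt  * cpi + ca * mr_opt  * mp;      /* equation (7) */
    return (t_base - t_opt) / t_base;    /* positive iff the miss reduction outweighs
                                            the extra instructions */
}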
7. CONCLUSION
We have examined and developed cache-effective algorithms for both mergesort
and quicksort. These algorithms have been tested on 4 representative processors of
products dating from 1995 to 1999 to show their effectiveness. The simulations provide
more insightful performance evaluation. We show that mergesort algorithms
are more architecture dependent, while the quicksort algorithms are more data set
dependent. Our techniques of padding and partitioning can also be used for other
algorithms for cache optimizations.
The only machine dependent architecture parameters for implementing the 4
methods we presented in this paper are the cache size (C), the cache line size (L),
cache associativity (K), the number of entries in the TLB cache, and a memory
page size (P s ). These parameters are becoming more and more commonly known to
users. These parameters can also be defined as variables in the programs, which will
be adaptively changed by users from machine to machine. Therefore, the programs
are easily portable among different workstations.
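For example, these parameters can be carried in a single structure and filled in per machine; the values in the sketch below are illustrative only (the line size in particular is an assumption, since it is not listed above).

/* Machine-dependent parameters used by the cache-effective sorting routines. */
struct cache_params {
    long C;            /* data cache size in bytes   */
    long L;            /* cache line size in bytes   */
    int  K;            /* cache associativity        */
    int  T_s;          /* number of TLB entries      */
    long P_s;          /* memory page size in bytes  */
};

/* Illustrative values only; set these per machine (from vendor manuals or a
 * probe such as lmbench) before running the padded and tiled sorts. */
struct cache_params params = { 16 * 1024, 32, 1, 64, 8 * 1024 };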
ACKNOWLEDGMENTS
Many students in the Advanced Computer Architecture class offered in Spring
1999 participated in discussions of cache-effective sorting algorithms and their imple-
mentations. In particular, Arun S. Mangalam made an initial suggestion to combine
the quicksort and the flashsort. We also thank Alma Riska, Zhao Zhang, and
Zhichun Zhu for their comments on the work and their help with the simulations.
8. APPENDIX
Fig. 13. Execution comparisons of the mergesort algorithms on Sun Ultra 5 using the Equilikely
data set (left figure) and the Bernoulli data set (right figure).
Fig. 14. Execution comparisons of the mergesort algorithms on Sun Ultra 5 using the Geometric
data set (left figure) and the Pascal data set (right figure).
Fig. 15. Execution comparisons of the mergesort algorithms on Sun Ultra 5 using the Binomial
data set (left figure) and the Poisson data set (right figure).
Fig. 16. Execution comparisons of the mergesort algorithms on Sun Ultra 5 using the Unbalanced
data set (left figure) and the Zero data set (right figure).
--R
[1] The SimpleScalar Tool Set.
[2] "Avoiding conflict misses dynamically in large direct-mapped caches."
[3] Computer Architecture: A Quantitative Approach.
[4] "The influence of caches on the performance of sorting."
[5] "lmbench: portable tools for performance analysis."
[6] "The Flashsort1 algorithm."
[7] A First Course.
[8] "Data transformations for eliminating conflict misses."
[9] "Implementing quicksort programs."
[10] "Cacheminer: a runtime approach to exploit cache locality on SMP."
[11] "Cache-optimal methods for bit-reversals."
--CTR
Chen Ding , Yutao Zhong, Compiler-directed run-time monitoring of program data access, ACM SIGPLAN Notices, v.38 n.2 supplement, p.1-12, February
Ranjan Sinha , Justin Zobel, Using random sampling to build approximate tries for efficient string sorting, Journal of Experimental Algorithmics (JEA), 10, 2005
Ranjan Sinha , Justin Zobel, Cache-conscious sorting of large sets of strings with dynamic tries, Journal of Experimental Algorithmics (JEA), v.9 n.es, 2004
Protecting RFID communications in supply chains, Proceedings of the 2nd ACM symposium on Information, computer and communications security, March 20-22, 2007, Singapore
Ranjan Sinha , Justin Zobel , David Ring, Cache-efficient string sorting using copying, Journal of Experimental Algorithmics (JEA), 11, 2006
Gayathri Venkataraman , Sartaj Sahni , Srabani Mukhopadhyaya, A blocked all-pairs shortest-paths algorithm, Journal of Experimental Algorithmics (JEA), 8,
Song Jiang , Xiaodong Zhang, Token-ordered LRU: an effective page replacement policy and its implementation in Linux systems, Performance Evaluation, v.60 n.1-4, p.5-29, May 2005
Allocations for Jobs with Known and Unknown Memory Demands, IEEE Transactions on Parallel and Distributed Systems, v.13 n.3, p.223-240, March 2002
James D. Fix, The set-associative cache performance of search trees, Proceedings of the fourteenth annual ACM-SIAM symposium on Discrete algorithms, January 12-14, 2003, Baltimore, Maryland
Bernard M. E. Moret , Tandy Warnow, Reconstructing optimal phylogenetic trees: a challenge in experimental algorithmics, Experimental algorithmics: from algorithm design to robust and efficient software, Springer-Verlag New York, Inc., New York, NY, 2002
Bernard M. E. Moret , David A. Bader , Tandy Warnow, High-Performance Algorithm Engineering for Computational Phylogenetics, The Journal of Supercomputing, v.22 n.1, p.99-111, May 2002
Gerth Stlting Brodal , Rolf Fagerberg , Kristoffer Vinther, Engineering a cache-oblivious sorting algorithm, Journal of Experimental Algorithmics (JEA), 12, 2007 | mergesort;memory performance;quicksort;caches;TLB |
384248 | Finding the right cutting planes for the TSP. | Given an instance of the Traveling Salesman Problem (TSP), a reasonable way to get a lower bound on the optimal answer is to solve a linear programming relaxation of an integer programming formulation of the problem. These linear programs typically have an exponential number of constraints, but in theory they can be solved efficiently with the ellipsoid method as long as we have an algorithm that can take a solution and either declare it feasible or find a violated constraint. In practice, it is often the case that many constraints are violated, which raises the question of how to choose among them so as to improve performance. For the simplest TSP formulation it is possible to efficiently find all the violated constraints, which gives us a good chance to try to answer this question empirically. Looking at random two dimensional Euclidean instances and the large instances from TSPLIB, we ran experiments to evaluate several strategies for picking among the violated constraints. We found some information about which constraints to prefer, which resulted in modest gains, but were unable to get large improvements in performance. | Introduction
Given some set of locations and a distance function, the Traveling Salesman
Problem (TSP) is to find the shortest tour, i.e., simple cycle through all of the
locations. This problem has a long history (see, e.g. [11]) and is a famous example
of an NP-hard problem. Accordingly, there is also a long history of heuristics for
finding good tours and techniques for finding lower bounds on the length of the
shortest tour.
In this paper we focus on one well-known technique for finding lower bounds.
The basic idea is to formulate the TSP as an integer (linear) program (IP),
but only solve a linear programming (LP) relaxation of it. The simplest such
formulation is the following IP:
Variables: x_ij for each pair {i, j} of cities
Objective: minimize Σ_{i<j} d_ij x_ij
(This work was done while the author was at AT&T Labs-Research.)
Constraints:
(1) x_ij ∈ {0, 1} for each pair {i, j}
(2) Σ_{j≠i} x_ij = 2 for each city i
(3) Σ_{i∈S, j∉S} x_ij ≥ 2 for each nonempty proper subset S of the cities
The interpretation of this program is that x ij will tell us whether or not we
go directly from location i to location j. The first constraints say that we must
either go or not go; the second say that we must enter and leave each city exactly
once; and the third guarantee that we get one large cycle instead of several little
(disjoint) ones. The third constraints are called subtour elimination constraints,
and will be the main concern of our work.
We relax this IP to an LP in the standard way by replacing the first constraints
with 0 ≤ x_ij ≤ 1. Observe that any solution to the IP will be a solution
to the LP, so the optimum we find can only be smaller than the original opti-
mum. Thus we get a lower bound, which is known as the Held-Karp bound[5, 6].
Experimental analysis has shown that this bound is pretty good: for random
two dimensional Euclidean instances, asymptotically the bound is only about
0:7% different from the optimal tour length, and for the "real-world" instances
of TSPLIB [12], the gap is usually less than 2% [7]. And if the distances obey the
triangle inequality, the bound will be at least 2=3 of the length of the optimal
tour [13, 15]. It is possible to give more complicated IPs whose relaxations have
smaller gaps, but we did not attempt to work with them for reasons that we will
explain after we have reviewed the method in more detail.
Observe that it is not trivial to plug this linear program into an LP solver,
because there are exponentially many subtour elimination constraints. Neverthe-
less, even in theory, there is still hope for efficiency, because the ellipsoid method
[4] only requires an efficient separation algorithm, an algorithm that takes a solution
and either decides that it is feasible or gives a violated constraint. For
the subtour elimination constraints, if we construct a complete graph with the
set of locations as vertices and the x ij as edge weights, it suffices to determine
whether or not the minimum cut of this graph, the way to separate the vertices
into two groups so that the total weight of edges crossing between the groups is
minimized, is less than two. If it is, the minimum cut gives us a violated constraint
(take the smaller of the two groups as S in the constraint); if not we are
feasible. Many algorithms for finding minimum cuts are known, ranging from
algorithms that follow from the early work on maximum flows in the 1950s [3]
to a recent Monte Carlo randomized algorithm that runs in O(m log 3 n) time on
a graph with m edges and n vertices [9].
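As an illustration of how a cut is turned into an LP row, the C sketch below builds the subtour elimination constraint for a given node set S; the edge arrays are illustrative and do not correspond to concorde's data structures.

/* Build the subtour elimination row for a node set S: the x variables on edges
 * with exactly one endpoint in S must sum to at least 2.  in_S[v] = 1 iff node v
 * is in S; tail/head list the endpoints of the nedges sparse-graph edges.
 * Returns the number of edges placed in row_edges. */
int build_subtour_row(const int *in_S, const int *tail, const int *head,
                      int nedges, int *row_edges) {
    int cnt = 0;
    for (int e = 0; e < nedges; e++)
        if (in_S[tail[e]] != in_S[head[e]])        /* edge crosses the cut */
            row_edges[cnt++] = e;
    return cnt;                  /* constraint: sum of x_e over these edges >= 2 */
}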
Even better, it is possible to find all near-minimum cuts (as long as the graph
is connected) and thus find all the violated subtour elimination constraints. This
leads us to ask an obvious question: which violated constraints do we want to use?
When more than one constraint is violated, reporting certain violated constraints
before others may lead to a shorter overall running time. The primary goal of
this work is to explore this question.
There are several algorithms for finding all near-minimum cuts. They include
a flow-based algorithm due to Vazirani and Yannakakis [14], a contraction-based
algorithm due to Karger and Stein [10], and a tree-packing-based algorithm due
to Karger [9]. We chose to use the Karger-Stein algorithm, primarily because
of an implementation we had available that was not far from what we needed.
We did not try the others. We believe that the time to find the cuts is small
enough compared to the time spent by the LP solver that it would not change
our results significantly.
At this point we should justify use of the simplest IP. Our decision was made
partly on the grounds of simplicity and historical precedent. A better reason is
that with the simplest IP we can use the Karger-Stein minimum cut algorithm
and find all of the violated constraints. One can construct more complicated IPs
that give better bounds by adding more constraints to this simple IP, and there
are useful such constraints that have separation algorithms, but for none of the
sets of extra constraints that people have tried is it known how to efficiently find
all of the violated constraints, so it would be more difficult to determine which
constraints we would like to use. It may still be possible to determine which
constraints to use for a more complicated IP, but we leave that as a subject for
further research. Note that the constraints of the more complicated IPs include
the constraints of the simple IP, so answering the question for the simple IP is a
reasonable first step towards answering the question for more complicated IPs.
We found that it is valuable to consider only sets of small, disjoint constraints.
Relatedly, it seems to be better to fix violations in small areas of the graph
first. This strategy reduces both the number of LPs we have to solve and the
total running time. We note that it is interesting that we got this improvement
using the Karger-Stein algorithm, because in the context of finding one minimum
cut, experimental studies have found that other algorithms perform significantly
better [2, 8]. So our results are a demonstration that the Karger-Stein algorithm
can be useful in practice.
The rest of this paper is organized as follows. In Sect. 2 we give some important
details of the implementations that we started with. In Sect. 3 we discuss
the constraint selection strategies that we tried and the results we obtained. Fi-
nally, in Sect. 4 we summarize the findings and give some possibilities for future
work.
Starting Implementation
In this section we give some details about the implementations we started with.
We will discuss our attempts at improving them in Sect. 3. For reference, note
that we will use n to denote the number of cities/nodes and will refer to the
total edge weight crossing a cut as the value of the cut.
2.1 Main Loop
The starting point for our work is the TSP code concorde written by Applegate,
Bixby, Chv'atal, and Cook [1]. This code corresponds to the state of the art in
lower bound computations for the TSP. Of course it wants to use far more that
the subtour elimination constraints, but it has a mode to restrict to the simple
IP. From now on, when we refer to "the way concorde works", we mean the
way it works in this restricted mode. We changed the structure of this code very
little, mainly just replacing the algorithm that finds violated constraints, but
as this code differs significantly from the theoretical description above, we will
review how it works.
First of all, concorde does not use the ellipsoid method to solve the LP.
Instead it uses the simplex method, which has poor worst-case time bounds but
typically works much better in practice. Simplex is used as follows:
1. start with an LP that has only constraints (1) and (2)
2. run the simplex method on the current LP
3. find some violated subtour elimination constraints and add them to the LP;
if none terminate
4. repeat from 2
Observe that the initial LP describes the fractional 2-matching problem, so
concorde gets an initial solution by running a fractional 2-matching code rather
than by using the LP solver.
Second, it is important to pay attention to how cuts are added to the LP
before reoptimizing. There is overhead associated with a run of the LP solver, so
it would be inefficient to add only one cut at a time. On the other side, since not
many constraints will actually constrain the optimal solution, it would be foolish
to overwhelm the LP solver with too many constraints at one time. Notice also
that if a constraint is not playing an active role in the LP, it may be desirable
to remove it so that the LP solver does not have to deal with it in the future.
Thus concorde uses the following general process for adding constraints to the
LP, assuming that some have been found somehow and placed in a list:
1. go through the list, picking out constraints that are still violated until 250
are found or the end of the list is reached
2. add the above constraints to the LP and reoptimize
3. of the newly added constraints, only keep the ones that are in the basis
Thus at most 250 constraints are added at a time, and a constraint only
stays in the LP if it plays an active role in the optimum when it is added. After
a constraint is kept once, it is assumed to be sufficiently relevant that it is not
allowed to be thrown away for many iterations. In our case, for simplicity, we
never allowed a kept cut to leave again. (Solving this simple IP takes few enough
iterations that this change shouldn't have a large impact.)
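In outline, the cut-adding step looks like the sketch below. The lp_* routines and cut_is_violated are illustrative stand-ins for whatever LP interface is available (they are not CPLEX or concorde calls); the sketch only mirrors the three steps listed above.

#define BATCH 250

struct cut;                                        /* a stored subtour cut (opaque here) */
struct lp;                                         /* the current LP (opaque here)       */
int  cut_is_violated(struct cut *, struct lp *);
int  lp_num_rows(struct lp *);
void lp_append_rows(struct lp *, struct cut **, int);
void lp_optimize(struct lp *);
int  lp_row_in_basis(struct lp *, int);
void lp_delete_row(struct lp *, int);

/* One round of adding stored cuts to the LP, mirroring the three steps above. */
void add_cuts(struct cut **cuts, int ncuts, struct lp *lp) {
    struct cut *batch[BATCH];
    int b = 0;
    for (int i = 0; i < ncuts && b < BATCH; i++)   /* pick at most 250 still-violated cuts */
        if (cut_is_violated(cuts[i], lp))
            batch[b++] = cuts[i];
    if (b == 0) return;

    int first_row = lp_num_rows(lp);
    lp_append_rows(lp, batch, b);                  /* add them to the LP */
    lp_optimize(lp);                               /* reoptimize         */

    for (int r = first_row + b - 1; r >= first_row; r--)
        if (!lp_row_in_basis(lp, r))               /* keep only the newly added rows that   */
            lp_delete_row(lp, r);                  /* are in the basis; drop the rest       */
}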
Third, just as it is undesirable to work with all of the constraints at once, it
is also undesirable (in practice) to work with all of the variables at once, because
most of them will be 0 in the optimal solution. So there is a similar process of
selecting a few of the variables to work with, solving the LP with only those
variables, and later checking to see if some other variables might be needed, i.e.,
might be non-zero in an optimal solution. The initial sparse graph comes from a
greedy tour plus the 4 nearest neighbors with respect to the reduced costs from
the fractional 2-matching.
Finally, it is not necessary to find minimum cuts to find violated constraints.
If the graph is disconnected, then each connected component defines a violated
constraint. In fact, any set of connected components defines a violated con-
straint, giving a number of violated constraints exponential in the number of
components, so concorde only considers the constraints defined by one compo-
nent. This choice makes sense, because if each connected component is forced to
join with another one, we make good progress, at least halving the number of
components.
Another heuristic concorde uses is to consider the cuts defined by a segment
of a pretty good tour, i.e, a connected piece of a tour. concorde uses heuristics
to find a pretty good tour at the beginning, and the authors noticed that cuts
they found often corresponded to segments, so they inverted the observation
as a heuristic. We mention this heuristic because it is used in the original im-
plementation, which we compare against, but we do not use it in our modified
code.
Finally, a full pseudo-code description of what the starting version of concorde
does:
find an initial solution with a fractional 2-matching code
build the initial sparse graph: a greedy tour plus the 4 nearest neighbors in
fractional 2-matching reduced costs
do
do
add connected component cuts (*)
add segment cuts (*)
if connected, add flow cuts (*)
else add connected component cuts
if no cuts added OR a fifth pass through loop,
check 50 nearest neighbor edges to see if they need to be added
while cuts added OR edges added
check all edges to see if they need to be added
while edges added
Note that the lines marked with (*) are where we make our changes, and the
adding of cuts on these lines is as above, which includes calling the LP solver.
2.2 Karger-Stein Minimum Cut Algorithm
The starting implementation of the Karger-Stein minimum cut algorithm (KS) is
the code written by Chekuri, Goldberg, Karger, Levine, and Stein [2]. Again, we
did not make large modifications to this code, but it already differs significantly
from the theoretical description of the algorithm. The original algorithm is easy
to state:
if the graph has less than 7 nodes, solve by brute force
repeat twice:
repeat until only n/√2 nodes remain:
randomly pick an edge with probability proportional to edge weight
and contract the endpoints
run recursively on the contracted graph
Contracting two vertices means merging them and combining the resulting parallel
edges by adding their weights. It is easy to see that contraction does not
create any cuts and does not destroy a cut unless nodes from opposite sides are
contracted.
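A single contraction step can be sketched as follows: choose an edge with probability proportional to its weight and merge its endpoints in a union-find structure. This is an illustration only; as noted below, the implemented code selects many edges in one pass.

#include <stdlib.h>

static int find_root(int *parent, int v) {         /* union-find with path halving */
    while (parent[v] != v) { parent[v] = parent[parent[v]]; v = parent[v]; }
    return v;
}

/* Contract one edge chosen with probability proportional to its weight.
 * tail/head/w describe the m edges; parent is a union-find array over the nodes.
 * Returns 1 if two distinct super-nodes were merged, 0 otherwise. */
int contract_random_edge(int *parent, const int *tail, const int *head,
                         const double *w, int m) {
    double total = 0.0;
    for (int e = 0; e < m; e++) total += w[e];
    if (m == 0 || total <= 0.0) return 0;

    double r = ((double)rand() / RAND_MAX) * total, acc = 0.0;
    int chosen = m - 1;
    for (int e = 0; e < m; e++) {                  /* weighted random selection */
        acc += w[e];
        if (r <= acc) { chosen = e; break; }
    }
    int a = find_root(parent, tail[chosen]);
    int b = find_root(parent, head[chosen]);
    if (a == b) return 0;                /* endpoints already merged (self-loop) */
    parent[a] = b;                       /* contract: merge the two super-nodes  */
    return 1;
}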
The idea of the algorithm is that we are not so likely to destroy the minimum
cut, because by definition there are relatively few edges crossing it. In particular,
the random contractions to n/√2 nodes give at least a 50% chance of preserving
the minimum cut. Thus if we repeat the contraction procedure twice, there is a
reasonable chance that the minimum cut is preserved in one of the recursive calls,
so there is a moderate (Ω(1/log n)) chance that the minimum cut is preserved
in one of the base cases. By repeating the entire procedure O(log n) times, the
success probability can be improved to 1 \Gamma 1=n.
Of course we are not just interested in minimum cuts; we want all of the cuts
of value less than 2. We can find these by doing fewer contractions at a time,
that is, leaving more than n/√2 nodes. This modification makes cuts that are
near-minimum (which a cut of value 2 hopefully is) also have a good chance of
being found.
As KS is a Monte-Carlo algorithm (there is no easy way to be sure it has given
the right answer) and we did not want to affect the correctness of concorde,
whenever our implementation of KS found no cuts of value less than two, we always
double-checked with concorde's original flow-based cut finding algorithm.
Later, when we refer to implementations that use only KS to find cuts, we will
really mean that they always use KS, unless KS fails to find any cuts. Typically,
by the time KS failed to find any cuts, we were either done or very close to it,
so it is reasonable to ignore the fact that a flow algorithm is always still there.
An important thing to notice in KS is that we have two parameters to play
with. One is how many contractions we do at a time, which governs the depth
and success probability of the recursion. The other is how many times we run the
whole procedure. In order to achieve a specific success probability, we can only
choose one of these. If we are willing to do away with the theoretical analysis and
make a heuristic out of this algorithm, we can choose both. Since we do have a
correctness check in place, making KS a heuristic is a reasonable thing to do. In
particular, we started with the first parameter set so that we would find all cuts
of value less than two with
probability Ω(1/log n), and the second parameter set
to three. We found that three iterations was sufficient to typically find a good
fraction (approximately two thirds) of the cuts, and this performance seemed
good enough for our purposes. Later, after we had gathered some information
about the cuts, we worried about reducing the time spent by KS and set the
first parameter such that we would only find cuts of value less than one with
probability Ω(1/log n). Note that regardless of the setting of the first parameter,
the code will always report all the cuts of value less than two that it finds. So
the later version of the code does not disregard higher value cuts as a result of
changing the parameter, it merely has a lower chance of finding them.
The implemented version chooses edges for contraction in one pass, rather
than one at a time. This modification can allow more contractions under certain
good circumstances, but can cause trouble, because it is possible to get unlucky
and have the recursion depth get large. See [2] for a thorough discussion of the
implemented version. A change we made here was to repeat the contraction step
if nothing gets contracted; while this change is an obvious one to make, it likely
throws off the analysis a bit. Since we will make the algorithm a heuristic anyway,
we chose not to worry about exactly what this little change does. Note that
we had to disable many of the Padberg-Rinaldi heuristics used in the starting
implementation, because they only work if we are looking for minimum cuts, not
near-minimum cuts.
We also had to make some modifications so that we could run on disconnected
graphs. If the graph is disconnected, there can be exponentially many minimum
cuts, so we cannot hope to report them all. At first we worked around the problem
of disconnected graphs by forcing the graph to be connected, as the starting
implementation of concorde does. However, later in the study we wanted to try
running KS earlier, so we had to do something about disconnected graphs. Our
new workaround was to find the connected components, report those as cuts,
and run KS in each component. This modification ignores many cuts, because
a connected component can be added to any cut to form another cut of the
same value. We chose this approach because 1) we had to do something, and 2)
other aspects of our experiments, which we describe shortly, suggest that this
approach is appropriate.
One last modification that we made was to contract out paths of edges of
weight one at the beginning. The point of this heuristic is that if any edge on
a path of weight one is in a small cut, then every such edge is in a small cut.
So we would find many cuts that were very similar. Our experiments suggested
that it is more useful to find violated constraints that are very different, so we
used this heuristic to avoid finding some similar cuts.
3 Experiments and Results
3.1 Experimental Setup
Our experiments were run on an SGI multiprocessor (running IRIX 6.2) with
processors. The code was not parallelized, so it only
ran on one processor, which it hopefully had to itself. The machine had 6 Gb of
main memory and 1 Mb L2 cache. The code was compiled with SGI cc 7.2, with
the -O2 optimization option and set to produce 64 bit executables. CPLEX 5.0
was used as the LP solver.
Several processes were run at once, so there is some danger that contention
for the memory bus slowed the codes down, but there was nothing easy we could
do about it, and we do not have reason to believe it was a big problem. In any
case, all the codes were run under similar conditions, so the comparisons should
be fair.
We used two types of instances. One was random two dimensional Euclidean
instances generated by picking points randomly in a square. The running times
we report later are averages over 3 random seeds. The second type of instance
was "real-world", from TSPLIB. We tested on rl11849, usa13509, brd14051,
pla33810, and pla85900.
3.2 Observations and Modifications
We started our study by taking concorde, disabling the segment cuts, and substituting
KS for the flow algorithm. So the first time the algorithm's behavior
changed was after the graph was connected, when KS was first called. At this
point we gathered some statistics for a random 10000 node instance about the
cuts that were found and the cuts that were kept. Figure 1 shows two histograms
comparing cuts found to cuts kept. The first is a histogram of the size of the cuts
found and kept, that is, the number of nodes on the smaller side. The second
shows a similar histogram of the value of the cuts found and kept. Note that the
scaling on the Y-axis of these histograms is unusual.
These histograms show several interesting features. First, almost all of the
kept cuts are very small-fewer than 100 nodes. The found cuts are also very
biased towards small cuts, but not as much. For example, many cuts of size
approximately 2000 were found, but none were kept. A second interesting feature
is that the minimum cut is unique (of value approximately .3), but the smallest
kept cut is of value approximately .6, and most of the kept cuts have value
one. This observation immediately suggests that it is not worthwhile to consider
only minimum cuts, because they are few in number and not the cuts you want
anyway. Furthermore, it appears that there is something special about cuts of
value one, as a large fraction of them are kept.
To try to get a better idea of what was going on, we took a look at the
fractional solution. Figure 2 shows the fractional 2-matching solution for a 200
node instance, which is what the cut finding procedures are first applied to.
Not surprisingly, this picture has many small cycles, but the other structure
that appears several times is a path, with some funny edge weights at the end
that allow it to satisfy the constraints. The presence of these structures suggests
looking at biconnected components in the graph induced by non-zero weight
edges. A long path is in some sense good, because a tour looks locally like a
path, but it is important that the two ends meet, which is to say that the graph
must not only be connected but biconnected.
Fig. 1. Histograms of the size (left) and value (right) of the cuts found and kept after the
graph first becomes connected, for a random 10000 city instance. Gray bars represent found
cuts and black bars represent kept cuts. Note that there is unusual scaling on the Y-axes.
Fig. 2. Picture of a fractional 2-matching (initial solution) for a 200 node instance.
Edge weights are indicated by shading, from white= 0 to black= 1.
We tried various ways of incorporating biconnected components before finding
a method that worked well. There are two issues. One is when to try to find
biconnected components. Should we wait until the graph is connected, or check
for biconnectivity in the connected components? The second issue is what cuts to
report. Given a long path, as above, there is a violated constraint corresponding
to every edge of the path. Or stated more generally, there is a violated constraint
corresponding to every articulation point (a node whose removal would
disconnect the graph). Our first attempt was to look for biconnected components
only once the graph was connected, and to report all the violated constraints.
This approach reduced the number of iterations of the main loop, but took more
time overall. Running biconnected components earlier reduced the number of
iterations further, but also took too long.
So to reduce the number of cuts found, we modified the biconnected components
code to report only constraints corresponding to biconnected components
that had one or zero articulation points. (Note that a biconnected component
with 0 articulation points is also a connected component.) The idea behind this
modification is the same idea that was used to report constraints based on the
connected components. In that context, we said that it made sense to consider
only the constraints corresponding to the individual components, thus picking
out small, disjoint constraints. Likewise, taking biconnected components with
one or zero articulation points picks out small, disjoint constraints. This use of
biconnected components proved valuable; it both reduced the number of iterations
of the outer loop and reduced the overall running time. Accordingly, it is
only this version that we give data for in the results section.
Our experience with KS was similar. Using it to find all of the violated
constraints turned up so many that even though we saved iterations, the total
running time was far worse. And it seemed to be faster to run KS right from the
beginning, not waiting for the graph to become connected. So we generalized the
idea above. We only report the smallest disjoint cuts; that is, we only report a
cut if no smaller (in number of nodes) cut shares any nodes with it. It should be
easy to see that the rules given above for selecting cuts from connected or biconnected
components are special cases of this rule. So our eventual implementation
with KS uses it right from the beginning, and always reports only the smallest
cuts. Note that our handling of disconnected graphs is consistent with finding
smallest cuts. Notice also that reporting only smallest cuts means we do not
introduce a constraint that will force a connected component to join the other
connected components until the component itself satisfies the subtour elimination
constraints. This choice may seem foolish, not introducing a constraint we
could easily find, but what often happens is that we connect the component to
another in the process of fixing violations inside the component, so it would have
been useless to introduce the connectivity constraint.
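As an illustration of the selection rule just described, the filter could look as follows in C++. This is only a sketch, not the code actually used in the experiments; the function name and the representation of cuts as node sets are ours, and the real implementation never materializes all cuts first (see the remarks on KS below).
#include <set>
#include <vector>

// Keep a violated cut, given by its node set, only if no strictly smaller cut found in the
// same round shares a node with it. Cuts of equal size never exclude each other here.
std::vector<std::set<int>> smallestDisjointCuts(const std::vector<std::set<int>>& cuts) {
    std::vector<std::set<int>> kept;
    for (const auto& c : cuts) {
        bool excluded = false;
        for (const auto& d : cuts) {
            if (d.size() >= c.size()) continue;          // only strictly smaller cuts matter
            for (int v : d)
                if (c.count(v)) { excluded = true; break; }
            if (excluded) break;
        }
        if (!excluded) kept.push_back(c);
    }
    return kept;
}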
In this vein, we discovered that it could actually be harmful to force the
graph to be connected before running KS. It would not be surprising if the
time spent getting the graph connected was merely wasted, but we actually saw
instances where the cut problem that arose after the graph was connected was
much harder than anything KS would have had to deal with if it had been
run from the beginning. The result was that finding connected components first
actually cost a factor of two in running time.
Note that the implementation of selecting the smallest cuts was integrated
into KS. We could have had KS output all of the cuts and then looked through
them to pick out the small ones, but because KS can easily keep track of sizes
as it contracts nodes, it is possible to never consider many cuts and save time.
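The size bookkeeping alluded to here can be sketched with a union-find structure over the contracted super-nodes; the struct and its names are illustrative and the contraction algorithm (KS) itself is not shown.
#include <utility>
#include <vector>

// Tracks how many original cities each contracted super-node represents, so that a cut
// candidate whose side is already larger than the smallest violated cut seen so far can be
// skipped without ever materializing its node list.
struct ContractionSizes {
    std::vector<int> parent, size;
    explicit ContractionSizes(int n) : parent(n), size(n, 1) {
        for (int i = 0; i < n; ++i) parent[i] = i;
    }
    int find(int v) { return parent[v] == v ? v : parent[v] = find(parent[v]); }
    void contract(int a, int b) {              // merge the super-nodes containing a and b
        a = find(a); b = find(b);
        if (a == b) return;
        if (size[a] < size[b]) std::swap(a, b);
        parent[b] = a;
        size[a] += size[b];
    }
    int sideSize(int v) { return size[find(v)]; }
};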
There is also one observation that we failed to exploit. We noticed that
the value histogram shows some preference for keeping cuts that have small
denominator value; most kept cuts had value one, then several more had value
3/2, a few more had 4/3 and 5/3, etc. Unfortunately, we did not come up with a
way to exploit this observation. We tried sorting the cuts by denominator before
adding them to the LP, hoping that we would then get the cuts we wanted first
and be able to quickly discard the others, but were unable to get a consistent
improvement this way. Even when there was an improvement, it was not clear
whether it was entirely due to the fact that cuts of value one got added first.
3.3 Results
We present our data in the form of plots, of which there are three types. One
reports total time, another reports the number of LPs solved, and the third
only considers the time to add cuts (as described above), which counts only the
time to process the lists of cuts and reoptimize, but not the time to find the
cuts. The total time includes everything: the time to add cuts, the time to find
them, the time to add edges, the time to get an initial solution, etc. We also
have two classes of plots for our two classes of inputs, random instances and
TSPLIB instances. All times are reported relative to the square of the number
of cities, as this function seems to be the approximate asymptotic behavior of
the implementations. More precisely, the Y-axes of the plots that report times
are always 1000000 * (time in seconds) / n^2 . This scale allows us to see how the
algorithms compare at both large and small problem sizes. Note also that the
X-axes of the TSPLIB plots are categorical, that is, they have no scale. Table 1
summarizes the implementations that appear in the plots.
Short name      Description
starting point  original concorde implementation
w/o segments    original with segment cuts disabled
biconnect       using smallest biconnected components instead of connected
KS              all cut finding done by KS
KS1             all cut finding done by KS, probabilities set for cuts of value
Table 1. Summary of the implementations for which we report data
There are several things to notice about the plots. First, looking at the time
to add cuts for random instances (Fig. 3), we see that using either biconnected
components or KS consistently improves the time a little bit. Furthermore, using
the version of KS with looser probabilities causes little damage. Unfortunately,
the gain looks like it may be disappearing as we move to larger instances. Looking
at the number of LPs that we had to solve shows a clearer version of the same
results (Fig. 4). Looking at the total time (Fig. 5), we see the difference made by
adjusting the probabilities in KS. The stricter version of KS is distinctly worse
than the original concorde, whereas the looser version of KS is, like the version
with biconnected components, a bit better. Looking at the total time, it looks
even more like our gains are disappearing at larger sizes.
The story is similar on the TSPLIB instances (Figs. 6, 8, 7). Biconnected components
continue to give a consistent improvement. KS gives an improvement
on some instances, but not on the PLA instances.
Fig. 3. Random Instances: Add Cuts Running Time (normalized by the squared number of cities), comparing starting point, w/o segments, biconnect, KS, and KS1.
Fig. 4. Random Instances: LPs Solved, for the same five implementations.
Fig. 5. Random Instances: Total Running Time.
Fig. 6. TSPLIB Instances: Add Cuts Running Time.
Fig. 7. TSPLIB Instances: LPs Solved.
Fig. 8. TSPLIB Instances: Total Running Time.
The strategy of looking for the smallest cuts seems to be a reasonable idea, in
that it reduces the number of iterations and improves the running time a bit,
but the gain is not very big. It also makes some intuitive sense that by giving
an LP solver a smallest region of the graph where a constraint is violated, it will
encourage the solver to really fix the violation, rather than move the violation
around.
It is worth noting that the right cuts are definitely not simply the ones that
are easiest to find. As mentioned above, it was possible to slow the implementation
down significantly by trying to use easy-to-find cuts first.
It is also interesting that it is possible to make any improvement with KS
over a flow based code, because experimental studies indicate that for finding
one minimum cut, it is generally much better to use the flow-based algorithm of
Hao and Orlin. So our study suggests a different result: KS's ability to find all
near-minimum cuts can in fact make it practical in situations where the extra
cuts might be useful.
For future work, it does not seem like it would be particularly helpful to
work harder at finding subtour elimination constraints for the TSP. However,
studies of which constraints to find in more complicated IPs for the TSP could
be more useful, and it might be interesting to investigate KS in other contexts
where minimum cuts are used.
Acknowledgements
Many thanks to David Johnson for several helpful discussions and suggestions,
including the suggestion that I do this work in the first place. Many thanks to
David Applegate for help with concorde and numerous helpful discussions and
suggestions.
--R
On the solution of traveling salesman problems.
Experimental study of minimum cut algorithms.
Geometric Algorithms and Combinatorial Optimization
The traveling-salesman problem and minimum spanning trees
The traveling-salesman problem and minimum spanning trees: Part ii
Asymptotic experimental analysis for the held-karp traveling salesman bound
Practical performance of efficient minimum cut algorithms.
Minimum cuts in near-linear time
A new approach to the minimum cut problem.
Rinooy Kan
Analyzing the held-karp tsp bound: A monotonicity property with applications
Suboptimal cuts: Their enumeration
Heuristic analysis
--TR
Analyzing the Held-Karp TSP bound: a monotonicity property with application
Minimum cuts in near-linear time
Asymptotic experimental analysis for the Held-Karp traveling salesman bound
Suboptimal Cuts | combinatorial optimization;performance;experimentation;algorithms;minimum cut;traveling salesman problem;cutting plane |
384249 | Fast priority queues for cached memory. | The cache hierarchy prevalent in today's high performance processors has to be taken into account in order to design algorithms that perform well in practice. This paper advocates the adaptation of external memory algorithms to this purpose. This idea and the practical issues involved are exemplified by engineering a fast priority queue suited to external memory and cached memory that is based on k-way merging. It improves previous external memory algorithms by constant factors crucial for transferring it to cached memory. Running in the cache hierarchy of a workstation the algorithm is at least two times faster than an optimized implementation of binary heaps and 4-ary heaps for large inputs. | Introduction
The mainstream model of computation used by algorithm designers in the last
half century [18] assumes a sequential processor with unit memory access cost.
However, the mainstream computers sitting on our desktops have increasingly
deviated from this model in the last decade [10, 11, 13, 17, 19]. In particular, we
usually distinguish at least four levels of memory hierarchy: A file of multi-ported
registers can be accessed in parallel in every clock-cycle. The first-level cache can
still be accessed every one or two clock-cycles but it has only few parallel ports
and only achieves the high throughput by pipelining. Therefore, the instruction
level parallelism of super-scalar processors works best if most instructions use
registers only. Currently, most first-level caches are quite small (8-64KB) in order
to be able to keep them on chip and close to the execution unit. The second-level
cache is considerably larger but also has an order of magnitude higher latency.
If it is off-chip, its size is mainly constrained by the high cost of fast static RAM.
The main memory is built of high density, low cost dynamic RAM. Including
all overheads for cache miss, memory latency and translation from logical over
virtual to physical memory addresses, a main memory access can be two orders
of magnitude slower than a first level cache hit. Most machines have separate
caches for data and code so that we can disregard instruction reads as long as
the programs remain reasonably short.
Although the technological details are likely to change in the future, physical
principles imply that fast memories must be small and are likely to be more
expensive than slower memories so that we will have to live with memory hierarchies
when talking about sequential algorithms for large inputs.
The general approach of this paper is to model one cache level and the main
memory by the single disk single processor variant of the external memory model
[22]. This model assumes an internal memory of size M which can access the
external memory by transferring blocks of size B. We use the word pairs "cache
line" and "memory block", "cache" and "internal memory", "main memory"
and "external memory" and "I/O" and "cache fault" as synonyms if the context
does not indicate otherwise. The only formal limitation compared to external
memory is that caches have a fixed replacement strategy. In another paper, we
show that this has relatively little influence on algorithms of the kind we are
considering. Nevertheless, we henceforth use the term cached memory in order
to make clear that we have a different model.
Despite of the far-reaching analogy between external memory and cached
memory, a number of additional differences should be noted: Since the speed
gap between caches and main memory is usually smaller than the gap between
main memory and disks, we are careful to also analyze the work performed
internally. The ratio between main memory size and first level cache size can be
much larger than that between disk space and internal memory. Therefore, we
will prefer algorithms which use the cache as economically as possible. Finally,
we also discuss the remaining levels of the memory hierarchy but only do that
informally in order to keep the analysis focussed on the most important aspects.
In Section 2 we present the basic algorithm for our sequence heaps data
structure for priority queues. (A priority queue is a data structure for representing a
totally ordered set which supports insertion of elements and deletion of the minimal
element.) The algorithm is then analyzed in Section 3 using
the external memory model. For some m in \Theta(M), k in \Theta(M/B) and R = \lceil log_k(I/m) \rceil, it
can perform I insertions and up to I
deleteMins using I(2R/B + O(1/k + (log k)/m)) I/Os and
I(log I + log m + O(1)) key comparisons. In another paper, we show that similar bounds
hold for cached memory with a-way associative caches if k is reduced by O(B^{1/a}).
In Section 4 we present refinements which take the other levels of the memory
hierarchy into account, ensure almost optimal memory efficiency and where the
amortized work performed for an operation depends only on the current queue
size rather than the total number of operations. Section 5 discusses an implementation
of the algorithm on several architectures and compares the results to
other priority queue data structures previously found to be efficient in practice,
namely binary heaps and 4-ary heaps.
Related Work
External memory algorithms are a well established branch of algorithmics (e.g.
[21, 20]). The external memory heaps of Teuhola and Wegner [23] and the fish-
spear data structure [9] need \Theta(B) less I/Os than traditional priority queues
like binary heaps. Buffer search trees [1] were the first external memory priority
queue to reduce the number of I/Os by another factor of \Theta(log(M/B)), matching
the lower bound of O((I/B) log_{M/B}(I/M)) I/Os for I operations (amortized).
But using a full-fledged search tree for implementing priority queues may be
considered wasteful. The heap-like data structures by Brodal and Katajainen,
Crauser et al. and Fadel et al. [3, 7, 8] are more directly geared to priority queues
and achieve the same asymptotic bounds, one [3] even per operation and not in
an amortized sense. Our sequence heap is very similar. In particular, it can be
considered a simplification and reengineering of the "improved array-heap" [7].
However, sequence heaps are more I/O-efficient by a factor of about three (or
more) than [1, 3, 7, 8] and need about a factor of two less memory than [1, 7, 8].
2 The Algorithm
Merging k sorted sequences into one sorted sequence (k-way merging) is an I/O
efficient subroutine used for sorting - both for external [14] and cached memory
[16]. The basic idea of sequence heaps is to adapt k-way merging to the related
but more dynamical problem of priority queues.
Let us start with the simple case, that at most km insertions take place where
m is the size of a buffer which fits into fast memory. Then the data structure
could consist of k sorted sequences of length up to m. We can use k-way merging
for deleting a batch of the m smallest elements from k sorted sequences. The
next m deletions can then be served from a buffer in constant time.
To allow an arbitrary mix of insertions and deletions, we maintain a separate
binary heap of size up to m which holds the recently inserted elements. Deletions
have to check whether the smallest element has to come from this insertion
buffer. When this buffer is full, it is sorted and the resulting sequence becomes
one of the sequences for the k-way merge.
Up to this point, sequence heaps and the earlier data structures [3, 7, 8] are
almost identical. Most differences are related to the question how to handle
more than km elements. We cannot increase m beyond M since the insertion
heap would not fit into fast memory. We cannot arbitrarily increase k since
eventually k-way merging would start to incur cache faults. Sequence heaps use
the approach to make room by merging all the k sequences producing a larger
sequence of size up to km [3, 7].
Now the question arises how to handle the larger sequences. We adopt the approach
used for improved array-heaps [7] to employ R merge groups G_1 , ..., G_R , where G_i
holds up to k sequences of size up to mk^{i-1} . When group G i overflows,
all its sequences are merged and the resulting sequence is put into group G i+1 .
Each group is equipped with a group buffer of size m to allow batched deletion
from the sequences. The smallest elements of these buffers are deleted in batches
of size m' <= m. They are stored in the deletion buffer. Fig. 1 summarizes the
data structure. We now have enough information to explain how deletion works:
Fig. 1. Overview of the data structure for sequence heaps: the insert heap, the groups
with their k-way merges and group buffers, the R-way merge, and the deletion buffer.
DeleteMin: The smallest elements of the deletion buffer and insertion buffer
are compared and the smaller one is deleted and returned. If this empties the
deletion buffer, it is refilled from the group buffers using an R-way merge. Before
the refill, group buffers with less than m' elements are refilled from the sequences
in their group (if the group is nonempty).
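The following C++ fragment sketches this deleteMin path under strong simplifications: only the buffers are modeled, refilling group buffers from their sequences and all I/O aspects are left out, and all names as well as the fixed batch size are illustrative.
#include <deque>
#include <functional>
#include <queue>
#include <vector>

struct SequenceHeapSketch {
    std::priority_queue<int, std::vector<int>, std::greater<int>> insertHeap;
    std::deque<int> deletionBuffer;
    std::vector<std::deque<int>> groupBuffers;   // one sorted buffer per group
    std::size_t batchSize = 4;                   // plays the role of m' in the text

    void refillDeletionBuffer() {
        while (deletionBuffer.size() < batchSize) {
            int best = -1;
            for (int i = 0; i < (int)groupBuffers.size(); ++i)       // R-way merge step
                if (!groupBuffers[i].empty() &&
                    (best == -1 || groupBuffers[i].front() < groupBuffers[best].front()))
                    best = i;
            if (best == -1) break;                                   // all group buffers empty
            deletionBuffer.push_back(groupBuffers[best].front());
            groupBuffers[best].pop_front();
            // A full implementation would refill group buffers from their sequences here.
        }
    }
    bool deleteMin(int& out) {
        if (deletionBuffer.empty()) refillDeletionBuffer();
        bool haveBuf = !deletionBuffer.empty(), haveIns = !insertHeap.empty();
        if (!haveBuf && !haveIns) return false;                      // queue is empty
        if (!haveBuf || (haveIns && insertHeap.top() < deletionBuffer.front())) {
            out = insertHeap.top(); insertHeap.pop();                // minimum is a recent insertion
        } else {
            out = deletionBuffer.front(); deletionBuffer.pop_front();
        }
        return true;
    }
};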
DeleteMin works correctly provided the data structure fulfills the heap prop-
erty, i.e., elements in the group buffers are not smaller than elements in the
deletion buffer, and, in turn, elements in a sorted sequence are not smaller than
the elements in the respective group buffer. Maintaining this invariant is the
main difficulty for implementing insertion:
Insert: New elements are inserted into the insert heap. When its size reaches m
its elements are sorted (e.g. using merge sort or heap sort). The result is then
merged with the concatenation of the deletion buffer and the group buffer 1.
The smallest resulting elements replace the deletion buffer and group buffer 1.
The remaining elements form a new sequence of length at most m. The new
sequence is finally inserted into a free slot of group G 1 . If there is no free slot
initially, G 1 is emptied by merging all its sequences into a single sequence of size
at most km which is then put into G 2 . The same strategy is used recursively to
free higher groups when necessary. When group GR overflows, R is incremented
and a new group is created. When a sequence is moved from one group
to the other, the heap property may be violated. Therefore, when G 1 through G i
have been emptied, the group buffers 1 through i+1 are merged, and put into G 1 .
The latter measure is one of the few differences to the improved array heap
where the invariant is maintained by merging the new sequence and the group
buffer. This measure almost halves the number of required I/Os.
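A correspondingly simplified sketch of the insertion path is given below; again only group 1 is modeled, overflow into higher groups is omitted, and all names are illustrative rather than taken from the actual implementation.
#include <algorithm>
#include <deque>
#include <iterator>
#include <vector>

struct InsertPathSketch {
    std::size_t m = 8;                                 // insertion buffer capacity
    std::vector<int> insertBuffer;                     // stands in for the insertion heap
    std::deque<int> deletionBuffer, groupBuffer1;      // both kept sorted
    std::vector<std::vector<int>> group1Sequences;     // sorted sequences of group 1

    void insert(int key) {
        insertBuffer.push_back(key);
        if (insertBuffer.size() < m) return;
        std::sort(insertBuffer.begin(), insertBuffer.end());
        // Merge the sorted insertion buffer with deletion buffer + group buffer 1; by the
        // heap property every deletion-buffer element is <= every group-buffer element,
        // so their concatenation is already sorted.
        std::vector<int> buffers(deletionBuffer.begin(), deletionBuffer.end());
        buffers.insert(buffers.end(), groupBuffer1.begin(), groupBuffer1.end());
        std::vector<int> all;
        std::merge(buffers.begin(), buffers.end(),
                   insertBuffer.begin(), insertBuffer.end(), std::back_inserter(all));
        // The smallest elements replace the two buffers, the remainder becomes a new sequence.
        std::size_t nDel = deletionBuffer.size(), nGrp = groupBuffer1.size();
        deletionBuffer.assign(all.begin(), all.begin() + nDel);
        groupBuffer1.assign(all.begin() + nDel, all.begin() + nDel + nGrp);
        group1Sequences.emplace_back(all.begin() + nDel + nGrp, all.end());
        insertBuffer.clear();
        // A full implementation would empty group 1 into group 2 once its k slots are used up.
    }
};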
For cached memory, where the speed of internal computation matters, it is
also crucial how to implement the operation of k-way merging. We propose to
use the "loser tree" variant of the selection tree data structure described by
Knuth [14, Section 5.4.1]: When there are k 0 nonempty sequences, it consists
of a binary tree with k 0 leaves. Leaf i stores a pointer to the current element
of sequence i. The current keys of each sequence perform a tournament. The
winner is passed up the tree and the key of the loser and the index of its leaf
are stored in the inner node. The overall winner is stored in an additional node
above the root. Using this data structure, the smallest element can be identified
and replaced by the next element in its sequence using \lceil log k' \rceil comparisons.
This is less than the heap of size k assumed in [7, 8] would require. The address
calculations and memory references are similar to those needed for binary heaps
with the noteworthy difference that the memory locations accessed in the loser
tree are predictable which is not the case when deleting from a binary heap.
The instruction scheduler of the compiler can place these accesses well before
the data is needed thus avoiding pipeline stalls, in particular if combined with
loop unrolling.
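A compact, self-contained version of such a loser tree for merging integer sequences might look as follows. This is our own sketch, not the tuned implementation described in Section 5; exhausted sequences supply an infinite sentinel key, and at least one sequence is assumed.
#include <cstdio>
#include <limits>
#include <utility>
#include <vector>

class LoserTree {
public:
    explicit LoserTree(const std::vector<std::vector<int>>& seqs)
        : seqs_(seqs), pos_(seqs.size(), 0), k_((int)seqs.size()), tree_(seqs.size(), -1) {
        for (int i = 0; i < k_; ++i) siftUp(i);   // after this, tree_[0] holds the winner
    }
    bool empty() const { return key(tree_[0]) == INF; }
    int deleteMin() {
        int s = tree_[0];          // sequence holding the current minimum
        int result = key(s);
        ++pos_[s];                 // advance that sequence ...
        siftUp(s);                 // ... and replay only the matches on its leaf-to-root path
        return result;
    }
private:
    static constexpr int INF = std::numeric_limits<int>::max();
    int key(int s) const {
        return pos_[s] < (int)seqs_[s].size() ? seqs_[s][pos_[s]] : INF;
    }
    void siftUp(int s) {
        int winner = s;
        for (int n = (s + k_) / 2; n > 0; n /= 2) {
            if (tree_[n] == -1) { tree_[n] = winner; return; }   // empty slot during setup
            if (key(tree_[n]) < key(winner)) std::swap(tree_[n], winner);  // node keeps the loser
        }
        tree_[0] = winner;
    }
    std::vector<std::vector<int>> seqs_;
    std::vector<int> pos_;
    int k_;
    std::vector<int> tree_;        // tree_[0]: overall winner; tree_[1..k-1]: losers
};

int main() {
    LoserTree lt({{1, 4, 7}, {2, 5, 8}, {3, 6, 9}});
    while (!lt.empty()) std::printf("%d ", lt.deleteMin());   // prints 1 2 3 4 5 6 7 8 9
}
Note how replacing the minimum only replays the matches along a single leaf-to-root path, which is what makes the memory access pattern predictable.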
3 Analysis
We start with an analysis of the number of I/Os in terms of B, the parameters
and an arbitrary sequence of insert and deleteMin operations
with I insertions and up to I deleteMins. We continue with the number of key
comparisons as a measure of internal work and then discuss how k, m and m 0
should be chosen for external memory and cached memory respectively. Adaptions
for memory efficiency and many accesses to relatively small queues are
postponed to Section 4.
We need the following observation on the minimum intervals between tree
emptying operations in several places:
Lemma 1. Group G i can overflow at most every m(k - 1)k^{i-1} insertions.
Proof. The only complication is the slot in group G 1 used for invalid group
buffers. Nevertheless, when groups G 1 through G i contain k sequences each, this
can only happen if
insertions have taken
place.
In particular, since there is room for m insertions in the insertion buffer,
there is a very simple upper bound for the number of groups needed:
Corollary 1. R = \lceil log_k (I/m) \rceil groups suffice.
We analyze the number of I/Os based on the assumption that the following
information is kept in internal memory: The insert heap; the deletion buffer; a
merge buffer of size m; group buffers 1 and R; the loser tree data for groups
GR , GR\Gamma1 (we assume that k(B units of memory suffice to store the blocks
of the k sequences which are currently accessed and the loser tree information
itself); a corresponding amount of space shared by the remaining R \Gamma 2 groups
and data for merging the R group buffers. 2
Theorem 1. If R = \lceil log_k (I/m) \rceil then I(2R/B + O(1/k + (log k)/m))
I/Os suffice to perform any sequence of I inserts and up to I deleteMins on
a sequence heap.
Proof. Let us first consider the I/Os performed for an element moving on the
following canonical data path: It is first inserted into the insert buffer and then
written to a sequence in group G 1 in a batched manner, i.e, we charge 1=B I/Os
to the insertion of this element. Then it is involved in emptying groups until it
arrives in group GR . For each emptying operation it is involved into one batched
read and one batched write, i.e., we charge 2(R \Gamma 1)=B I/Os for tree emptying
operations. Eventually, it is read into group buffer R. We charge 1=B I/Os for
this. All in all, we get a charge of 2R=B I/Os for each insertion.
What remains to be shown is that the remaining I/Os only contribute lower
order terms or replace I/Os done on the canonical path. When an element travels
through group GR\Gamma1 then 2=B I/Os must be charged for writing it to group
buffer later reading it when refilling the deletion buffer. However, the
2=B I/Os saved because the element is not moved to group GR can pay for this
charge. When an element travels through group buffer i - R \Gamma 2, the additional
saved compared to the canonical path can also pay for the cost
of swapping loser tree data for group G i . The latter costs 2k(B
which can be divided among at least removed in
one batch.
When group buffer i - 2 becomes invalid so that it must be merged with
other group buffers and put back into group G 1 , this causes a direct cost of
O(m=B) I/Os and we must charge a cost of O(im=B) I/Os because these elements
are thrown back O(i) steps on their path to the deletion buffer. Although
an element may move through all the R groups we do not need to charge
O(Rm=B) I/Os for small i since this only means that the shortcut originally
taken by this element compared to the canonical path is missed. The remaining
overhead can be charged to the m(k \Gamma 1)k j \Gamma2 insertions which have filled
group G i\Gamma1 . Summing over all groups, each insertions gets an additional charge
of
Similarly, invalidations of group
buffer 1 give a charge O(1=k) per insertion.
We need O(log inserting a new sequence into the loser tree data
structure. When done for tree 1, this can be amortized over m insertions. For
can be amortized over m(k Lemma 1. For an
2 If we accept O(1=B) more I/Os per operation it would suffice to swap between the
insertion buffer plus a constant number of buffer blocks and one loser tree with k
sequence buffers in internal memory.
element moving on the canonical path, we get an overall charge of O((log k)/m).
Overall we get a charge of 2R/B + O(1/k + (log k)/m) I/Os per insertion.
We now estimate the number of key comparisons performed. We believe this
is a good measure for the internal work since in efficient implementations of
priority queues for the comparison model, this number is close to the number
of unpredictable branch instructions (whereas loop control branches are usually
well predictable by the hardware or the compiler) and the number of key comparisons
is also proportional to the number of memory accesses. These two types
of operations often have the largest impact on the execution time since they are
the most severe limit to instruction parallelism in a super-scalar processor. In
order to avoid notational overhead by rounding, we also assume that k and m
are powers of two and that I is divisible by mk R\Gamma1 . A more general bound would
only be larger by a small additive term.
Theorem 2. With the assumptions from Theorem 1 at most I(log I + \lceil log R \rceil +
log m + 4 + m'/m + O((log k)/k)) key comparisons are needed. For average case
inputs "log m" can be replaced by O(1).
Proof. Insertion into the insertion buffer takes log m comparisons at worst and
O(1) comparisons on the average. Every deleteMin operation requires a comparison
of the minimum of the insertion buffer and the deletion buffer. The
remaining comparisons are charged to insertions in an analogous way to the
proof of Theorem 1. Sorting the insertion buffer (e.g. using merge sort) takes
log m comparisons and merging the result with the deletion buffer and group
buffer 1 takes comparisons. Inserting the sequence into a loser tree
takes O(log comparisons. Emptying groups takes (R \Gamma 1) log k +O(R=k) comparisons
per element. Elements removed from the insertion buffer take up to
log m comparisons. But those need not be counted since we save all further
comparisons on them. Similarly, refills of group buffers other than R have already
been accounted for by our conservative estimate on group emptying cost.
Group GR only has degree I=(mk me comparisons
per element suffice. Using similar arguments as in the proof of Theorem 1
it can be shown that inserting sequences into the loser trees leads to a charge
of O((log k)=m) comparisons per insertion and invalidating group buffers costs
O((log k)=k) comparisons per insertion. Summing all the charges made yields
the bound to be proven.
For external memory one would choose In
another paper we show that k should be a factor O(B^{1/a} /\delta) smaller on a-way
associative caches in order to limit the number of cache faults to (1 + \delta) times the
number of I/Os performed by the external memory algorithm. This requirement
together with the small size of many first level caches and TLBs 3 explains why
3 Translation Look-aside Buffers store the physical position of the most recently used
virtual memory pages.
we may have to live with a quite small k. This observation is the main reason
why we did not pursue the simple variant of the array heap described in [7] which
needs only a single merge group for all sequences. This merge group would have
to be about a factor R larger however.
4 Refinements
Memory Management: A sequence heap can be implemented in a memory efficient
way by representing sequences in the groups as singly linked lists of memory
pages. Whenever a page runs empty, it is pushed on a stack of free pages. When
a new page needs to be allocated, it is popped from the stack. If necessary, the
stack can be maintained externally except for a single buffer block. Using pages of
size p, the external sequences of a sequence heap with R groups and N elements
occupy at most N + kpR memory cells. Together with the measures described
above for keeping the number of groups small, this becomes N + kp log_k (N/m). A
page size of m is particularly easy to implement since this is also the size of the
group buffers and the insertion buffer. As long as guarantees
asymptotically optimal memory efficiency, i.e., a memory requirement of
Many Operations on Small Queues: Let N i denote the queue size before the i-th
operation is executed. In the earlier algorithms [3, 7, 8] the number of I/Os is
bounded by O(\sum_{i \le I} log_k (N_i /m)). For certain classes of inputs,
\sum_{i \le I} log_k (N_i /m)
can be considerably less than I log_k (I/m). However, we believe that for most applications
which require large queues at all, the difference will not be large enough
to warrant significant constant factor overheads or algorithmic complications.
We have therefore chosen to give a detailed analysis of the basic algorithm first
and to outline an adaption yielding the refined asymptotic bound here: Similar
to [7], when a new sequence is to be inserted into group G i and there is no free
slot, we first look for two sequences in G i whose sizes sum to less than mk^{i-1}
elements. If found, these sequences are merged, yielding a free slot. The merging
cost can be charged to the deleteMins which caused the sequences to get so
small. Now G i is only emptied when it contains at least mk^i /2 elements and
the I/Os involved can be charged to elements which have been inserted when G i
had at least size mk^{i-1} /4. Similarly, we can "tidy up" a shrinking queue: When
there are R groups and the total size of the queue falls below mk
group GR and insert the resulting sequence into group GR\Gamma1 (if there is no free
slot in group GR\Gamma1 merge any two of its sequences first).
Registers and Instruction Cache: In all realistic cases we have R - 4 groups.
Therefore, instruction cache and register file are likely to be large enough to
efficiently support a fast R-way merge routine for refilling the deletion buffer
which keeps the current keys of each stream in registers.
Second Level Cache: So far, our analysis assumes only a single cache level. Still,
if we assume this level to be the first level cache, the second level cache may have
some influence. First, note that the group buffers and the loser trees with their
group buffers are likely to fit in second level cache. The second level cache may
also be large enough to accommodate all of group G 1 reducing the costs for 2=B
I/Os per insert. We get a more interesting use for the second level cache if we
assume its bandwidth to be sufficiently high to be no bottleneck and then look
at inputs where deletions from the insertion buffer are rare (e.g. sorting). Then
we can choose is the size of the second level cache. Insertions
have high locality if the log m cache lines currently accessed by them fit into first
level cache and no operations on deletion buffers and group buffers use random
access.
High Bandwidth Disks: When the sequence heap data structure is viewed as
a classical external memory algorithm we would simply use the main memory
size for M . But our measurements in Section 5 indicate that large binary heaps
as an insertion buffer may be too slow to match the bandwidth of fast parallel
disk subsystems. In this case, it is better to modify a sequence heap optimized
for cache and main memory by using specialized external memory implementations
for the larger groups. This may involve buffering of disk blocks, explicit
asynchronous I/O calls and perhaps prefetching code and randomization for supporting
parallel disks [2]. Also, the number of I/Os may be reduced by using a
larger k inside these external groups. If this degrades the performance of the
loser tree data structure too much, we can insert another heap level, i.e., split
the high degree group into several low degree groups connected together over
sufficiently large level-2 group buffers and another merge data structure.
Deletions of non-minimal elements can be performed by maintaining a separate
sequence heap of deleted elements. When on a deleteMin, the smallest element
of the main queue and the delete-queue coincide, both are discarded. Hereby,
insertions and deletions cost only one comparison more than before, if we charge
a delete for the costs of one insertion and two deleteMins (note that the latter
are much cheaper than an insertion). Memory overhead can be kept in bounds
by completely sorting both queues whenever the size of the queue of deleted
elements exceeds some fraction of the size of the main queue. During this sorting
operation, deleted keys are discarded. The resulting sorted sequence can be put
into group GR . All other sequences and the deletion heap are empty then.
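The lazy-deletion idea can be sketched with two ordinary priority queues; we show it here with STL heaps rather than sequence heaps, and the periodic clean-up by sorting both queues is omitted.
#include <functional>
#include <queue>
#include <vector>

class LazyDeletePQ {
    std::priority_queue<int, std::vector<int>, std::greater<int>> main_, deleted_;
public:
    void insert(int key) { main_.push(key); }
    void erase(int key)  { deleted_.push(key); }      // key is assumed to be in the queue
    bool deleteMin(int& out) {
        // Discard elements whose deletion was recorded earlier.
        while (!main_.empty() && !deleted_.empty() && main_.top() == deleted_.top()) {
            main_.pop(); deleted_.pop();
        }
        if (main_.empty()) return false;
        out = main_.top(); main_.pop();
        return true;
    }
};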
5 Implementation and Experiments
We have implemented sequence heaps as a portable C++ template class for arbitrary
key-value-pairs. Currently, sequences are implemented as a single array.
The performance of our sequence heap mainly stems from an efficient implementation
of the k-way merge using loser trees, special routines for 2-way, 3-way and
4-way merge and binary heaps for the insertion buffer. The most important optimizations
turned out to be (roughly in this order): making life easy for the compiler;
use of sentinels, i.e., dummy elements at the ends of sequences and heaps
which save special case tests; loop unrolling.
5.1 Choosing Competitors
When an author of a new code wants to demonstrate its usefulness experimen-
tally, great care must be taken to choose a competing code which uses one of
the best known algorithms and is at least equally well tuned. We have chosen
implicit binary heaps and aligned 4-ary heaps. In a recent study [15], these two
algorithms outperform the pointer based data structures splay tree and skew
heap by more than a factor two although the latter two performed best in an
older study [12]. Not least because we need the same code for the insertion
buffer, binary heaps were coded perhaps even more carefully than the remaining
components - binary heaps are the only part of the code for which we took care
that the assembler code contains no unnecessary memory accesses, redundant
computations and a reasonable instruction schedule. We also use the bottom
up heuristics for deleteMin: Elements are first lifted up on a min-path from
the root to a leaf, the leftmost element is then put into the freed leaf and is
finally bubbled up. Note that binary heaps with this heuristics perform only
log N + O(1) comparisons for an insertion plus a deleteMin on the average
which is close to the lower bound. So in flat memory it should be hard to find a
comparison based algorithm which performs significantly better for average case
inputs. For small queues our binary heaps are about a factor two faster than a
more straightforward non-recursive adaption of the textbook formulation used
by Cormen, Leiserson and Rivest [5].
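For illustration, a deleteMin with the bottom-up heuristics on an implicit binary min-heap could be written as follows; this sketch uses the last heap element and no sentinels, so it is not the tuned code referred to above.
#include <vector>

// h is stored 1-based (h[1] is the minimum, h[0] is unused); expects h.size() >= 2.
// The hole left by the minimum first travels down along the path of smaller children
// to a leaf; the last heap element is then placed there and bubbled up.
int bottomUpDeleteMin(std::vector<int>& h) {
    int result = h[1];
    int last = h.back();
    h.pop_back();
    int n = (int)h.size() - 1;                 // new number of elements
    if (n == 0) return result;                 // the heap contained a single element
    int hole = 1;
    while (2 * hole <= n) {                    // sift the hole down along the min-path
        int child = 2 * hole;
        if (child + 1 <= n && h[child + 1] < h[child]) ++child;
        h[hole] = h[child];
        hole = child;
    }
    while (hole > 1 && last < h[hole / 2]) {   // bubble the last element up from the leaf
        h[hole] = h[hole / 2];
        hole /= 2;
    }
    h[hole] = last;
    return result;
}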
Aligned 4-ary heaps have been developed at the end using the same basic
approach as for binary heaps, in particular, the bottom up heuristics is also used.
The main difference is that the data gets aligned to cache lines and that more
complex index computations are needed.
All source codes are available electronically under http://www.mpi-sb.mpg.
de/~sanders/programs/.
5.2 Basic Experiments
Although the programs were developed and tuned on SPARC processors, sequence
heaps show similar behavior on all recent architectures that were available
for measurements. We have run the same code on a SPARC, MIPS, Alpha
and Intel processor. It even turned out that a single parameter setting -
works well for all these machines. 4 Figures 2, 3, 4 and 5
respectively show the results.
All measurements use random values. For a
maximal heap size of N , the operation sequence (insert deleteMin insert)^N
(deleteMin insert deleteMin)^N is executed. We normalize the amortized execution
time per insert-deleteMin pair, T/(6N), by dividing by log N . Since
all algorithms have a "flat memory" execution time of c log N + O(1) for some
constant c, we would expect that the curves have a hyperbolic form and converge
4 By tuning k and m, performance improvements around 10 % are possible, e.g., for
the Ultra and the PentiumII, are better.
[Figs. 2-5 plot the normalized time (T(deleteMin) + T(insert))/log N against the heap size N,
comparing bottom up binary heaps, bottom up aligned 4-ary heaps, and sequence heaps.]
Fig. 2. Performance on a Sun Ultra-10 desktop workstation with 300 MHz Ultra-
processor (1st-level cache: ...), using Sun Workshop C++ 4.2 with options -fast -O4.
Fig. 3. Performance on a 180 MHz MIPS R10000 processor. Compiler: CC -r10000.
Fig. 4. Performance on a 533 MHz DEC-Alpha-21164 processor. Compiler: g++ -O6.
Fig. 5. Performance on a 300 MHz Intel Pentium II processor. Compiler: g++ -O6.
to a constant for large N . The values shown are averages over at least 10 trials.
(More for small inputs to avoid problems due to limited clock resolution.) In order
to minimize the impact of other processes and virtual memory management,
a warm-up run is made before each measurement and the programs are run on
(almost) unloaded machines.
Sequence heaps show the behavior one would expect for flat memory - cache
faults are so rare that they do not influence the execution time very much. In
Section 5.4, we will see that the decrease in the "time per comparison" is not
quite so strong for other inputs.
On all machines, binary heaps are equally fast or slightly faster than sequence
heaps for small inputs. While the heap still fits into second level cache,
the performance remains rather stable. For even larger queues, the performance
degradation accelerates. Why is the "time per comparison" growing about linearly
in log n? This is easy to explain. Whenever the queue size doubles, there is
another layer of the heap which does not fit into cache, contributing a constant
number of cache faults per deleteMin. For sequence heaps are between
2.1 and 3.8 times faster than binary heaps.
We consider this difference to be large enough to be of considerable practical
interest. Furthermore, the careful implementation of the algorithms makes it
unlikely that such a performance difference can be reversed by tuning or use of
a different compiler. 5 (Both binary heaps and sequence heaps could be slightly
improved by replacing index arithmetics by arithmetics with address offsets.
This would save a single register-to-register shift instruction per comparison
and is likely to have little effect on super-scalar machines.) Furthermore, the
satisfactory performance of binary heaps on small inputs shows that for large
inputs, most of the time is spent on memory access overhead and coding details
have little influence on this.
5.3 4-ary Heaps
The measurements in figures 2 through 5 largely agree with the most important
observation of LaMarca and Ladner [15]: since the number of cache faults is about
halved compared to binary heaps, 4-ary heaps have a more robust behavior for
large queues. Still, sequence heaps are another factor between 2.5 and 2.9 faster
for very large heaps since they reduce the number of cache faults even more.
However, the relative performance of our binary heaps and 4-ary heaps seems to
be a more complicated issue than in [15]. Although this is not the main concern
of this paper we would like to offer an explanation:
Although the bottom up heuristics improves both binary heaps and 4-ary
heaps, binary heaps profit much more. Now, binary heaps need less instead of
more comparisons than 4-ary heaps. Concerning other instruction counts, 4-ary
5 For example, in older studies, heaps and loser trees may have looked bad compared
to pointer based data structures if the compiler generates integer division operations
for halving an index or integer multiplications for array indexing.
heaps only save on memory write instructions while they need more complicated
index computations.
Apparently, on the Alpha which has the highest clock speed of the machines
considered, the saved write instructions shorten the critical path while the index
computations can be done in parallel to slow memory accesses (spill code).
On the other machines, the balance turns into the other direction. In partic-
ular, the Intel architecture lacks the necessary number of registers so that the
compiler has to generate a large number of additional memory accesses. Even
for very large queues, this handicap is never made up for.
The most confusing effect is the jump in the execution time of 4-ary heaps on
the SPARC for N > 2^20 . Nothing like this is observed on the other machines and
this effect is hard to explain by cache effects alone since the input size is already
well beyond the size of the second level cache. We suspect some problems with
virtual address translation which also haunted the binary heaps in an earlier
version.
5.4 Long Operation Sequences
Our worst case analysis predicts a certain performance degradation if the number
of insertions I is much larger than the size of the heap N . However, in Fig. 6 it
can be seen that the contrary can be true for random keys.
Fig. 6. Performance of sequence heaps using the same setup as in Fig. 2
but using different operation sequences of the form (insert (deleteMin insert)^s)^N
(deleteMin (insert deleteMin)^s)^N with s up to 16. For s = 0 we essentially
get heap-sort with some overhead for maintaining useless group and deletion
buffers. In Fig. 2 we used s = 1.
For a family of instances with I = 33N where the heap grows and shrinks
very slowly, we are almost two times faster than for I = N . The reason is that
new elements tend to be smaller than most old elements (the smallest of the old
elements have long been removed before). Therefore, many elements never make
it into group G 1 let alone the groups for larger sequences. Since most work is
performed while emptying groups, this work is saved. A similar locality effect has
been observed and analyzed for the Fishspear data structure [9]. Binary heaps or
4-ary heaps do not have this property. (They even seem to get slightly slower.)
For locality effect cannot work. So that these instances should come
close to the worst case.
6 Discussion
Sequence heaps may currently be the fastest available data structure for large
comparison based priority queues both in cached and external memory. This is
particularly true, if the queue elements are small and if we do not need deletion
of arbitrary elements or decreasing keys. Our implementation approach, in
particular k-way merging with loser trees can also be useful to speed up sorting
algorithms in cached memory.
In the other cases, sequence heaps still look promising but we need experiments
encompassing a wider range of algorithms and usage patterns to decide
which algorithm is best. For example, for monotonic queues with integer keys,
radix heaps look promising. Either in a simplified, average case efficient form
known as calendar queues [4] or by adapting external memory radix heaps [6] to
cached memory in order to reduce cache faults.
We have outlined how the algorithm can be adapted to multiple levels of
memory and parallel disks. On a shared memory multiprocessor, it should also be
possible to achieve some moderate speedup by parallelization (e.g. one processor
for the insertion and deletion buffer and one for each group when refilling group
buffers; all processors collectively work on emptying groups).
Acknowledgements
I would like to thank Gerth Brodal, Andreas Crauser, Jyrki Katajainen and Ulrich
Meyer for valuable suggestions. Ulrich Rüde from the University of Augsburg
provided access to an Alpha processor.
--R
The buffer tree: A new technique for optimal I/O-algorithms
Simple randomized mergesort on parallel disks.
Calendar queues: A fast O(1) priority queue implementation for the simulation event set problem.
Introduction to Algorithms.
On the performance of LEDA-SM
Efficient priority queues in external memory.
External heaps combined with effective buffering.
A priority queue algorithm.
Computer Architecture a Quantitative Ap- proach
An empirical comparison of priority-queue and event set implementa- tions
The 21264: A superscalar alpha processor with out-of-order execution
The Art of Computer Programming - Sorting and Searching
The influence of caches on the performance of heaps.
The influence of caches on the performance of sorting.
First draft of a report on the EDVAC.
TPIE User Manual and Reference
External memory algorithms.
Algorithms for parallel memory I: Two level memories.
The external heapsort.
--TR
An empirical comparison of priority-queue and event-set implementations
Calendar queues: a fast 0(1) priority queue implementation for the simulation event set problem
The External Heapsort
BOTTOM-UP-HEAPSORT, a new variant of HEAPSORT beating, on an average, QUICKSORT (if <italic>n</italic> is not very small)
Fishspear: a priority queue algorithm
The influence of caches on the performance of heaps
Simple randomized mergesort on parallel disks
The influence of caches on the performance of sorting
Worst-Case External-Memory Priority Queues
The Buffer Tree
Accessing Multiple Sequences Through Set Associative Caches
An Experimental Study of Priority Queues in External Memory
External Memory Algorithms
--CTR
Peter Sanders, Presenting data from experiments in algorithmics, Experimental algorithmics: from algorithm design to robust and efficient software, Springer-Verlag New York, Inc., New York, NY, 2002
Roman Dementiev , Peter Sanders, Asynchronous parallel disk sorting, Proceedings of the fifteenth annual ACM symposium on Parallel algorithms and architectures, June 07-09, 2003, San Diego, California, USA
James D. Fix, The set-associative cache performance of search trees, Proceedings of the fourteenth annual ACM-SIAM symposium on Discrete algorithms, January 12-14, 2003, Baltimore, Maryland
Joon-Sang Park , Michael Penner , Viktor K. Prasanna, Optimizing Graph Algorithms for Improved Cache Performance, IEEE Transactions on Parallel and Distributed Systems, v.15 n.9, p.769-782, September 2004
Bernard M. E. Moret , David A. Bader , Tandy Warnow, High-Performance Algorithm Engineering for Computational Phylogenetics, The Journal of Supercomputing, v.22 n.1, p.99-111, May 2002
Gerth Stlting Brodal , Rolf Fagerberg , Kristoffer Vinther, Engineering a cache-oblivious sorting algorithm, Journal of Experimental Algorithmics (JEA), 12, 2007 | implementation;data structure;external memory;priority queue;cache;cache efficiency;multi way merging;secondary storage |
500150 | Motion-based segmentation and contour-based classification of video objects. | The segmentation of objects in video sequences constitutes a prerequisite for numerous applications ranging from computer vision tasks to second-generation video coding.We propose an approach for segmenting video objects based on motion cues. To estimate motion we employ the 3D structure tensor, an operator that provides reliable results by integrating information from a number of consecutive video frames. We present a new hierarchical algorithm, embedding the structure tensor into a multiresolution framework to allow the estimation of large velocities.The motion estimates are included as an external force into a geodesic active contour model, thus stopping the evolving curve at the moving object's boundary. A level set-based implementation allows the simultaneous segmentation of several objects.As an application based on our object segmentation approach we provide a video object classification system. Curvature features of the object contour are matched by means of a curvature scale space technique to a database containing preprocessed views of prototypical objects.We provide encouraging experimental results calculated on synthetic and real-world video sequences to demonstrate the performance of our algorithms. | INTRODUCTION
Video object segmentation is required by numerous applications
ranging from high-level vision tasks to second-generation
video coding [25]. The MPEG-4 video coding
standard [10] provides functionality for object-based video
coding. Video information can be encoded in a number of
arbitrarily shaped video object planes.
Automatic content analysis and indexing methods can
benefit from object segmentation algorithms. For instance,
it is possible to summarize videos based on the occurrence
and activities of video objects [14].
Algorithms for high-level vision tasks such as shape-based
object recognition [26, 19, 2] depend on information with
regard to object outlines.
We propose an approach to segmenting video objects based
on motion cues. Motion estimation is performed by estimating
local orientations in a spatio-temporal neighborhood
with the 3D structure tensor. Thus, information from a
number of consecutive frames is exploited. We present a new
hierarchical algorithm that embeds the tensor-based motion
estimation into a multiresolution framework to allow the calculation
of large displacements. The final segmentation is
performed by a geodesic active contour model, enabling the
simultaneous detection of multiple objects.
Furthermore, we provide a video object classification system
that categorizes the segmented video objects into several
object classes (e.g. cars, people). This classification
system matches curvature features of the object contour to
a database containing preprocessed views of prototypical objects.
The remainder of the paper is organized as follows: After
summarizing related work, Section 3 describes our segmentation
approach. Section 4 introduces the video object clas-
sification system. Section 5 presents experimental results.
Finally, Section 6 offers concluding remarks.
2. RELATED WORK
Various approaches have been proposed in the field of motion
estimation and segmentation. A number of optical flow
techniques are reviewed in [3, 4, 18].
Mech and Wollborn [16] estimate a change detection mask
by employing a local thresholding relaxation technique. Regions
of uncovered background are removed from this mask
by using a displacement vector field.
Figure 1: From left to right: (a) Frame 10 from the taxi sequence, (b) optical
flow with Lucas Kanade, (c) optical flow with 3D structure tensor.
In [13] an edge map is calculated from the inter-frame difference
image using the Canny edge detector [6]. The edge
map, containing edge pixels from both frames, is compared
to the edge map of the current frame and to a background
reference frame. The final segmentation is achieved by morphological
operators and an additional filling algorithm.
Meier and Ngan [17] propose two approaches. First, they
combine an optical flow field with a morphological operator.
Second, they employ a connected component analysis on the
observed inter-frame difference in conjunction with a filling
procedure.
Paragios and Deriche [21] propose a statistical framework
based on Gaussian and Laplacian law to detect the moving
object's boundary in combination with boundaries obtained
from the current frame. They integrate the motion detection
and the tracking problems into a geodesic active contour
model.
In object classification, contour-based techniques have been
under study for a long time. Overviews can be found in [22,
15, 8].
One of the more promising contour analysis techniques
is the curvature scale space method (CSS) introduced by
Mokhtarian [20, 19] for still images. Here, the contour of an
already segmented object is compared to a database containing
representations of preprocessed objects. The technique
does not depend on size or rotation angle and is robust to
noise. In [2] a modified CSS technique can be found. Re-
cently, Richter et al. [23] extended the CSS technique to
include the classification of video objects.
3. VIDEO OBJECT SEGMENTATION
In addition to the color and texture information already
available in still images, a video sequence provides temporal
information. While it is hard to extract semantically
meaningful objects based on color and texture cues only,
motion cues facilitate the segregation of objects from the
background.
Consequently, the first step in our approach is to choose
an appropriate motion detector. Various methods have been
proposed to estimate motion [3, 4, 18]. However, most of
them determine motion parameters on the basis of only
two consecutive frames. Hence, these techniques are sensitive
to noise and require appropriate compensation methods.
Figure 1 illustrates this observation for the classical
Lucas Kanade algorithm. We calculated the optical flow for
frame 10 of the taxi sequence with the Lucas Kanade implementation
used in [3]. The parameters were set to
and 2 > 5 (see [3] for details), motion vectors shorter than
pixel/frame are suppressed in the figure. The taxi sequence
contains four moving objects: the taxi in the mid-
dle, a car on the left, a van on the right and a pedestrian
in the upper left corner. While the motion for the three
main objects is calculated reliably, several misclassifications
occur due to noise in the background. Note that the result
can be improved significantly by preprocessing the sequence
with a 3D Gaussian smoothing filter. However, a drawback
to pre-smoothing is the elimination of small structures, e.g.
the pedestrian in the upper left corner of the taxi sequence
cannot be detected.
In our approach, we employ the 3D structure tensor to
analyze motion [5]. Here, motion vectors are calculated by
estimating local orientations in the spatio-temporal domain.
Figure 1(c) depicts the result for the structure tensor. Here,
background noise is eliminated without pre-filtering and reliable
motion detection is possible. Even small structures
like the pedestrian are identified.
In the following section we describe the structure tensor
technique. Then we present a new algorithm that embeds
the approach into a multiresolution framework to allow the
detection of large velocities.
3.1 Tensor-based Motion Estimation
Within consecutive frames stacked on top of each other,
a video sequence can be represented as a three-dimensional
volume with one temporal (z) and two spatial (x; y) coor-
dinates. From this perspective, motion can be estimated
by analyzing orientations of local gray value structures [5].
Assuming that illumination does not vary, gray values remain
constant in the direction of motion. Thus, stationary
parts of a scene result in lines of equal gray values in parallel
to the time axis. Moving objects, however, cause iso-gray-
value lines of different orientations. Figure 2 illustrates this
observation.
Consequently, moving and static parts on the image plane
can be determined from the direction of minimal gray value
change in the spatio-temporal volume. This direction can be
calculated as the direction n being as much perpendicular
to all gray value gradients in a 3D local neighborhood \Omega as possible.
Thus, for each pixel at position (x, y, z) we seek the unit vector n minimizing
\int_{\Omega} (\nabla_3 I \cdot n)^2 d\Omega,   (1)
where \nabla_3 := (\partial_x, \partial_y, \partial_z) denotes the spatio-temporal gradient,
I the three-dimensional volume and \Omega a 3D neighborhood
around the pixel at position (x, y, z).
Figure 2: Local orientation of image structures.
Left: Frame 169 (top) and frame 39 (bottom) of
the "hall and monitor" sequence. Right: Slice of
the corresponding spatio-temporal volume taken at
the horizontal line marked by the white lines in the
single frames.
As described in [5, 9], minimizing Equation 1 is equivalent
to determining the eigenvector to the minimum eigenvalue
of the 3D structure tensor
        | Jxx  Jxy  Jxz |
    J = | Jxy  Jyy  Jyz |   (2)
        | Jxz  Jyz  Jzz |
where the components J_pq (p, q in {x, y, z}) are calculated within a local neigh-
borhood \Omega from
J_pq = \int_{\Omega} (\partial_p I) (\partial_q I) d\Omega.   (3)
By analyzing the three eigenvalues \lambda_1 >= \lambda_2 >= \lambda_3 >= 0
of the 3 x 3 symmetric matrix, we can classify the local
neighborhood's motion. In general, an eigenvalue \lambda_i > 0
indicates that the gray values change in the direction of the
corresponding eigenvector e i . Figure 3 illustrates the relationship
between local structures, eigenvalues, and eigen-vectors
in the two-dimensional case. Consider, for instance,
case 1, where the local neighborhood \Omega is centered over a
horizontal structure. The gray values within this neighborhood
change in one direction. Consequently, \lambda_1 > 0, \lambda_2 = 0,
and the eigenvector e1 gives the direction of the gray value
change.
Within the context of a three-dimensional neighborhood
the following observations can be made. All three eigen-values
equal to zero indicate an area of constant gray val-
ues, therefore no motion can be detected. If \lambda_1 > 0 and
\lambda_2 = \lambda_3 = 0, the gray values change only in one direction. This
corresponds to a horizontal (or vertical) structure moving
with constant velocity. Consequently, due to the correspondence
problem we can only calculate normal velocity.
Real motion can be calculated if gray values remain constant
in only one direction, hence, \lambda_1 > 0, \lambda_2 > 0 and
\lambda_3 = 0. This case occurs when a structure containing gray value
changes in two directions moves at constant speed.
Finally, if all three eigenvalues are greater than zero, we
cannot determine the optical flow due to noise.
Figure 3: Local structures, eigenvalues, and eigenvectors
in two dimensions. Case 1 (case 3): horizontal
(vertical) structure, gray values change in
one direction, i.e., \lambda_1 > 0, \lambda_2 = 0; case 2: corner,
gray values change in more than one direction, i.e.,
\lambda_1 > 0 and \lambda_2 > 0.
In real-world video sequences, however, it is impractical to compare the eigenvalues to zero, since due to noise in the sequence small gray value changes always occur. Thus, we introduce normalized coherence measures c_t and c_s that quantify the certainty of the calculations. The coherence measure c_t indicates whether a reliable motion calculation is possible and is defined by

c_t = \begin{cases} 0, & \lambda_1 - \lambda_3 \approx 0 \\ \exp\!\big(-C / (\lambda_1 - \lambda_3)^2\big), & \text{else} \end{cases}   (4)

where C > 0 denotes a contrast parameter. Areas with \lambda_1 - \lambda_3 \approx 0 are regarded as almost constant local neighborhoods [27]. A value of c_t near 1.0 indicates that \lambda_1 \gg \lambda_3, therefore, a reliable motion calculation can be performed. The opposite is true if the c_t value approaches zero.
The coherence measure c_s,

c_s = \begin{cases} 0, & \lambda_2 - \lambda_3 \approx 0 \\ \exp\!\big(-C / (\lambda_2 - \lambda_3)^2\big), & \text{else} \end{cases}   (5)

provides information whether normal or real motion can be determined. Values near 1.0 allow the calculation of real motion. Otherwise only normal velocities can be specified.
As depicted in Figure 1(c), the structure tensor allows reliable motion calculations and suppresses background noise due to the integration of several consecutive frames. The number of frames used in the motion calculation is determined by the size of the neighborhood \Omega. Setting \Omega to 7 x 7 x 7, for instance, means that the motion calculation for each pixel is performed within a spatio-temporal area of 7 x 7 x 7 pixels.
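The following sketch illustrates how the tensor of Equation 2 and the coherence idea of Equation 4 can be computed in practice. It is a minimal illustration, not the authors' implementation: the function names, the box-filter approximation of the integration over the neighborhood, and the default parameters are assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def structure_tensor_eigenvalues(volume, size=7):
    """Eigenvalues of the 3D structure tensor for a (z, y, x) gray-value volume.

    Returns an array of shape volume.shape + (3,) with eigenvalues sorted in
    descending order (lambda1 >= lambda2 >= lambda3) per pixel.
    """
    # Spatio-temporal gradient via central differences along z, y, x.
    Iz, Iy, Ix = np.gradient(volume.astype(np.float64))

    # J_pq: products of derivatives averaged over the local neighborhood;
    # the box filter stands in for the integration over Omega.
    def smooth(a):
        return uniform_filter(a, size=size)

    Jxx, Jyy, Jzz = smooth(Ix * Ix), smooth(Iy * Iy), smooth(Iz * Iz)
    Jxy, Jxz, Jyz = smooth(Ix * Iy), smooth(Ix * Iz), smooth(Iy * Iz)

    # Assemble the symmetric 3x3 tensor per pixel and diagonalize it.
    J = np.stack([
        np.stack([Jxx, Jxy, Jxz], axis=-1),
        np.stack([Jxy, Jyy, Jyz], axis=-1),
        np.stack([Jxz, Jyz, Jzz], axis=-1),
    ], axis=-2)
    lam = np.linalg.eigvalsh(J)      # ascending order
    return lam[..., ::-1]            # descending order

def coherence_ct(lam, C=5.0, eps=1e-12):
    """Temporal coherence analogous to Equation 4 (0 = unreliable, ~1 = reliable)."""
    diff = lam[..., 0] - lam[..., 2]
    return np.where(diff > eps, np.exp(-C / (diff ** 2 + eps)), 0.0)
```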
3.2 Multiscale Motion Estimation
The motion detection approach described so far exhibits problems with sequences containing large velocities. This results from the fixed size of the local neighborhood \Omega. Consider an image feature that moves at high velocity. Consequently, it changes its position by a large displacement from one frame to the next. If the displacement exceeds the size of the local neighborhood, the motion of the feature cannot be detected.
To overcome this limitation we developed a new hierarchical algorithm that embeds the structure tensor technique in a linear scale-space framework. Hence, the calculations are performed in a coarse-to-fine manner.
First, a Gaussian pyramid of L levels is constructed from the video sequence. Let I^1(x, y, z) denote the original sequence of size (n_x^1 x n_y^1). Then, the coarser levels are constructed recursively, i.e., for l = 2, ..., L, the level I^l is calculated from I^{l-1} by spatial smoothing and spatial downsampling by a factor of two.
Then, for each position p^1 = (x, y) at the original resolution the optical flow vector is calculated. The calculations start at the coarsest level L. The position p^L within this level is determined by scaling p^1 down accordingly. Within a local neighborhood \Omega centered at the position p^L, the structure tensor J is calculated and the corresponding eigenvalues are evaluated as described in Section 3.1. If motion calculation is feasible, a motion vector v^L = (v^L_x, v^L_y) is determined at this position. Note that due to the subsampling procedure large displacements are reduced appropriately and therefore can be captured within the local neighborhood \Omega.
The motion vector v^L determined at the coarsest pyramid level L now serves as an initial guess g^{L-1} at the next pyramid level L-1. Since the spatial dimensions double from one level to the next, we adapt the initial guess accordingly, i.e., g^{L-1} = (2 v^L_x, 2 v^L_y). Thus, at this stage we know that at the corresponding position p^{L-1} an image feature moves roughly according to g^{L-1}.
The goal at level L-1 is now to refine the initial guess. This is done by (1) compensating for the motion vector g^{L-1} within the local neighborhood around p^{L-1} and (2) by calculating a displacement vector d^{L-1} on the modified neighborhood. Hence, the motion vector at this level, v^{L-1}, emerges from a combination of initial guess and displacement, v^{L-1} = g^{L-1} + d^{L-1}.
The motion vector v^{L-1} is used as the initial guess for the consecutive pyramid level and the algorithm repeats until the highest resolution is reached.
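The coarse-to-fine control flow can be summarized in a short sketch, shown below under stated assumptions: `estimate_flow` is a hypothetical placeholder for the motion-compensated structure-tensor estimation on one level, and the pyramid construction uses generic smoothing/zoom factors rather than the paper's exact parameters.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def gaussian_pyramid(volume, levels):
    """Spatial Gaussian pyramid of a (z, y, x) sequence; index 0 is full resolution."""
    pyramid = [volume.astype(np.float64)]
    for _ in range(1, levels):
        smoothed = gaussian_filter(pyramid[-1], sigma=(0, 1.0, 1.0))
        pyramid.append(zoom(smoothed, (1, 0.5, 0.5), order=1))
    return pyramid

def coarse_to_fine_flow(volume, estimate_flow, levels=4):
    """Coarse-to-fine refinement: v^l = g^l + d^l, with g^l derived from v^{l+1}.

    `estimate_flow(level_volume, guess)` must return a displacement field of
    shape (ny, nx, 2) for the given pyramid level.
    """
    pyramid = gaussian_pyramid(volume, levels)
    flow = None
    for level in reversed(range(levels)):
        ny, nx = pyramid[level].shape[1:]
        if flow is None:
            guess = np.zeros((ny, nx, 2))                   # coarsest level: no guess yet
        else:
            fy, fx = ny / flow.shape[0], nx / flow.shape[1]
            guess = 2.0 * zoom(flow, (fy, fx, 1), order=1)  # spatial dims double, so does the guess
        flow = guess + estimate_flow(pyramid[level], guess)  # refine by a displacement d^l
    return flow
```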
A crucial part of the algorithm is the motion compensation that must be performed on each level in order to allow the displacement calculation. Remember the calculation of the structure tensor elements, e.g. the element J_{xx}:

J_{xx} = \int_{\Omega} \big( \partial_x I(x', y', z') \big)^2 \, dx'\, dy'\, dz'.   (6)

Here, spatial derivatives are calculated within a spatio-temporal neighborhood around the position (x, y, z). If we consider \Omega = 3 x 3 x 3, patches from three frames of the video sequence, namely z-1, z, and z+1, are involved in the calculations. Consider now that an initial guess g = (g_x, g_y) for this local neighborhood is available from the previous pyramid level. Thus, to determine the additional displacement d, it is first necessary to compensate for this guess. Consequently, Equation 6 changes to

J_{xx} = \int_{\Omega} \big( \partial_x I(x' + (z'-z)\,g_x,\; y' + (z'-z)\,g_y,\; z') \big)^2 \, dx'\, dy'\, dz',   (7)

i.e., from frame z+1 a patch around position (x + g_x, y + g_y), from frame z-1 a patch around (x - g_x, y - g_y), and from frame z a patch around (x, y) is used. Obviously, g_x and g_y need not be integer values. Thus, bilinear interpolation is used to determine image values at the subpixel level. Accordingly, the other elements of the tensor J are calculated under motion compensation.
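As an illustration of the motion-compensated element of Equation 7, the sketch below warps the neighboring frames by the initial guess before differentiating and integrating. It is an assumption-laden sketch: a single scalar guess (gx, gy) per neighborhood, Sobel derivatives, and a box-filter integration are illustrative choices, not the authors' exact scheme.

```python
import numpy as np
from scipy.ndimage import map_coordinates, sobel, uniform_filter

def warp_frame(frame, gx, gy):
    """Sample `frame` at positions shifted by (gx, gy) using bilinear interpolation."""
    ny, nx = frame.shape
    yy, xx = np.mgrid[0:ny, 0:nx].astype(np.float64)
    return map_coordinates(frame, [yy + gy, xx + gx], order=1, mode='nearest')

def compensated_jxx(frames, gx, gy, size=3):
    """Motion-compensated J_xx over three frames (z-1, z, z+1), cf. Equation 7.

    frames: array of shape (3, ny, nx); (gx, gy): initial-guess displacement
    from the coarser pyramid level (may be non-integer).
    """
    prev, curr, nxt = frames.astype(np.float64)
    stack = np.stack([
        warp_frame(prev, -gx, -gy),   # frame z-1: patch around (x - gx, y - gy)
        curr,                         # frame z:   patch around (x, y)
        warp_frame(nxt, +gx, +gy),    # frame z+1: patch around (x + gx, y + gy)
    ])
    Ix = sobel(stack, axis=2)                       # spatial x-derivative on the compensated stack
    return uniform_filter(Ix * Ix, size=size)[1]    # integrate over 3x3x3, keep the plane of frame z
```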
The need for motion compensation and the use of bilinear interpolation techniques in the hierarchical algorithm clearly affect the performance of the whole method. In order to improve the efficiency it is useful to eliminate in advance those positions in [0, n_x^1] x [0, n_y^1] where presumably a reliable motion calculation is not possible.
Again, the structure tensor, used here in the spatial domain,

J' = \begin{pmatrix} J'_{xx} & J'_{xy} \\ J'_{xy} & J'_{yy} \end{pmatrix}   (8)

is a reliable indicator for this task. Remember that in the two-dimensional case (see Figure 3) the eigenvalues \lambda_1 and \lambda_2 provide information about the texturedness of the local neighborhood. Both eigenvalues larger than zero indicate a textured region. With respect to motion estimation it is probable that this region can be identified in the consecutive frame, too. Consequently, a full motion vector can be calculated. If only one eigenvalue is greater than zero, the area in question contains a horizontal or vertical structure. Therefore only motion in the direction of the gradient (normal motion) can be determined. On the other hand, a uniform region, i.e., one where no estimation of motion is possible, results in \lambda_1 \approx \lambda_2 \approx 0.
Thus, Shi and Tomasi [24] propose the following reliability measure:

\min(\lambda_1, \lambda_2) > T,   (9)

i.e., a position (x, y) in the image contains a good feature to track if the lesser eigenvalue exceeds a predefined threshold T.
However, our purpose is slightly different because we want to calculate any kind of motion occurring in the video sequence. Therefore, we modify the reliability measure (Equation 9) to exclude only uniform regions from the motion calculation:

r = \begin{cases} 0, & \lambda_1 + \lambda_2 \approx 0 \\ \exp\!\big(-C / (\lambda_1 + \lambda_2)^2\big), & \text{else.} \end{cases}   (10)

A small sum of \lambda_1 and \lambda_2 results in values near zero, while in all other cases the reliability measure adopts values near one.
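A compact sketch of both reliability criteria follows; the thresholds and the exponential form mirror the description above and are assumptions rather than the published parameter values.

```python
import numpy as np

def good_feature_to_track(lam1, lam2, T=0.01):
    """Shi-Tomasi style criterion (Equation 9): lesser eigenvalue above a threshold."""
    return np.minimum(lam1, lam2) > T

def uniformity_reliability(lam1, lam2, C=5.0, eps=1e-12):
    """Modified measure (Equation 10): ~0 for uniform regions, ~1 where any
    spatial structure (edge or corner) is present."""
    s = lam1 + lam2
    return np.where(s > eps, np.exp(-C / (s ** 2 + eps)), 0.0)
```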
3.3 Motion-based Segmentation
As depicted in Figure 1(c), the motion estimation approach is able to reliably identify regions of interest, though some parts of the van are left out due to low contrast. However, tensor-based motion detection alone is not sufficient to provide an accurate segmentation of the objects in question.
We observe two shortcomings that are inherent to this approach. First, due to areas of constant gray values within the moving objects we do not receive dense motion vector fields. In these areas all three eigenvalues are close to zero and therefore motion cannot be calculated. However, it is likely that motion can be estimated at the spatial edges of the moving objects.
Second, the tensor fails to provide the true object boundaries accurately since the calculations within the neighborhood \Omega blur motion information across spatial edges.
Consequently, we need (1) a grouping step that will integrate neighboring regions into objects while closing gaps and holes and (2) contour refinement based on spatial edge information.

Figure 4: Tensor-driven geodesic active contour. From left to right: contour after 3000, 6000, 9000, and 12000 iterations. Constant force c.
Widely used within this context are active contour models. Basically, a planar parametric curve C(s) placed around image parts of interest evolves under smoothness control (internal energy) and the influence of an image force (external energy).
In the classical explicit snake model [11] the following functional is minimized:

E(C) = \oint \big( \alpha\,|C'(s)|^2 + \beta\,|C''(s)|^2 - \lambda\,|\nabla I(C(s))| \big) \, ds   (11)

where the first two terms control the smoothness of the planar curve, while the third attracts the contour to high gradients of the image.
To obtain a topological flexibility that will allow the simultaneous detection of multiple objects, we employ geodesic active contours [12, 7]. The basic idea is to embed the initial curve as a zero level set into a function u, i.e., C is represented by the set of points x_i with u(x_i) = 0, and to evolve this function under a partial differential equation. Using a modified energy term this results in the image evolution equation [12, 7]

\frac{\partial u}{\partial t} = g\,(c + \kappa)\,|\nabla u| + \nabla g \cdot \nabla u   (12)

where \kappa denotes the curvature of a level set, \nabla := (\partial_x, \partial_y) is the spatial gradient, c adds a constant force for faster convergence, and g represents the external image-dependent force or stopping function.
By defining an appropriate stopping function g, we can integrate the tensor-based motion detection into the model. Choosing g so that it is zero wherever the norm of a smoothed version of the motion field exceeds a predefined velocity threshold \tau, and one elsewhere, stops the curve evolution when positions are reached that coincide with "motion pixels". Note that v = (v_x, v_y) denotes the 2D velocity available from the motion estimation step, and \tau is the velocity threshold compared against the norm of the motion vector. Hence, our segmentation scheme assumes, in the current state, a static camera. In the event of a moving camera, a global camera motion estimation has to be performed first. It should then be possible to compare the motion vectors determined from the structure tensor to the vectors resulting from the global camera parameters [17].
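An explicit finite-difference step for the evolution equation (12) can be sketched as follows. This is a crude illustration under assumed step sizes and force values, not a numerically robust level-set solver.

```python
import numpy as np

def evolve_level_set(u, g, c=-0.3, dt=0.1, iters=1000):
    """Explicit update for du/dt = g*(c + kappa)*|grad u| + grad g . grad u.

    u: level-set function whose zero level set is the contour;
    g: pre-computed stopping function on the image grid; c: constant force.
    """
    gy, gx = np.gradient(g)
    for _ in range(iters):
        uy, ux = np.gradient(u)
        norm = np.sqrt(ux ** 2 + uy ** 2) + 1e-8
        # Curvature of the level sets: div(grad u / |grad u|).
        ky, _ = np.gradient(uy / norm)
        _, kx = np.gradient(ux / norm)
        kappa = kx + ky
        u = u + dt * (g * (c + kappa) * norm + gx * ux + gy * uy)
    return u
```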
Figure 4 depicts the evolution of the tensor-driven geodesic active contour. The contour succeeds in splitting up and detecting the four different moving objects.

Figure 5: Contour refinement. Left: motion-based segmentation; right: motion-based segmentation with contour refinement (445 iterations, contrast parameter \tilde{C}).

In order to improve the segmentation results, we employ a refinement procedure based not on motion information but on the gradient values within a single frame. As can be seen in Figure 5 (left), the motion-based segmentation detects regions that are slightly larger than the moving objects.
Thus, we restart the image evolution process using the result from the motion-based segmentation as the zero level set. However, this time a stopping function \tilde{g} based on the spatial gradient is used, approaching zero at strong spatial edges and one in homogeneous areas. Here, \tilde{C} is a contrast parameter that diminishes the influence of low gradient values. Figure 5 depicts the performance of the refinement procedure.
4. VIDEO OBJECT CLASSIFICATION
Our system for object classification consists of two major parts, a database containing contour-based representations of prototypical video objects, and an algorithm to match extracted objects with the database. In the following we summarize the classification approach; for details see [23].
4.1 Curvature Scale Space Representation
The curvature scale space (CSS) technique [1, 20, 23] is based on the idea of curve evolution, i.e., basically the deformation of a curve over time. A CSS image provides a multi-scale representation of the curvature zero crossings of a closed planar contour.
Consider a closed planar curve \Gamma(u) = (x(u), y(u)) with the normalized arc length parameter u. The curve is smoothed by a one-dimensional Gaussian kernel g(u, \sigma) of width \sigma. The deformation of the closed planar curve is represented by

\Gamma_\sigma(u) = \big( X(u, \sigma), Y(u, \sigma) \big),

where X(u, \sigma) and Y(u, \sigma) denote the components x(u) and y(u) after convolution with g(u, \sigma).
Figure 6: Construction of the CSS image. Left: (a)-(f) Object view and smoothed contour after 10, 30, 100, 200 and 300 iterations. The small dots on the contour mark the curvature zero crossings. Right: Resulting CSS image.
The curvature \kappa(u, \sigma) of an evolved curve can be computed using the derivatives X_u(u, \sigma), X_{uu}(u, \sigma), Y_u(u, \sigma) and Y_{uu}(u, \sigma):

\kappa(u, \sigma) = \frac{X_u(u, \sigma)\, Y_{uu}(u, \sigma) - X_{uu}(u, \sigma)\, Y_u(u, \sigma)}{\big( X_u(u, \sigma)^2 + Y_u(u, \sigma)^2 \big)^{3/2}}.

A CSS image I(u, \sigma) is defined by the locations of the curvature zero crossings, i.e., the points (u, \sigma) with \kappa(u, \sigma) = 0. It shows the zero crossings with respect to their position on the contour and the width of the Gaussian kernel (or the number of iterations, see Figure 6). During the deformation process, zero crossings merge as transitions between contour segments of different curvature are equalized. Consequently, after a certain number of iterations, inflection points cease to exist and the shape of the closed curve is convex. Note that due to the dependence on curvature zero crossings, convex object views cannot be distinguished by the CSS technique.
Significant contour properties that are visible for a large number of iterations result in high peaks in the CSS image. However, areas with rapidly changing curvatures caused by noise produce only small local maxima.
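A straightforward way to build the set of CSS points from a sampled contour is sketched below. It is a minimal sketch assuming an equally sampled closed contour and discrete sign changes as zero crossings; it is not the authors' implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def css_zero_crossings(x, y, sigmas):
    """Curvature zero crossings of a closed contour (x, y) over increasing sigma.

    Returns a list of (u, sigma) pairs, i.e., the points of the CSS image,
    with u the normalized arc-length position.
    """
    points = []
    for sigma in sigmas:
        # Smooth the closed contour; wrap mode keeps it periodic.
        X = gaussian_filter1d(x.astype(np.float64), sigma, mode='wrap')
        Y = gaussian_filter1d(y.astype(np.float64), sigma, mode='wrap')
        Xu, Yu = np.gradient(X), np.gradient(Y)
        Xuu, Yuu = np.gradient(Xu), np.gradient(Yu)
        kappa = (Xu * Yuu - Xuu * Yu) / (Xu ** 2 + Yu ** 2 + 1e-12) ** 1.5
        # Zero crossings: sign changes between neighboring arc-length samples.
        signs = np.sign(kappa)
        crossings = np.nonzero(signs[:-1] * signs[1:] < 0)[0]
        points.extend((u / len(x), sigma) for u in crossings)
    return points
```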
In many cases the peaks in the CSS image provide a robust and compact representation of an object view's contour [19, 20]. Note that a rotation of an object view on the image plane can be accomplished by shifting the CSS image left or right in a horizontal direction. Furthermore, a mirrored object view can be represented by mirroring the CSS image.
A main drawback to the basic CSS technique, where only the two values (position, height) represent a peak in a CSS image, is the occurrence of ambiguities. Certain contours differing significantly in their visual appearance nevertheless have similar CSS images. This is due to the fact that shallow and deep concavities on a contour may result in peaks of the same height in the CSS image.
Prior work presents several approaches to avoiding these ambiguities, raising the computational costs significantly. In our extension [23] we extract the width at the bottom of the arc-shaped contour corresponding to the peak. The width specifies the normalized arc length distance of the two curvature zero crossings enframing the contour segment represented by the peak in the CSS image. For each peak in the CSS image three values have to be stored: the position of the maximum, its value (iteration or width of the Gaussian kernel), and the width at the bottom of the arc-shaped contour. It is sufficient to extract the significant maxima (above a certain noise level) from the CSS image. For instance, in the example depicted in Figure 6, and assuming a suitable noise level, only four data triples have to be stored.
The matching algorithm described in the following section utilizes the information in the peaks to compare automatically segmented video objects with prototypical video objects in the database.
4.2 Object Matching
The objects are matched in two steps. In the first, each automatically segmented object in a sequence is compared to all objects in the database. A list of the best matches is built for further processing. In the second step, the results are accumulated and a confidence value is calculated. Based on it, the object class of the object in the sequence is determined.
In order to find the most similar object in the database compared to a query object from a sequence, a matching algorithm is needed. The general idea is to compare the peaks in the CSS images of the two objects, based on the characterization by the triples (height, position, width).
In a first step, the best position to compare the two CSS images has to be determined. It might be necessary to rotate or mirror one of the images so that the peaks are aligned best. As mentioned before, shifting the CSS image corresponds to rotation of the original object. One of the CSS images is shifted so that the highest peaks in both CSS images are aligned.
A matching peak is determined for each peak in the first object. Two peaks may match if their position and width are within a certain range. Only for the highest peaks does the height also need to be within a certain range.
If a matching peak is found, the Euclidean distance of the height and position of the peaks is calculated and added to the difference between the images. If no matching peak can be determined, the height of the peak in the first query object is multiplied by a penalty factor and added to the total difference.
The matching algorithm might return infinity, e.g. if no adequate rotation could be found or if the highest maxima in the CSS images do not match within a given tolerance range. If this is the case, the two objects are significantly different.
All top matches which were recognized are used for accumulation. The object class with a percentage above 75% is considered to be the class of the sequence.
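The matching procedure can be sketched as follows. The tolerances, penalty factor, and the simple circular alignment on the highest peaks are illustrative assumptions; wraparound of positions and mirroring are omitted for brevity.

```python
import numpy as np

def match_css_peaks(query, proto, pos_tol=0.1, width_tol=0.1, penalty=1.5):
    """Compare two CSS peak sets given as lists of (position, height, width) triples.

    Returns a dissimilarity score (lower is more similar), or inf if either
    peak set is empty and no alignment is possible.
    """
    if not query or not proto:
        return float('inf')
    # Align the highest peaks by circularly shifting positions (contour rotation).
    shift = max(proto, key=lambda p: p[1])[0] - max(query, key=lambda p: p[1])[0]
    shifted = [((pos + shift) % 1.0, h, w) for pos, h, w in query]

    score = 0.0
    for pos, h, w in shifted:
        candidates = [q for q in proto
                      if abs(q[0] - pos) < pos_tol and abs(q[2] - w) < width_tol]
        if candidates:
            best = min(candidates, key=lambda q: abs(q[1] - h))
            score += np.hypot(best[0] - pos, best[1] - h)   # distance over (position, height)
        else:
            score += penalty * h                            # unmatched peak penalty
    return score
```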
5. EXPERIMENTAL RESULTS
We subdivide the experimental results achieved with our algorithms into three sections. First, we demonstrate the performance of the multiscale structure tensor approach on a synthetic sequence containing large displacements. Second, we provide segmentation results obtained by the tensor-driven geodesic active contour with respect to two real-world sequences. Finally, results calculated by the object classification algorithm are presented.

Figure 7: Left: frame from the synthetic sequence; right: motion field calculated by the hierarchical structure tensor approach (for better visibility the flow image is subsampled by a factor of 4).
5.1 Multiscale Motion Estimation Results
To measure the performance of the hierarchical approach described in Section 3.2, we created a simple synthetic video sequence for which the displacements from one frame to the next are known. Figure 7 (left) shows a frame in this sequence that contains two moving squares. While the upper square (square 1) moves at a constant velocity of 10 pixels per frame from the right to the left, the other square (square 2) moves diagonally upwards at a constant velocity.
Figure 7 (right) depicts the result obtained by our multiresolution algorithm using 4 pyramid levels. A closer look at the motion vectors calculated by the algorithm reveals the following observations:
1. At the corners of the squares the velocity could be estimated exactly for both squares. Remember that at a corner moving with constant speed enough texture is available to allow the calculation of the full image motion.
2. On account of the pyramidal structure the velocities at horizontal and vertical structures approximate the real image motion. In general, full motion calculation is possible for points near the corners. At the coarsest pyramid level each point on a horizontal or vertical structure is near a corner (in our specific example). Therefore, an initial full motion guess for these points can be calculated. However, for consecutive pyramid levels full motion calculation is no longer possible since the distance to the corners increases. Consequently, the displacements added to the initial guess refine the motion estimation only in the normal direction.
3. For pixels in the interior of the squares it is not possible to calculate image motion. The reliability measure described in Equation 10 eliminates these points from the calculations.
The results for the synthetic sequence indicate that, under specific circumstances, the proposed approach is able to estimate motion exactly even given the existence of large displacements.
Object  Frame    FP    FN    FP [%]  FN [%]
car       7     156    61     9.12    3.73
car       8
car       9     208
car      11
van       7     135   448    10.77   28.61
van       8     176   663    15.04   40.01
van       9     288   522    22.15   34.03
van      11     246   800    19.54   44.13

Table 1: Region-based distance for the taxi sequence. Columns 3, 4: false positives, false negatives. Columns 5, 6: percentage of mismatched pixels in comparison to the entire number of pixels of the manual segmentation.

Object  Frame  avg. dist.   0 [%]   1 [%]   2 [%]   >=3 [%]
car       7      1.27       38.12   35.15   14.36   12.38
car       8      1.41       27.27   35.23   18.18   19.32
car       9      1.54       33.82   29.90    8.82   27.45
car      11      1.39       34.98   29.06   13.30   22.66
taxi      9      1.44       44.21   28.95    6.84   20.00
van       7      4.21       20.59   18.24   10.00   51.18
van       8      6.20       11.36   17.05   11.36   60.23
van       9      6.05       11.18   11.18    7.65   70.00
van      11      7.43        9.95    9.42    7.85   72.77

Table 2: Edge-based distance for the taxi sequence. Column 3: average edge pixel distance. Columns 4-7: percentages of the distances 0, 1, 2, and 3 or more.
5.2 Segmentation Results
We applied the segmentation algorithm described above to two real-world sequences. The first one is the Hamburg taxi sequence widely used within the computer vision community.
Figure 8 illustrates the performance of our segmentation approach on this sequence. We employed the standard structure tensor described in Section 3.1. The parameters were set as follows: (1) The size of the local neighborhood \Omega was set to 7 x 7 x 7. (2) The contrast parameter for the coherence measures was set to 5. For all positions with a coherence c_t > 0.75 we performed motion estimation; positions with c_t below this value were rejected. Full motion vectors were calculated for positions with c_s > 0.9.
The motion estimates were integrated as an external force into the geodesic active contour model (see Section 3.3). For faster convergence we set the constant force c accordingly; note that a value of c greater than zero forces the curve to shrink, while a value smaller than zero causes an expansion. Finally, we employed the contour refinement step described in Section 3.3.
In addition to the visual results we provide quantitative measures in Tables 1 and 2. First, we used the region-based distance measures FP and FN to compare the automatic segmentation results to those of a manual segmentation. While FP contains the number of pixels incorrectly marked as object pixels by the automatic segmentation (false positives), FN sums up object pixels missed by the process (false negatives). Second, we employed an edge-based distance measure. For each contour pixel in the manual segmentation the distance to the closest contour pixel in the automatic segmentation was determined.
The following conclusions can be drawn from the measures: The segmentations of the car and the taxi are acceptable. While the number of pixels detected by the automatic segmentation is rather high, the miss rate is fairly low. Furthermore, the edge-based measure indicates that the edges of the automatic and the manual segmentation coincide. However, the van could not be segmented accurately. Both region-based and edge-based distance measures return high error rates.
The second video sequence is a typical "head and shoulder" sequence. However, due to the low sampling rate the displacements of the moving person are large. Hence, we employed the multiresolution motion estimation with four pyramid levels and a local neighborhood of size 3 x 3 x 3. To speed up the motion detection we employed the reliability measure provided in Section 3.2, i.e., positions with a reliability below 0.9 were rejected. The final segmentation was performed by the geodesic active contour model.
Figure 9 depicts the results of the motion estimation and segmentation for the second sequence. Our segmentation approach identifies the region of interest correctly. However, the accuracy is less than that for the taxi sequence. Especially in areas containing strong but static edges, results from the hierarchical motion estimation blur across the moving edges, thus enlarging the segmented region. Tables 3 and 4 underline these observations. Especially the percentages of exactly matching edges are rather small.
5.3 Classification Results
Our test database [23] consists of five object classes containing animals, birds, cars, people, and miscellaneous objects. For each object class we collected 25 to 102 images from a clip art library. The clip arts are typical representatives of their object class with easily recognizable perspectives. The object class people contains the most objects (102 images). The contours of humans differ greatly in image sequences, e.g. the position of the arms and legs has a great impact on the contour.
We applied the extended object matching algorithm to the automatically segmented cars in the Hamburg taxi sequence (see Figure 8) and in the person sequence (see Figure 9). The CSS matching was performed with the triples (position, height, width) for each peak in the CSS image.
Frame    FP    FN    FP [%]  FN [%]
34      336    77     6.55    1.58

Table 3: Region-based distance for the person sequence. Columns 2, 3: false positives, false negatives. Columns 4, 5: percentage of mismatched pixels in comparison to the entire number of pixels of the manual segmentation.

Frame  avg. dist.   0 [%]   1 [%]   2 [%]   >=3 [%]
34       1.20       25.40   41.98   23.53    9.10
38       2.04       12.61   38.71   24.93   23.75

Table 4: Edge-based distance for the person sequence. Column 2: average edge pixel distance. Columns 3-6: percentages of the distances 0, 1, 2, and 3 or more.
Sequence object   Good matches   Bad matches               Rejected frames
car (left)        Cars 92%       0%                        8%
taxi (center)     Cars 68%       Misc 8%                   24%
van (right)       Cars 29%       Animals 39%, People 32%   0%

Table 5: Results of the automatically segmented objects in the Taxi sequence matched to the objects in the database.
Table 5 shows the result for the Hamburg taxi sequence. The perspective and segmentation of the car (left object) is best suited for recognition. At only 68%, the taxi (center object) cannot be recognized reliably. The van cannot be recognized by the application.
The last row in Figure 8 shows the four best matches of the car (left object) in frame 12. The perspective of the car does not change, so the other frames show similar results. Figure 9 depicts classification results for the person sequence. The best matches for the frames 25, 33, 34, and 38 are displayed in the last row.
6. CONCLUSIONS
We presented an approach to the segmentation and classification of video objects. In the motion segmentation step we integrated the 3D structure tensor into a geodesic active contour model. While the structure tensor is able to estimate motion reliably in the presence of background noise, the active contour groups neighboring regions and closes holes and gaps. The level set-based implementation allows the simultaneous detection of multiple objects. To account for large displacements that cannot be handled by the standard structure tensor, we developed a new multiresolution tensor-based algorithm.
A contour-based video object classification system was presented as an application. The robustness of the curvature scale space method allows correct classification even in the presence of segmentation errors. We provided various experimental results. While the results for the segmentation algorithm driven by the standard tensor are very encouraging, the segmentations obtained in conjunction with the multiresolution algorithm are less accurate. Nevertheless, the classification algorithm was able to calculate reasonable categorizations.
There are, however, several areas that require further development. First, the segmentation performance of the multiresolution approach has to be improved. Second, to provide a complete segmentation module, it is necessary to integrate a tracking component.
7. ACKNOWLEDGMENTS
The authors would like to thank Changick Kim, University of Washington, for the provision of the "person sequence" on his website.
8. REFERENCES
Shape similarity retrieval under a
Enhancing CSS-based shape retrieval for objects with shallow concavities.
Performance of optical flow techniques.
The computation of optical flow.
A computational approach to edge detection.
Geodesic active contours.
ISO/IEC 14496-2.
Active contour models.
Conformal curvature flows: from phase transitions to active vision.
A fast and robust moving object segmentation in video sequences.
An integrated scheme for object-based video abstraction.
A survey of shape analysis techniques.
A noise robust method for segmentation of moving objects in video sequences.
Extraction of moving objects for content-based video coding.
Computation and analysis of image motion: A synopsis of current problems and methods.
Robust and e
Geodesic active contours and level sets for the detection and tracking of moving objects.
Review of algorithms for shape analysis.
Good features to track.
New trends in image and video compression.
| space;structure tensor;curvature scale;object classification;motion segmentation
500486 | A Survey of Energy Efficient Network Protocols for Wireless Networks. | Wireless networking has witnessed an explosion of interest from consumers in recent years for its applications in mobile and personal communications. As wireless networks become an integral component of the modern communication infrastructure, energy efficiency will be an important design consideration due to the limited battery life of mobile terminals. Power conservation techniques are commonly used in the hardware design of such systems. Since the network interface is a significant consumer of power, considerable research has been devoted to low-power design of the entire network protocol stack of wireless networks in an effort to enhance energy efficiency. This paper presents a comprehensive summary of recent work addressing energy efficient and low-power design within all layers of the wireless network protocol stack. | Introduction
The rapid expansion of wireless services such as cellular voice, PCS (Personal Communications Services), mobile data and wireless LANs in recent years is an indication that significant value is placed on accessibility and portability as key features of telecommunication (Salkintzis and Mathiopoulos (Guest Ed.), 2000). Wireless devices have maximum utility when they can be used "anywhere at anytime". One of the greatest limitations to that goal, however, is finite power supplies. Since batteries provide limited power, a general constraint of wireless communication is the short continuous operation time of mobile terminals. Therefore, power management is
† Corresponding Author: Dr. Krishna Sivalingam. Part of the research was supported by Air Force Office of Scientific Research grants F-49620-97-1-0471 and F-49620-99-1-0125; by Telcordia Technologies and by Intel. Part of the work was done while the first author was at Washington State University. The authors can be reached at cej@bbn.com, krishna@eecs.wsu.edu, pagrawal@research.telcordia.com, jcchen@research.telcordia.com
one of the most challenging problems in wireless communication, and recent research has addressed this topic (Bambos, 1998). Examples include a collection of papers available in (Zorzi (Guest Ed.), 1998) and a recent conference tutorial (Srivastava, 2000), both devoted to energy efficient design of wireless networks.
Studies show that the significant consumers of power in a typical laptop are the microprocessor (CPU), liquid crystal display (LCD), hard disk, system memory (DRAM), keyboard/mouse, CDROM drive, floppy drive, I/O subsystem, and the wireless network interface card (Udani and Smith, 1996, Stemm and Katz, 1997). A typical example from a Toshiba 410 CDT mobile computer demonstrates that nearly 36% of power consumed is by the display, 21% by the CPU/memory, 18% by the wireless interface, and 18% by the hard drive. Consequently, energy conservation has been largely considered in the hardware design of the mobile terminal (Chandrakasan and Brodersen, 1995) and in components such as CPU, disks, displays, etc. Significant additional power savings may result by incorporating low-power strategies into the design of network protocols used for data communication. This paper addresses the incorporation of energy conservation at all layers of the protocol stack for wireless networks.
The remainder of this paper is organized as follows. Section 2 introduces the network architectures and wireless protocol stack considered in this paper. Low-power design within the physical layer is briefly discussed in Section 2.3. Sources of power consumption within mobile terminals and general guidelines for reducing the power consumed are presented in Section 3. Section 4 describes work dealing with energy efficient protocols within the MAC layer of wireless networks, and power conserving protocols within the LLC layer are addressed in Section 5. Section 6 discusses power aware protocols within the network layer. Opportunities for saving battery power within the transport layer are discussed in Section 7. Section 8 presents techniques at the OS/middleware and application layers for energy efficient operation. Finally, Section 9 summarizes and concludes the paper.
2. Background
This section describes the wireless network architectures considered in this paper. Also, a discussion of the wireless protocol stack is included along with a brief description of each individual protocol layer. The physical layer is further discussed.
2.1. Wireless Network Architectures
Two different wireless network architectures are considered in this paper: infrastructure and ad hoc networks. Below, a description of each system architecture is presented.
2.1.0.1. Infrastructure: Wireless networks often extend, rather than replace, wired networks, and are referred to as infrastructure networks. The infrastructure network architecture is depicted in Figure 1. A hierarchy of wide area and local area wired networks is used as the backbone network. The wired backbone connects to special switching nodes called base stations. Base stations are often conventional PCs and workstations equipped with custom wireless adapter cards. They are responsible for coordinating access to one or more transmission channels for mobiles located within the coverage cell. Transmission channels may be individual frequencies in FDMA (Frequency Division Multiple Access), time slots in TDMA (Time Division Multiple Access), or orthogonal codes or hopping patterns in the case of CDMA (Code Division Multiple Access). Therefore, within infrastructure networks, wireless access to and from the wired host occurs in the last hop between base stations and mobile hosts that share the bandwidth of the wireless channel.
2.1.0.2. Ad hoc: Ad hoc networks, on the other hand, are multi-hop wireless networks in which a set of mobiles cooperatively maintain network connectivity (Macker and Corson, 1998). This on-demand network architecture is completely un-tethered from physical wires. An example of an ad hoc topology is pictured in Figure 2. Ad hoc networks are characterized by dynamic, unpredictable, random, multi-hop topologies with typically no infrastructure support. The mobiles must periodically exchange topology information which is used for routing updates. Ad hoc networks are helpful in situations in which temporary network connectivity is needed, and are often used for military environments, disaster relief, and so on. Mobile ad hoc networks have attracted considerable attention as evidenced by the IETF working group MANET (Mobile Ad hoc Networks). This has produced various Internet drafts, RFCs, and other publications (Macker and Corson, 1998, Macker and Corson, ). Also, a recent conference tutorial presents a good introduction to ad hoc networks (Vaidya, 2000).
2.2. Protocol Layers
This section provides an introduction to the software used in wireless data network systems. Application programs using the network do not interact directly with the network hardware. Instead, an application interacts with the protocol software. The notion of protocol layering provides a conceptual basis for understanding how a complex set of protocols work together with the hardware to provide a powerful communication system. Recently, communication protocol stacks such as the Infrared Data Association (IrDA) protocol stack for point-to-point wireless infrared communication and the Wireless Application Protocol (WAP) Forum protocol stack for enabling developers to build advanced services across differing wireless network technologies (Infrared Data Association, 2000, WAP Forum, 2000) have been developed specifically for wireless networks. This paper focuses on the traditional OSI protocol stack, depicted in Figure 3, for a generic wireless communication system. The application and services layer occupies the top of the stack followed by the operating system/middleware, transport, network, data link, and physical layers. The problems inherent to the wireless channel and issues related to mobility challenge the design of the protocol stack adopted for wireless networks. In addition, networking protocols need to be designed with energy efficiency in mind.
Physical: The physical layer consists of Radio Frequency (RF) circuits, modulation, and channel coding systems. From an energy efficiency perspective, considerable attention has already been given to the design of this layer (Chandrakasan and Brodersen, 1995).
Data Link: The data link layer is responsible for establishing a reliable and secure logical link over the unreliable wireless link. The data link layer is thus responsible for wireless link error control, security (encryption/decryption), mapping network layer packets into frames, and packet retransmission.
A sublayer of the data link layer, the media access control (MAC) protocol layer is responsible for allocating the time-frequency or code space among mobiles sharing wireless channels in a region.
Network: The network layer is responsible for routing packets, establishing the network service type (connection-less versus connection-oriented), and transferring packets between the transport and link layers. In a mobile environment this layer has the added responsibility of rerouting packets and mobility management.
Transport: The transport layer is responsible for providing efficient and reliable data transport between network endpoints independent of the physical network(s) in use.
OS/Middleware: The operating system and middleware layer handles disconnection, adaptivity support, and power and quality of service (QoS) management within wireless devices. This is in addition to conventional tasks such as process scheduling and file system management.
Application: The application and services layer deals with partitioning of tasks between fixed and mobile hosts, source coding, digital signal processing, and context adaptation in a mobile environment. Services provided at this layer are varied and application specific.
The next section further examines the low-power research completed at the physical layer.
2.3. Physical Layer
In the past, energy efficient and low-power design research has centered around the physical layer due to the fact that the consumption of power in a mobile computer is a direct result of the system hardware. Research addresses two different perspectives of the energy problem: (i) an increase in battery capacity and (ii) a decrease in the amount of energy consumed at the wireless terminal.
The primary problem concerning energy in wireless computing is that battery capacity is extremely limited. The focus of battery technology research has been to increase battery power capacity while restricting the weight of the battery. However, unlike other areas of computer technology such as micro-chip design, battery technology has not experienced significant advancement in recent years. Therefore, unless a breakthrough occurs in battery technology, an attainable goal of research would be a decrease in the energy consumed in the wireless terminal (Lettieri and Srivastava, 1999).
Low-power design at the hardware layer uses different techniques including variable clock speed CPUs (Govil et al., 1995), flash memory (Marsh et al., 1994), and disk spindown (Douglis et al., 1994). Numerous energy efficient techniques for the physical layer are discussed in (Chandrakasan and Brodersen, 1995). Although the above techniques have resulted in considerable energy savings, other venues should also be explored to improve energy efficiency. One way to achieve this for future wireless networks is to design the higher layers of the protocol stack with energy efficiency as an important goal.
3. Power Consumption Sources and Conservation Mechanisms
This section first presents the chief sources of power consumption with respect to the protocol stack. Then, it presents an overview of the main mechanisms and principles that may be used to develop energy efficient network protocols.
3.1. Sources of Power Consumption
The sources of power consumption, with regard to network operations, can be classified into two types: communication related and computation related.
Communication involves usage of the transceiver at the source, intermediate (in the case of ad hoc networks), and destination nodes. The transmitter is used for sending control, route request and response, as well as data packets originating at or routed through the transmitting node. The receiver is used to receive data and control packets, some of which are destined for the receiving node and some of which are forwarded. Understanding the power characteristics of the mobile radio used in wireless devices is important for the efficient design of communication protocols. A typical mobile radio may exist in three modes: transmit, receive, and standby. Maximum power is consumed in the transmit mode, and the least in the standby mode. For example, the Proxim RangeLAN2 2.4 GHz 1.6 Mbps PCMCIA card requires 1.5 W in transmit, 0.75 W in receive, and 0.01 W in standby mode. In addition, turnaround between transmit and receive modes (and vice-versa) typically takes at least 6 microseconds. Power consumption for Lucent's 15 dBm 2.4 GHz 2 Mbps WaveLAN PCMCIA card is 1.82 W in transmit mode, 1.80 W in receive mode, and 0.18 W in standby mode. Thus, the goal of protocol development for environments with limited power resources is to optimize the transceiver usage for a given communication task.
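A simple mode-based energy budget makes the tradeoff concrete; the sketch below uses the Proxim figures quoted above as defaults, while the assumption that switching draws roughly transmit-level power is an illustrative simplification, not a measured value.

```python
def radio_energy_joules(tx_time_s, rx_time_s, standby_time_s, turnarounds,
                        p_tx=1.5, p_rx=0.75, p_standby=0.01,
                        turnaround_time_s=6e-6):
    """Energy consumed by a mobile radio over an interval, split by operating mode.

    Default power levels correspond to the Proxim RangeLAN2 figures quoted in
    the text; the turnaround term is a rough approximation.
    """
    switching = turnarounds * turnaround_time_s * p_tx
    return p_tx * tx_time_s + p_rx * rx_time_s + p_standby * standby_time_s + switching

# Example: 2 ms of transmission, 20 ms of reception, ~1 s mostly idle, 4 turnarounds.
energy = radio_energy_joules(0.002, 0.020, 0.978, 4)
```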
The computation considered in this paper is chiefly concerned with protocol processing aspects. It mainly involves usage of the CPU and main memory and, to a very small extent, the disk or other components. Also, data compression techniques, which reduce packet length (and hence energy usage), may result in increased power consumption due to increased computation.
There exists a potential tradeoff between computation and communication costs. Techniques that strive to achieve lower communication costs may result in higher computation needs, and vice-versa.
Hence, protocols that are developed with energy efficiency goals should attempt to strike a balance between the two costs.
3.2. General Conservation Guidelines and Mechanisms
The following discussion presents some general guidelines that may be adopted for an energy efficient protocol design, and Figure 3 lists areas in which conservation mechanisms are beneficial. Examples are provided in which these guidelines have been adopted. Some mechanisms are better suited for infrastructure networks and others for ad hoc networks.
Collisions should be eliminated as much as possible within the MAC layer since they result in retransmissions. Retransmissions lead to unnecessary power consumption and to possibly unbounded delays. Retransmissions cannot be completely avoided in a wireless network due to the high error rates. Similarly, it may not be possible to fully eliminate collisions in a wireless mobile network. This is partly due to user mobility and a constantly varying set of mobiles in a cell. For example, new users registering with the base station may have to use some form of random access protocol. In this case, using a small packet size for registration and bandwidth request may reduce energy consumption. The EC-MAC protocol (Sivalingam et al., 2000) is one example that avoids collisions during reservation and data packet transmission.
In a typical broadcast environment, the receiver remains on at all times which results in significant power consumption. The mobile radio receives all packets, and forwards only the packets destined for the receiving mobile. This is the default mechanism used in the IEEE 802.11 wireless protocol in which the receiver is expected to keep track of channel status through constant monitoring. One solution is to broadcast a schedule that contains data transmission starting times for each mobile as in (Sivalingam et al., 2000). This enables the mobiles to switch to standby mode until the receive start time. Another solution is to turn off the transceiver whenever the node determines that it will not be receiving data for a period of time. The PAMAS protocol (Singh and Raghavendra, 1998) uses such a method.
Furthermore, significant time and power is spent by the mobile radio in switching from transmit to receive modes, and vice versa. A protocol that allocates permission on a slot-by-slot basis suffers substantial overhead. Therefore, this turnaround is a crucial factor in the performance of a protocol. If possible, the mobile should be allocated contiguous slots for transmission or reception to reduce turnaround, resulting in lower power consumption. This is similar to buffering writes to the hard disk in order to minimize seek latency and head movement. Also, it is beneficial for mobiles to request multiple transmission
slots with a single reservation packet when requesting bandwidth in order to reduce the reservation overhead. This leads to improved bandwidth usage and energy efficiency. The scheduling algorithms studied in (Chen et al., 1999b) consider contiguous allocation and aggregate packet requests.
Assuming that mobiles transmit data transmission requests to the base station, a centralized scheduling mechanism that computes the system transmission schedule at the base station is more energy efficient. A distributed algorithm in which each mobile computes the schedule independently may not be desirable because mobiles may not receive all reservation requests due to radio and error constraints, and schedule computation consumes energy. Thus, computation of the transmission schedule ought to be relegated to the base station, which in turn broadcasts the schedule to each mobile. Most reservation and scheduling based protocols require the base station to compute the schedule.
The scheduling algorithm at the base station may consider the node's battery power level in addition to the connection priority. This allows traffic from low-power mobiles that may be dropped due to depletion of power reserves to be transmitted sooner. Such a mechanism has been studied in (Price, 2000, Price et al., 2001). Also, under low-power conditions, it may be useful to allow a mobile to re-arrange allocated slots among its own flows. This may allow certain high-priority traffic to be transmitted sooner rather than waiting for the originally scheduled time. Such mobile-based adaptive algorithms have been considered in (Damodaran and Sivalingam, 1999) and (Chen, 2000) in the context of energy efficiency and channel error compensation.
At the link layer, transmissions may be avoided when channel conditions are poor, as studied in (Zorzi and Rao, 1997a). Also, error control schemes that combine Automatic Repeat Request (ARQ) and Forward Error Correction (FEC) mechanisms may be used to conserve power (i.e. tradeoff retransmissions with ARQ versus longer packets with FEC) as in (Lettieri et al., 1997).
Energy efficient routing protocols may be achieved by establishing routes that ensure that all nodes equally deplete their battery power, as studied in (Woo et al., 1998, Chang and Tassiulas, 2000). This helps balance the amount of traffic carried by each node. A related mechanism is to avoid routing through nodes with lower battery power, but this requires a mechanism for dissemination of node battery power. Also, the periodicity of routing updates can be reduced to conserve energy, but may result in inefficient routes when user mobility is high. Another method to improve energy performance is to take advantage of the broadcast nature of the network for broadcast and multicast traffic as in (Singh et al., 1999, Wieselthier et al., 2000). In (Ramanathan and Rosales-Hain, 2000), the topology of the network is controlled by varying the transmit power of the nodes, and the topology is generated to satisfy certain network properties.
At the OS level, the common factor to all the different techniques proposed is suspension of a specific sub-unit such as disk, memory, display, etc. based upon detection of prolonged inactivity. Several methods of extending battery lifetime within the operating system and middleware layer are discussed in (Tiwari et al., 1994, Chandrakasan and Brodersen, 1995, Mehta et al., 1997). Other techniques studied include power-aware CPU scheduling (Weiser et al., 1994, Lorch and Smith, 1997) and page allocation (Lebeck et al., 2000). Within the application layer, the power conserving mechanisms tend to be application specific, such as database access (Imielinski et al., 1994, Alonso and Ganguly, 1993) and video processing (Chandrakasan and Brodersen, 1995, Gordon et al., 1996, Agrawal et al., 1998). A summary of software strategies for energy efficiency is presented in (Lorch and Smith, 1998).
4. MAC Sublayer
The MAC (Media Access Control) layer is a sublayer of the data link layer which is responsible for providing reliability to upper layers for the point-to-point connections established by the physical layer. The MAC sublayer interfaces with the physical layer and is represented by protocols that define how the shared wireless channels are to be allocated among a number of mobiles. This section presents the details of three specific MAC protocols: IEEE 802.11 (IEEE, 1998), EC-MAC (Sivalingam et al., 2000), and PAMAS (Singh and Raghavendra, 1998).
4.1. IEEE 802.11 Standard
The IEEE 802.11 (IEEE, 1998) protocol for wireless LANs is a multiple access technique based on CSMA/CA (Carrier Sense Multiple Access with Collision Avoidance), and is derived from the MACA protocol described in (Karn, 1990). The basic protocol is defined as follows. A mobile with a packet to transmit senses the transmission channel for activity. The mobile captures the channel and transmits all pending data packets if the channel is not busy. Otherwise, the mobile defers transmission and enters the backoff state. The time period that follows is called the contention window and consists of a pre-determined number of transmission slots. The mobile randomly selects a slot in
the contention window, and continuously senses the medium until its selected contention slot. The mobile enters the backoff state again if it detects transmission from some other mobile during that period. However, if no transmission is detected, the mobile transmits the access packet and captures the channel. Extensions to the basic protocol include provisions for MAC-level acknowledgements and request-to-send (RTS)/clear-to-send (CTS) mechanisms.
The IEEE 802.11 (IEEE, 1998) standard recommends the following technique for power conservation. A mobile that wishes to conserve power may switch to sleep mode and inform the base station of this decision. The base station buffers packets received from the network that are destined for the sleeping mobile. The base station periodically transmits a beacon that contains information about such buffered packets. When the mobile wakes up, it listens for this beacon, and responds to the base station which then forwards the packets. This approach conserves power but results in additional delays at the mobile that may affect the quality of service (QoS). A comparison of power-saving mechanisms in the IEEE 802.11 and HIPERLAN standards is presented in (Woesner et al., 1998). Presented in (Dhaou, 1999) is a load-sharing method for saving energy in an IEEE 802.11 network. Simulation results indicate total power savings of 5 to 15%.
Experimental measurements of per-packet energy consumption for an IEEE 802.11 wireless network interface are reported in (Feeney, 1999b). This work uses the Lucent WaveLAN card for its experiments. The cost of sending and receiving a packet is measured for a network using UDP point-to-point (or unicast) and broadcast traffic with varying packet sizes. The energy cost is studied in terms of a fixed cost per packet which reflects MAC operation and an incremental cost that depends on packet size. The results show that both point-to-point and broadcast traffic transmission incur the same incremental costs, but point-to-point transmission incurs higher fixed costs because of the MAC coordination. The reception of point-to-point traffic maintains higher fixed costs since the receiver must respond with CTS and ACK messages. However, incremental costs of packet reception were identical for both traffic types. The study also measures power consumption for non-destination mobiles that are in range of the sender and receiver. These experiments are a valuable source of information and represent an important step in expanding the knowledge of energy efficient protocol development.
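The fixed-plus-incremental characterization above amounts to a linear per-packet cost model, sketched below; the coefficient values are placeholders chosen for illustration, not the measured figures from (Feeney, 1999b).

```python
def per_packet_energy(size_bytes, fixed_cost_uj, incremental_uj_per_byte):
    """Linear per-packet energy model: a fixed component (MAC coordination,
    RTS/CTS/ACK exchange) plus an incremental component proportional to size.
    Returns energy in microjoules."""
    return fixed_cost_uj + incremental_uj_per_byte * size_bytes

# Example: unicast carries a higher fixed cost than broadcast for the same payload.
unicast_energy = per_packet_energy(1500, fixed_cost_uj=450.0, incremental_uj_per_byte=2.0)
broadcast_energy = per_packet_energy(1500, fixed_cost_uj=250.0, incremental_uj_per_byte=2.0)
```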
4.2. EC-MAC Protocol
Although the IEEE 802.11 standard addresses energy efficiency, it was not one of the central design issues in developing the protocol. The EC-MAC (Energy Conserving Medium Access Control) protocol (Sivalingam et al., 2000, Chen et al., 1999a), on the other hand, was developed with the issue of energy efficiency as a primary design goal. The EC-MAC protocol is defined for an infrastructure network with a single base station serving mobiles in its coverage area. This definition can be extended to an ad hoc network by allowing the mobiles to elect a coordinator to perform the functions of the base station. The general guidelines outlined in the previous section and the need to support QoS led to a protocol that is based on reservation and scheduling strategies. Transmission in EC-MAC is organized by the base station into frames as shown in Figure 4, and each slot equals the basic unit of wireless data transmission.
At the start of each frame, the base station transmits the frame synchronization message (FSM) which contains synchronization information and the uplink transmission order for the subsequent reservation phase. During the request/update phase, each registered mobile transmits new connection requests and the status of established queues according to the transmission order received in the FSM. In this phase, collisions are avoided by having the BS send the explicit order of reservation transmission. New mobiles that have entered the cell coverage area register with the base station during the new-user phase. Here, collisions are not easily avoided and hence this phase may be operated using a variant of Aloha. This phase also provides time for the BS to compute the data phase transmission schedule. The base station broadcasts a schedule message that contains the slot permissions for the subsequent data phase. Downlink transmission from the base station to the mobile is scheduled considering the QoS requirements. Likewise, the uplink slots are allocated using a suitable scheduling algorithm.
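The schedule-driven operation is what lets a mobile sleep between its own grants. The sketch below shows this idea only; the Grant structure and its field names are invented for illustration and do not correspond to the actual EC-MAC message formats.

```python
from dataclasses import dataclass

@dataclass
class Grant:
    mobile_id: int
    start_slot: int
    num_slots: int      # contiguous slots reduce transmit/receive turnaround
    direction: str      # "uplink" or "downlink"

def sleep_intervals(schedule, my_id, frame_slots):
    """Given the schedule broadcast by the base station, return the slot ranges
    during which this mobile can keep its radio in standby."""
    awake = sorted((g.start_slot, g.start_slot + g.num_slots)
                   for g in schedule if g.mobile_id == my_id)
    intervals, cursor = [], 0
    for start, end in awake:
        if start > cursor:
            intervals.append((cursor, start))   # sleep until the next grant
        cursor = max(cursor, end)
    if cursor < frame_slots:
        intervals.append((cursor, frame_slots))
    return intervals
```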
Energy consumption is reduced in EC-MAC because of the use of a
centralized scheduler. Therefore, collisions over the wireless channel are
avoided and this reduces the number of retransmissions. Additionally,
mobile receivers are not required to monitor the transmission channel
as a result of communication schedules. The centralized scheduler
may also optimize the transmission schedule so that individual mobiles
transmit and receive within contiguous transmission slots. The priority
round robin with dynamic reservation update and error compensation
scheduling algorithm described in (Chen et al., 1999b) provides
for contiguous slot allocation in order to reduce transceiver turnaround.
Also, scheduling algorithms that consider mobile battery power level
in addition to packet priority may improve performance for low-power
mobiles. A family of algorithms based on this idea is presented in
(Kishore et al., 1998, Price et al., 2001).
The frames may be designed to be xed or variable length. Fixed
length frames are desirable from the energy eciency perspective, since
Jones, Sivalingam, Agrawal and Chen
a mobile that goes to sleep mode will know when to wake up to receive
the FSM. However, variable length frames are better for meeting
the demands of bursty traffic. The EC-MAC studies used fixed length
frames.
The energy efficiency of EC-MAC is compared with that of IEEE
802.11 and other MAC protocols in (Chen et al., 1999a). This comparative
study demonstrates how careful reservation and scheduling of
transmissions avoid collisions that are expensive in energy consumption.
4.3. PAMAS Protocol
While the EC-MAC protocol described above was designed primarily
for infrastructure networks, the PAMAS (Power Aware Multi-Access)
protocol (Singh and Raghavendra, 1998) was designed for the ad hoc
network, with energy efficiency as the primary design goal.
The PAMAS protocol modifies the MACA protocol described in
(Karn, 1990) by providing separate channels for RTS/CTS control packets
and data packets. In PAMAS, a mobile with a packet to transmit
sends a RTS (request-to-send) message over the control channel, and
awaits the CTS (clear-to-send) reply message from the receiving mobile.
The mobile enters a backo state if no CTS arrives. However, if a CTS
is received, then the mobile transmits the packet over the data channel.
The receiving mobile transmits a "busy tone" over the control channel
enabling users tuned to the control channel to determine that the data
channel is busy.
Power conservation is achieved by requiring mobiles that are not
able to receive and send packets to turn off the wireless interface.
The idea is that a data transmission between two mobiles need not
be overheard by all the neighbors of the transmitter. The use of a
separate control channel allows for mobiles to determine when and for
how long to power off. A mobile should power itself off when (i) it has
no packets to transmit and a neighbor begins transmitting a packet not
destined for it, and (ii) it does have packets to transmit but at least one
neighbor-pair is communicating. Each mobile determines the length of
time that it should be powered through the use of a probe protocol,
the details of which are available in (Singh and Raghavendra, 1998).
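The power-off rule itself is simple enough to state compactly. The Python fragment below restates the two conditions; the predicate names are invented for illustration, and the probe protocol that determines how long to stay powered off is not reproduced here.

# Illustrative restatement of the PAMAS power-off conditions.
# All argument names are hypothetical; they simply encode the two cases
# described in the text.

def should_power_off(has_packets_to_send, neighbor_transmitting_to_other,
                     neighbor_pair_communicating):
    # Case (i): nothing to send, and an overheard transmission is not for us.
    if not has_packets_to_send and neighbor_transmitting_to_other:
        return True
    # Case (ii): we do have packets, but the data channel is already in use
    # by at least one neighboring transmitter-receiver pair.
    if has_packets_to_send and neighbor_pair_communicating:
        return True
    return False

if __name__ == "__main__":
    print(should_power_off(False, True, False))   # True: case (i)
    print(should_power_off(True, False, True))    # True: case (ii)
    print(should_power_off(True, False, False))   # False: free to contend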
Theoretical bounds on power savings for random, line, and fully connected
topologies are also presented. The results from simulation and
analysis show that between 10% to 70% power savings can be achieved
for fully connected topologies.
5. LLC Sublayer
In this section, we focus on the error control functionality of the logical
link control (LLC) sublayer. The two most common techniques used for
error control are Automatic Repeat Request (ARQ) and Forward Error
Correction (FEC). Both ARQ and FEC error control methods waste
network bandwidth and consume power resources due to retransmission
of data packets and greater overhead necessary in error correction. Care
must be exercised while adopting these techniques over a wireless link
where the error rates are high due to noise, fading signals, and disconnections
caused by mobility. A balance needs to be maintained within
this layer between competing measures for enhancing throughput, reli-
ability, security, and energy efficiency. For example, channel encoding
schemes for enhancing channel quality tend to reduce the throughput
as more redundancy is added to the transmitted information. Also,
increasing transmitted power to improve the channel to interference
ratio depletes battery energy.
Recent research has addressed low-power error control and several
energy efficient link layer protocols have been proposed. Three such
protocols are described below.
5.1. Adaptive Error Control with ARQ
An ARQ strategy that includes an adaptive error control protocol is
presented and studied in (Zorzi and Rao, 1997a, Zorzi and Rao, 1997b).
First, though, the authors propose a new design metric for protocols
developed specifically for the wireless environment and three guidelines
in designing link layer protocols to be more power conserving. The new
design metric introduced in (Zorzi and Rao, 1997a) is the energy
efficiency of a protocol, which is defined as the ratio between total amount
of data delivered and total energy consumed. Therefore, as more data
is successfully transmitted for a given amount of energy consumption,
the energy efficiency of the protocol increases.
The following guidelines in developing a protocol should be considered
in order to maximize the energy efficiency of the protocol.
1. Avoid persistence in retransmitting data.
2. Trade off number of retransmission attempts for probability of
successful transmission.
3. Inhibit transmission when channel conditions are poor.
The energy efficient protocol proposed in (Zorzi and Rao, 1997a,
Zorzi and Rao, 1997b) incorporates a probing protocol that slows down
data transmission when degraded channel conditions are encountered.
The ARQ protocol works as normal until the transmitter detects an
error in either the data or control channel due to the lack of a received
acknowledgement (ACK). At this time the protocol enters a probing
mode in which a probing packet is transmitted every t slots. The probe
packet contains only a header with little or no payload and therefore
consumes a smaller amount of energy. This mode is continued until
a properly received ACK is encountered, indicating the recovered status
of both channels. The protocol then returns to normal mode and
continues data transmission from the point at which it was interrupted.
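A toy simulation of this behaviour is sketched below, assuming a two-state (good/bad) Markov channel. The probe interval t, the per-packet and per-probe energy costs, and the channel parameters are made-up values chosen only to show how probing trades throughput for energy efficiency; they are not taken from (Zorzi and Rao, 1997a).

import random

# Toy simulation of an ARQ sender that switches to probing mode after a
# missing ACK. Energy numbers and channel model are illustrative only.

def simulate(slots=10000, t=4, p_good_to_bad=0.02, p_bad_to_good=0.1,
             e_data=1.0, e_probe=0.1, seed=1):
    rng = random.Random(seed)
    good, probing, wait = True, False, 0
    delivered, energy = 0, 0.0
    for _ in range(slots):
        # Two-state Markov channel update.
        good = (rng.random() > p_good_to_bad) if good else (rng.random() < p_bad_to_good)
        if probing:
            if wait > 0:
                wait -= 1                # stay idle between probes
                continue
            energy += e_probe            # send a short header-only probe
            if good:
                probing = False          # ACK received: resume normal mode
            else:
                wait = t - 1             # probe again after t slots
        else:
            energy += e_data             # send a full data packet
            if good:
                delivered += 1
            else:
                probing, wait = True, 0  # missing ACK: enter probing mode
    return delivered, energy, delivered / energy

if __name__ == "__main__":
    print("packets per energy unit: %.3f" % simulate()[2])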
Using a Markov model based analysis and a recursive technique, the
probing protocol is compared to traditional ARQ schemes, and
the tradeoff between performance and energy efficiency is investigated.
The results show that under slow fading channel conditions the proposed
protocol is superior to that of standard ARQ in terms of energy
efficiency, increasing the total number of packets that can be trans-
mitted. The analysis also demonstrates that an optimal transmission
power with respect to energy efficiency exists. Using a high transmission
power to maximize the probability of a successful transmission may
not be the best strategy. Although decreasing the transmission power
results in an increased number of transmission attempts, it may be
more efficient than attempting to maximize the throughput per slot.
The conclusion reached is that although throughput is not necessarily
maximized, the energy efficiency of a protocol may be maximized by
decreasing the number of transmission attempts and/or transmission
power in the wireless environment.
5.2. Adaptive Error Control with ARQ/FEC Combination
The above error control scheme included only ARQ strategies. However,
the energy efficient error control scheme proposed in (Lettieri et al., 1997)
combines ARQ and FEC strategies. The authors describe an error
control architecture for the wireless link in which each packet stream
maintains its own time-adaptive customized error control scheme based
on certain set up parameters and a channel model estimated at run-
time. The idea behind this protocol is that there exists no energy
efficient "one-size-fits-all" error control scheme for all traffic types and
channel conditions. Therefore, error control schemes should be customized
to traffic requirements and channel conditions in order to
obtain greater energy savings for each wireless connection.
The dynamic error control protocol described in (Lettieri et al., 1997)
operates as follows. Service quality parameters, such as packet size and
QoS requirements, used by the MAC sublayer and packet scheduler
are associated with each data stream. These parameters are further
used to select an appropriately customized combination of an ARQ
scheme (Go-Back-N, Cumulative Acknowledgement (CACK), Selective
Acknowledgement, etc.) and an FEC scheme. In order to keep
energy consumption at a minimum, the error control scheme associated
with each stream may need to be modified as channel conditions change
over time. Studies based on analysis and simulation under different scenarios
were presented as a guideline in choosing an error control scheme
to achieve low energy consumption while trading off among
channel conditions, traffic types, and packet sizes. The authors extend
their research in (Lettieri and Srivastava, 1998) to include a protocol
for dynamically sizing the MAC layer frame, depending upon wireless
channel conditions.
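As a sketch of the idea, rather than of the actual decision tables in (Lettieri et al., 1997), the function below picks a per-stream error control combination from the stream's traffic class and an estimate of the channel bit error rate. The thresholds and the particular ARQ/FEC pairings are assumptions made purely for illustration.

# Hypothetical per-stream error-control selector in the spirit of the
# time-adaptive scheme described above. Thresholds and pairings are
# assumptions, not values from the cited work.

def select_error_control(traffic_class, estimated_ber):
    """Return (arq_scheme, fec_overhead_fraction) for one packet stream."""
    if traffic_class == "realtime":
        # Retransmissions add delay, so lean on FEC; drop late packets.
        return ("none", 0.25 if estimated_ber > 1e-4 else 0.10)
    if estimated_ber < 1e-6:
        return ("go-back-n", 0.0)          # clean channel: plain ARQ
    if estimated_ber < 1e-4:
        return ("selective-ack", 0.05)     # moderate errors: SACK + light FEC
    return ("selective-ack", 0.20)         # bad channel: stronger FEC

if __name__ == "__main__":
    for cls, ber in [("bulk", 1e-7), ("bulk", 5e-5), ("realtime", 3e-4)]:
        print(cls, ber, select_error_control(cls, ber))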
5.3. Adaptive Power Control and Coding Scheme
Finally, a dynamic power control and coding protocol for optimizing
throughput, channel quality, and battery life is studied in (Agrawal et al., 1996,
Narendran et al., 1997). This distributed algorithm, in which each mobile
determines its own operating point with respect to power and error
control parameters, maintains the goal of minimizing power utilization
and maximizing capacity in terms of the number of simultaneous con-
nections. Power control, as dened by the authors, is the technique
of controlling the transmit power so as to aect receiver power, and
ultimately the carrier-to-interference ratio (CIR).
The energy efficient power control and coding protocol operates in
the following manner. Each transmitter operates at a power-code pair
in which the power level lies between a specified minimum and maximum
and the error code is chosen from a finite set. The algorithm is
iterative in nature with the transmitter and receiver, at each iteration,
cooperatively evaluating channel performance and determining if an
adjustment in the power-code pair is necessary. The time between each
iteration is referred to as a timeframe. After each timeframe the receiver
involved in the data transmission evaluates the channel performance by
checking the word error rate (WER). If the WER lies within an acceptable
range, the power-code pair is retained; otherwise a new power-code
pair is computed by the transmitter. The basic frame of the algorithm
can be modified such that optimal levels of control overhead and channel
quality are traded off. Also, variations of the base algorithm include
the evaluation of the average WER, rather than the instantaneous
WER in each timeframe, in determining channel quality and in the
evaluation of anticipated channel performance. The latter adaptation of
the algorithm attempts to predict changes in error rates due to mobility
by sampling the received powers and extrapolating the values to the
next timeframe. If the predicted WER is not within acceptable ranges,
then the power-code pair is adapted to avoid unsatisfactory channel
conditions.
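The per-timeframe iteration can be summarized with the skeleton below: after each timeframe the transmitter either keeps or adjusts its power-code pair based on the reported WER. The acceptable WER window, the power step, and the code set are illustrative assumptions rather than the parameters of the cited algorithm.

# Skeleton of the per-timeframe power-code adaptation described above.
# The acceptable WER window, power step, and code list are assumptions.

CODES = [0, 1, 2]            # indices into a finite set of FEC codes, weakest first
P_MIN, P_MAX, P_STEP = 1.0, 100.0, 5.0

def adapt(power, code, observed_wer, wer_low=0.001, wer_high=0.01):
    """Return the (power, code) pair to use in the next timeframe."""
    if wer_low <= observed_wer <= wer_high:
        return power, code                       # channel acceptable: keep pair
    if observed_wer > wer_high:                  # too many word errors
        if code < CODES[-1]:
            return power, code + 1               # first try a stronger code
        return min(power + P_STEP, P_MAX), code  # then raise transmit power
    # WER below the window: spend less energy / create less interference.
    if power > P_MIN:
        return max(power - P_STEP, P_MIN), code
    return power, max(code - 1, CODES[0])

if __name__ == "__main__":
    pair = (50.0, 1)
    for wer in [0.05, 0.05, 0.002, 0.0001]:
        pair = adapt(*pair, wer)
        print(wer, "->", pair)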
A study of the dynamic power control and coding protocol was performed
through simulation of a cellular system with roaming mobiles.
Simulation results indicate that the proposed dynamic power control
and coding protocol supports better quality channels as compared to
schemes that use fixed codes; therefore power-control alone does not
perform as well as an adaptive power-control/FEC protocol.
The next section discusses energy efficient routing protocols within
the network layer.
6. Network Layer
The main functions of the network layer are routing packets and congestion
control. In wireless mobile networks, the network layer has the
added functionality of routing under mobility constraints and mobility
management including user location, update, etc. In this section, we
present energy efficient routing algorithms developed for wireless ad
hoc networks. Energy efficient routing does not apply to infrastructure
networks because all traffic is routed through the base station.
As mentioned earlier, in ad hoc networks the mobiles cooperate to
maintain topology information and use multi-hop packet routing. The
problem of routing is complicated due to user mobility resulting in
frequently changed network topologies. The rate of topology change
depends on many factors including user mobility speeds and terrain
characteristics. Typical routing algorithms for ad hoc networks consider
two different approaches:
1. Use frequent topology updates resulting in improved routing, but
increased update messages consume precious bandwidth.
2. Use infrequent topology updates resulting in decreased update mes-
sages, but inefficient routing and occasionally missed packets result.
Typical metrics used to evaluate ad hoc routing protocols are shortest-
hop, shortest-delay, and locality stability (Woo et al., 1998). However,
these metrics may have a negative effect in wireless networks because
they result in the overuse of energy resources of a small set of mobiles,
decreasing mobile and network life. For example, consider the wireless
network in Figure 5. Using shortest-hop routing, traffic from mobile A
to mobile D will always be routed through mobile E, which will drain
the energy reserves of E faster. If mobile E's battery becomes fully
drained, then mobile F is disconnected from the network and communication
to and from F is no longer possible. By using a routing algorithm
that takes into account such issues, traffic from A to D may not always
be routed through E, but through mobiles B and C, extending network
life. Consequently, it is essential to consider routing algorithms from
an energy efficient perspective, in addition to traditional metrics. Such
research is described in the following paragraphs.
6.1. Unicast Traffic
Unicast traffic is defined as traffic in which packets are destined for
a single receiver. In (Woo et al., 1998), routing of unicast traffic is
addressed with respect to battery power consumption. The authors'
research focuses on designing protocols to reduce energy consumption
and to increase the life of each mobile, increasing network life as well.
To achieve this, five different metrics are defined from which to study
the performance of power-aware routing protocols.
Energy consumed per packet: It is easy to observe that if energy consumed
per packet is minimized then the total energy consumed is also
minimized. Under light loads, this metric will most likely result in the
shortest-hop path. As network load increases, this is not necessarily
the case because the metric will tend to route packets around areas of
congestion in the network.
Time to network partition: Given a network topology, a minimal set
of mobiles exists such that their removal will cause the network to
partition. Routes between the two partitions must go through one of
the "critical" mobiles; therefore a routing algorithm should divide the
work among these mobiles in such a way that the mobiles drain their
power at equal rates.
Variance in power levels across mobiles: The idea behind this metric is
that all mobiles in a network operate at the same priority level. In this
way, all mobiles are equal and no one mobile is penalized or privileged
over any other. This metric ensures that all mobiles in the network
remain powered-on together for as long as possible.
Cost per packet: In order to maximize the life of all mobiles in the
network, metrics other than energy consumed per packet need to be
used. When using these metrics, routes should be created such that
mobiles with depleted energy reserves do not lie on many routes. To-
gether, these metrics become the "cost" of a packet, which needs to be
minimized.
Maximum mobile cost: This metric attempts to minimize the cost experienced
by a mobile when routing a packet through it. By minimizing
the cost per mobile, significant reductions in the maximum mobile cost
result. Also, mobile failure is delayed and variance in mobile power
levels is reduced due to this metric.
In order to conserve energy, the goal is to minimize all the metrics
except for the second which should be maximized. As a result,
a shortest-hop routing protocol may no longer be applicable; rather,
a shortest-cost routing protocol with respect to the five energy
efficiency metrics would be pertinent. For example, a cost function may
be adapted to accurately reflect a battery's remaining lifetime. The
premise behind this approach is that although packets may be routed
through longer paths, the paths contain mobiles that have greater
amounts of energy reserves. Also, energy can be conserved by routing
traffic through lightly loaded mobiles because the energy expended in
contention and retransmission is minimized.
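One minimal way to realize such a shortest-cost route is to run Dijkstra's algorithm with a node cost that grows as a mobile's battery drains. The reciprocal-of-remaining-energy cost used below is only one plausible choice and is not the specific cost function proposed in (Woo et al., 1998).

import heapq

# Power-aware shortest-cost routing sketch: Dijkstra over a node-cost graph.
# The per-node cost 1/remaining_energy is an illustrative choice.

def node_cost(remaining_energy):
    return 1.0 / max(remaining_energy, 1e-9)   # cheap when full, costly when drained

def shortest_cost_path(adjacency, energy, src, dst):
    """adjacency: {node: [neighbor, ...]}, energy: {node: joules remaining}.
    Assumes dst is reachable from src."""
    dist, prev = {src: 0.0}, {}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v in adjacency[u]:
            nd = d + node_cost(energy[v])       # pay for relaying through v
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    path, node = [], dst
    while node != src:
        path.append(node)
        node = prev[node]
    return [src] + path[::-1]

if __name__ == "__main__":
    adj = {"A": ["E", "B"], "B": ["A", "C"], "C": ["B", "D"],
           "D": ["C", "E"], "E": ["A", "D", "F"], "F": ["E"]}
    joules = {"A": 9, "B": 8, "C": 8, "D": 9, "E": 1, "F": 9}
    print(shortest_cost_path(adj, joules, "A", "D"))

With the energy values shown, traffic from A to D is routed around the nearly drained mobile E and through B and C instead, as in the example of Figure 5.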
The properties of the power-aware metrics and the effect of the
metrics on end-to-end delay are studied in (Woo et al., 1998) using
simulation. A comparison of shortest-hop routing and the power-aware
shortest-cost routing schemes was conducted. The performance measures
were delay, average cost per packet, and average maximum node
cost. Results show that usage of power-aware metrics results in no extra
delay over the traditional shortest-hop metric. This is true because
congested paths are often avoided. However, there was significant improvement
in average cost per packet and average maximum mobile
cost, in which the cost is in terms of the energy efficient metrics defined
above. The improvements were substantial for large networks and
heavily-loaded networks. Therefore, by adjusting routing parameters
a more energy efficient routing scheme may be utilized for wireless
networks.
The above approach to routing in wireless ad hoc networks requires,
at the least, that every mobile have knowledge of the locations of every
other mobile and the links between them. This creates significant
communication overhead and increased delay. Research completed in
(Stojmenovic and Lin, 2000) addresses this issue by proposing localized
routing algorithms which depend only on information about the source
location, the location of neighbors, and location of the destination.
This information is collected through GPS receivers which are included
within each mobile. Therefore, excessive network communication is not
required which, the authors report, more than makes up for the extra
energy consumed by the GPS units.
A new power-cost metric incorporating both a mobile's lifetime and
distance based power metrics is proposed, and using the newly defined
metric, three power-aware localized routing algorithms are developed:
power, cost, and power-cost. The power algorithm attempts to minimize
the total amount of power utilized when transmitting a packet,
whereas the cost algorithm avoids mobiles that maintain low battery
reserves in order to extend the network lifetime. Finally, the power-cost
routing algorithm is a combination of the two algorithms. Experiments
validated the performance of these routing algorithms.
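A localized rule of this kind needs only the positions of the current node, its neighbors, and the destination. The greedy next-hop selection below combines a distance-based transmit-power estimate with a battery-based penalty in the spirit of the power-cost algorithm; the path-loss exponent, the cost form, and the weighting are assumptions made for this sketch rather than the metric defined in (Stojmenovic and Lin, 2000).

import math

# Localized power-cost forwarding sketch. Each node knows only its own
# position, its neighbors' positions/battery levels, and the destination
# position (e.g., via GPS). Constants below are illustrative.

ALPHA = 4.0            # assumed path-loss exponent for the transmit-power model

def tx_power(a, b):
    return math.dist(a, b) ** ALPHA           # modeled energy to reach b from a

def power_cost(u_pos, v_pos, v_battery, dest_pos, w=1.0):
    # Energy to reach the neighbor plus a penalty for its low battery,
    # normalized by how much closer it brings the packet to the destination.
    progress = max(math.dist(u_pos, dest_pos) - math.dist(v_pos, dest_pos), 1e-9)
    return (tx_power(u_pos, v_pos) + w / v_battery) / progress

def next_hop(u_pos, neighbors, dest_pos):
    """neighbors: {name: (position, battery_fraction)}; returns the chosen relay."""
    return min(neighbors,
               key=lambda n: power_cost(u_pos, neighbors[n][0],
                                        neighbors[n][1], dest_pos))

if __name__ == "__main__":
    me, dest = (0.0, 0.0), (10.0, 0.0)
    nbrs = {"near-full": ((3.0, 0.0), 0.9), "far-low": ((6.0, 0.0), 0.1)}
    print(next_hop(me, nbrs, dest))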
6.2. Broadcast Traffic
In this section broadcast traffic, which is defined as traffic in which
packets are destined for all mobiles in the system, is considered. With
a single transmission, a mobile is able to broadcast a packet to all
immediate neighbors. However, each mobile needs to receive a packet
only once. Intermediate mobiles are required to retransmit the packet.
The key idea in conserving energy is to allow each mobile's radio to
turn off after receiving a packet if its neighbors have already received a
copy of the packet. Addressed in (Singh et al., 1999) is the routing of
broadcast traffic in terms of power consumption.
The broadcast technique used in traditional networks is a simple
flooding algorithm. This algorithm gathers no global topology infor-
mation, requires little control overhead, and completes the broadcast
with a minimum number of hops. However, the flooding algorithm is not
suitable for wireless networks because many intermediate nodes must
retransmit packets needlessly, which leads to excessive power consump-
tion. Therefore, the authors of (Singh et al., 1999) propose that it is
more beneficial to spend some energy in gathering topology information
in order to determine the most efficient broadcast tree.
In order to increase mobile and network life, any broadcast algorithm
used in the wireless environment should focus on conserving energy
and sharing the cost of routing among all mobiles in the system. One
way to conserve power is by ensuring that a transmission reaches as
many new nodes as possible. A broadcast tree approach is presented
in (Singh et al., 1999), in which the tree is constructed starting from
a source and expanding to the neighbor that has the lowest cost per
outgoing degree, where the cost associated with each mobile increases
as the mobile consumes more power. Therefore, priority for routing
packets through the broadcast tree is given to nodes that have consumed
lower amounts of power and nodes that have more neighbors
which have not already received the data transmission. Since mobile
costs continuously change, broadcast transmissions originating from
the same source may traverse different trees, as they are determined
based on current costs of nodes.
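The greedy expansion can be sketched as follows: starting from the source, the tree repeatedly absorbs the frontier node with the lowest cost per newly covered neighbor. The cost values and the tie-breaking are illustrative assumptions rather than the exact rule of (Singh et al., 1999).

# Greedy power-aware broadcast tree sketch. node_cost[v] is assumed to grow
# with the energy v has already consumed; the selection ratio is cost per
# not-yet-covered neighbor, as described in the text.

def broadcast_tree(adjacency, node_cost, source):
    covered = {source} | set(adjacency[source])      # reached by source's transmission
    in_tree = {source}
    parent = {v: source for v in adjacency[source]}  # child -> relaying parent
    while covered != set(adjacency):
        best, best_ratio = None, float("inf")
        for v in covered - in_tree:                  # frontier: heard it, not relayed yet
            new = set(adjacency[v]) - covered
            if not new:
                continue
            ratio = node_cost[v] / len(new)          # cost per newly reached node
            if ratio < best_ratio:
                best, best_ratio, best_new = v, ratio, new
        if best is None:
            break                                    # remaining nodes unreachable
        in_tree.add(best)
        covered |= best_new
        parent.update({u: best for u in best_new})
    return parent

if __name__ == "__main__":
    adj = {"S": ["A", "B"], "A": ["S", "C"], "B": ["S", "C", "D"],
           "C": ["A", "B"], "D": ["B"]}
    cost = {"S": 1.0, "A": 3.0, "B": 1.5, "C": 1.0, "D": 1.0}
    print(broadcast_tree(adj, cost, "S"))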
Simulations were conducted in order to study the performance of the
proposed power-aware broadcast protocol as compared to traditional
flooding in terms of energy savings as well as delay. Averaged over
a period of time, the power-aware protocol demonstrates very little
difference in broadcast delay. However, results indicate that savings
in energy consumption of 20% or better are possible using the power-aware
broadcast algorithm, with greater savings in larger networks and
networks with increased traffic loads.
The construction of energy efficient broadcast and multicast trees for
the wireless environment is also studied in (Wieselthier et al., 2000).
The authors state that mobiles may experience greater energy conservation
if routing decisions are combined with decisions concerning
transmission power levels. An algorithm is presented for determining
the minimum-energy source-based tree for each broadcast/multicast
session request. The algorithm is based on the concept that there exists
an optimal point in the trade-off between reaching a greater number of
mobiles in a single hop by using higher transmission power versus reaching
fewer mobiles but using lower power levels. Performance results
demonstrate that the combination of routing and transmission power
decisions provide greater energy conservation. A similar idea concerning
the incorporation of transmission power levels into routing algorithms
is also presented for unicast traffic in (Chang and Tassiulas, 2000).
In (Feeney, 1999a), a simulation based comparison of energy consumption
for two ad hoc routing protocols - Dynamic Source Routing
(DSR) (Johnson et al., 2000) and Ad hoc On Demand Vector routing
(AODV) (Perkins et al., 2000) protocols is presented. The analysis
considers the cost for sending and receiving traffic, for dropped pack-
ets, and for routing overhead packets. User mobility is modeled in the
analysis. The observations indicate that energy spent on receiving and
discarding packets can be significant. Also, the costs of flooding-based
broadcast traffic and MAC control were seen to be significant. For DSR,
results show that the cost of source routing headers was not very high,
but operating the receiver in promiscuous mode for caching and route
response purposes resulted in high power consumption. Results also
indicate that since AODV generates broadcast traffic more often, the
energy cost is high given that broadcast traffic consumes more energy.
Refer to (Feeney, 1999a) for more detailed results.
The next section presents work related to improving transport protocol
performance in the wireless environment.
7. Transport Layer
The transport layer provides a reliable end-to-end data delivery service
to applications running at the end points of a network. The most
commonly used transport protocol for wired networks, where underlying
physical links are fairly reliable and packet loss is random in
nature, is the Transmission Control Protocol (TCP) (Postel, 1981).
However, due to inherent wireless link properties, the performance
of traditional transport protocols such as TCP degrades significantly
over a wireless link. TCP and similar transport protocols resort to
a larger number of retransmissions and frequently invoke congestion
control measures, confusing wireless link errors and loss due to handoff
with channel congestion. This can significantly reduce throughput and
introduce unacceptable delays (Caceres and Iftode, 1995). As stated
earlier, increased retransmissions unnecessarily consume battery energy
and limited bandwidth.
Recently, various schemes have been proposed to alleviate the effects
of non congestion-related losses on TCP performance over networks
with wireless links. These schemes, which attempt to reduce retrans-
missions, are classified into three basic groups: (i) split connection
protocols, (ii) link-layer protocols, and (iii) end-to-end protocols.
Split connection protocols completely hide the wireless link from
the wired network by terminating the TCP connections at the base
station as shown in Figure 6. This is accomplished by splitting each
connection between the source and destination into two separate
connections at the base station. The result is one TCP connection
between the wired network and the base station and a second TCP connection
between the base station and the mobile. The second connection
over the wireless link may use modified versions of TCP that enhance
performance over the wireless channel. Examples of split connection
protocols include Indirect-TCP (Bakre and Badrinath, 1995), Berkeley
Snoop Module (Balakrishnan et al., 1995), and M-TCP (Brown and Singh, 1997).
Figure 7 depicts the link layer approach which attempts to hide
link related losses from the TCP source by using a combination of
local retransmissions and forward error correction over the wireless
link. Local retransmissions use techniques that are tuned to the characteristics
of the wireless channel to provide a significant increase in
performance. One example of a link layer protocol is the AIRMAIL
protocol (Ayanoglu et al., 1995), which employs a combination of both
FEC and ARQ techniques for loss recovery.
Finally, end-to-end protocols include modified versions of TCP that
are more sensitive to the wireless environment. End-to-end protocols
require that a TCP source handle losses through the use of such mecha-
nisms as selective acknowledgements and explicit loss notification (ELN).
Selective acknowledgements allow the TCP source to recover from multiple
packet losses, and ELN mechanisms aid the TCP source in distinguishing
between congestion and other forms of loss.
7.1. Energy Consumption Analysis of TCP
The protocols described previously generally achieve higher throughput
rates over the wireless channel than standard TCP because the
protocols are better able to adapt to the dynamic mobile environ-
ment. However, the performance of a particular protocol is largely
dependent upon various factors such as mobility handling, amount of
overhead costs incurred, frequency and handling of disconnections, etc.
Therefore, performance and energy conservation may range widely for
these protocols depending upon both internal algorithm and external
environmental factors. Although the above protocols, along with many
others proposed in research, have addressed the unique needs of designing
transport protocols in the wireless environment which may or may
not lead to greater energy efficiency, they have not directly addressed
the idea of a low-power transport protocol.
The energy consumption of Tahoe, Reno, and New Reno versions
of TCP is analyzed in (Zorzi and Rao, 2000). Energy consumption is
the main parameter studied with the objective of measuring the effect
of TCP transmission policies on energy performance. The energy
efficiency of a protocol is defined as the average number of successful
transmissions per energy unit, which can be computed as the average
number of successes per transmission attempt. A two-state Markov
packet error process is used in the performance evaluation of a single
transceiver running the various versions of TCP on a dedicated
wireless link. Results of the study demonstrate that error correlation
significantly affects the energy performance of TCP and that congestion
control algorithms of TCP actually allow for greater energy savings by
backing off and waiting during error bursts. It is also seen that energy
efficiency may be quite sensitive to the version of TCP implemented
and the choice of protocol parameters.
The same versions of TCP were studied in (Tsaoussidis et al., 2000a)
in terms of energy/throughput tradeoffs. Simulation results show that
no single TCP version is most appropriate within wired/wireless heterogeneous
networks, and that the key to balancing energy and throughput
performance is through the error control mechanism. Using these
results, the authors propose a modified version of TCP, referred to as
TCP-Probing, in (Tsaoussidis and Badr, 2000).
In TCP-Probing, data transmission is suspended and a probe cycle
is initiated when a data segment is delayed or lost, rather than immediately
invoking congestion control. A probe cycle consists of an exchange
of probe segments between sender and receiver. Probe segments are implemented
as extensions to the TCP header and carry no payload. The
sender monitors the network through the probe cycle which terminates
when two consecutive round-trip-times (RTT) are successfully
measured. The sender invokes standard TCP congestion control if persistent
error conditions are detected. However, if monitored conditions
indicate transient random error, then the sender resumes transmission
according to available network bandwidth. Simulation results provided
in (Tsaoussidis and Badr, 2000) indicate that TCP-Probing achieves
higher throughput rates while consuming less energy. Therefore, the
authors believe that TCP-Probing provides a universal error control
mechanism for heterogeneous wired/wireless networks. The authors also
present in (Tsaoussidis et al., 2000b) an experimental transport proto-
col, called Wave and Wait Protocol (WWP), developed specically for
a wireless environment with limited power.
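The decision logic of the TCP-Probing probe cycle can be summarized as the small routine below. The probe format, the requirement of two consecutive RTT measurements, and especially the threshold used to separate persistent from transient error conditions are simplifications assumed for the sketch, not details taken from (Tsaoussidis and Badr, 2000).

# Simplified TCP-Probing sender logic: on a delayed or lost segment, suspend
# data and exchange probe segments until two consecutive RTTs are measured,
# then decide between congestion control and immediate resumption.
# The persistence test below is an assumption made for illustration.

def probe_cycle(measure_rtt, baseline_rtt, max_probes=16):
    """measure_rtt() returns an RTT in seconds, or None if the probe was lost."""
    rtts, probes = [], 0
    while len(rtts) < 2 and probes < max_probes:
        probes += 1
        rtt = measure_rtt()                        # send header-only probe, await echo
        rtts = rtts + [rtt] if rtt is not None else []   # need two consecutive RTTs
    if len(rtts) < 2 or min(rtts) > 3 * baseline_rtt:
        return "invoke_congestion_control"         # persistent error or congestion
    return "resume_at_available_bandwidth"         # transient random loss

if __name__ == "__main__":
    samples = iter([None, 0.12, 0.11])             # one lost probe, then two RTTs
    print(probe_cycle(lambda: next(samples), baseline_rtt=0.10))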
8. OS/Middleware and Application Layers
This section addresses research completed at the OS/middleware and
application layers with respect to energy efficiency.
8.1. OS/Middleware
One important advantage of integrating wireless communication with
computing is that it facilitates user mobility and connectivity to the
network. Mobility, directly or indirectly, impacts the design of operating
systems, middleware, file systems, and databases. It also presents a
new set of challenges that result from power constraints and voluntary
disconnections. To be consistent with fixed counterparts like PCs and
workstations, mobile computers need to process multimedia informa-
tion. However, such processing is expensive in terms of both bandwidth
and battery power. In general, the majority of the techniques used in
the design of today's applications to conserve bandwidth also conserve
battery life.
The main function of an operating system is to manage access to
physical resources like CPU, memory, and disk space from the applications
running on the host. To reduce power dissipation, CPUs used
in the design of portable devices can be operated at lower speeds by
scaling down the supply voltage (Chandrakasan and Brodersen, 1995).
Due to the quadratic relationship between power and supply voltage,
halving the supply voltage results in one fourth of the power being
consumed. To maintain the same throughput, the reduction in circuit
speed can be compensated by architectural techniques like pipelining
and parallelism. These techniques increase throughput resulting in an
energy efficient system operating at a lower voltage but with the same
throughput. The operating system is active in relating scheduling and
delay to speed changes.
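The quadratic dependence can be made concrete with the standard first-order CMOS dynamic-power expression P = C_eff * V^2 * f. The short calculation below holds the clock frequency fixed to isolate the voltage effect; the capacitance and operating point are arbitrary illustrative values.

# First-order CMOS dynamic power model: P = C_eff * V^2 * f.
# With the clock frequency held fixed, halving V gives one fourth the power,
# matching the quadratic relationship described above.

def dynamic_power(c_eff, v_dd, freq):
    return c_eff * v_dd ** 2 * freq

if __name__ == "__main__":
    c_eff, v_dd, freq = 20e-12, 3.3, 100e6       # illustrative numbers
    p_full = dynamic_power(c_eff, v_dd, freq)
    p_half_v = dynamic_power(c_eff, v_dd / 2, freq)
    print("power ratio at half V, same f:", p_half_v / p_full)   # 0.25
    # In practice lowering V also lowers the achievable f; pipelining and
    # parallelism are then used to recover the lost throughput.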
Another technique of power management at this layer is predictive
shutdown (Chandrakasan and Brodersen, 1995). This method exploits
the event driven nature of computing in that sporadic computation
activity is triggered by external events and separated by periods of
inactivity. A straightforward means of reducing average energy consumption
is to shut down the system during periods of inactivity. How-
ever, preserving the latency and throughput of applications requires
intelligent activity-based predictive shutdown strategies.
In (Lebeck et al., 2000), a study of different page placement algorithms
that exploit the new power management features of memory
technology is presented. The study considers DRAM chips that support
different power modes: active, standby, nap and powerdown. Trace-driven
and execution-driven simulations show that improvements of 6%
to 55% in the Energy Delay metric are obtained using power-aware
page allocation mechanisms that operate in conjunction with hardware
policies.
CPU scheduling techniques that attempt to minimize power consumption
are presented in (Weiser et al., 1994, Lorch and Smith, 1997).
The impact of software architecture on power consumption is studied
in (Tiwari et al., 1994, Mehta et al., 1997).
8.2. Application Layer
The application layer in a wireless system is responsible for such things
as partitioning of tasks between the fixed and mobile hosts, audio and
video source encoding/decoding, and context adaptation in a mobile
environment. Energy efficiency at the application layer is becoming an
important area of research as is indicated by industry. APIs such as Advanced
Configuration and Power Interface (Intel Corporation, Microsoft and Toshiba Corporation, 2000)
and power management analysis tools such as Power Monitor (Intel Corporation, 2000)
are being developed to assist software developers in creating programs
that are more power conserving. Another power management tool developed
at Carnegie Mellon University is PowerScope (Flinn and Satyanarayanan, 1999).
PowerScope maps energy consumption to program structure, producing
a profile of energy usage by process and procedure. The authors
report a 46% reduction in energy consumption of an adaptive video
playing application by taking advantage of the information provided
by PowerScope. This section summarizes some of the research being
conducted at the application layer with respect to power conservation.
Load Partitioning: Challenged by power and bandwidth constraints,
applications may be selectively partitioned between the mobile and base
station (Weiser et al., 1994, Narayanaswamy et al., 1996). Thus, most
of the power intensive computations of an application are executed at
the base station, and the mobile host plays the role of an intelligent terminal
for displaying and acquiring multimedia data (Narayanaswamy et al., 1996).
Proxies: Another means of managing energy and bandwidth for applications
on mobile clients is to use proxies. Proxies are middleware that
automatically adapt the applications to changes in battery power and
bandwidth. A simple example of proxy usage during multimedia transmissions
in a low-power or low bandwidth environment is to suppress
video and permit only audio streams. Another example is to direct a
file to be printed at the nearest printer when the host is mobile. Proxies
are either on the mobile or base station side of the wireless link.
Databases: Impact of power efficiency on database systems is considered
by some researchers. For example, energy efficiency in database design
by minimizing power consumed per transaction through embedded indexing
has been addressed in (Imielinski et al., 1994). By embedding
the directory in the form of an index, the mobile only needs to become
active when data of interest is being broadcast (the system architecture
consists of a single broadcast channel). When a mobile needs a piece of
information an initial probe is made into the broadcast channel. The
mobile is then able to determine the next occurrence of the required
index and enters probe wait mode while it waits for the index to be
broadcast. After receiving the index information relevant to the required
data, the mobile enters bcast wait mode while it waits for the
information to be broadcast. Access time is dened as the sum of the
two waiting periods, probe wait and bcast wait. The goal of the authors
is to provide methods to combine index information together with data
on the single broadcast channel in order to minimize access time. The
authors propose two such strategies which are further described in
(Imielinski et al., 1994). Also, energy efficient query optimization for
database systems is described in (Alonso and Ganguly, 1993).
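The access-time bookkeeping can be illustrated with a small calculation over one broadcast cycle in which the index is repeated m times. This (1, m)-style layout and the averaging below are a simplified stand-in for the interleaving strategies of (Imielinski et al., 1994), intended only to show how probe wait and bcast wait trade off against cycle length.

# Toy model of (1, m) indexing on a broadcast channel: a cycle of
# `data_buckets` buckets with the full index broadcast `m` times per cycle.
# Access time = probe wait (until next index) + bcast wait (until the data).
# The averaging below is a deliberate simplification for illustration.

def average_access_time(data_buckets, index_buckets, m):
    cycle = data_buckets + m * index_buckets
    segment = cycle / m                                # distance between index copies
    probe_wait = segment / 2                           # expected wait for the next index
    bcast_wait = index_buckets + data_buckets / 2      # read index, then wait for data
    return probe_wait + bcast_wait, cycle

if __name__ == "__main__":
    for m in (1, 2, 4, 8):
        t, cycle = average_access_time(data_buckets=1000, index_buckets=20, m=m)
        print("m=%d  cycle=%d buckets  avg access time=%.1f buckets" % (m, cycle, t))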
Video Processing: Multimedia processing and transmission require considerable
battery power as well as network bandwidth. This is especially
true for video processing and transmission. However, reducing the effec-
tive bit rate of video transmissions allows lightweight video encoding
and decoding techniques to be utilized thereby reducing power con-
sumption. Under severe bandwidth constraints or low-power situations,
video frames can even be carefully discarded before transmission while
maintaining tolerable video quality.
In (Agrawal et al., 1998), research on processing encoded video for
transmission under low battery power conditions is presented. The basic
idea of this work is to decrease the number of bits transmitted over the
wireless link in response to low-power situations. The challenge is to
accomplish this goal while preserving or minimally degrading the video
quality. Decreasing the number of transmitted bits reduces the energy
consumption due to reduced transmitter usage. In fact, several studies
have shown that transmission accounts for more than a third of the
energy consumption in video processing and exchange in a portable
device. The reduction in the number of bits can be achieved in one
of two ways: (i) reducing the number of bits in the compressed video
stream generated by the video encoder, and (ii) discarding selected
packets at the wireless network interface card (WNIC).
The first approach is possible only if two conditions are satisfied.
The portable device must be encoding a video stream as opposed to
transmitting a stored video, and the application must be able to modify
parameters in the video encoder. The second approach is possible if the
WNIC is flexible and sensitive to battery power conditions. Further
details on how the different encoding schemes affect the choice of discarding
may be found in (Agrawal et al., 1998). Also, a testbed implementing
this research was developed, and preliminary results reported
in (Mahadevan, 1999).
Power-aware video processing is an important and exciting topic.
There are several approaches for developing efficient encoding schemes
that will impact performance and energy consumption as in (Liu and Zarki, 1998,
Swann and Kingsbury, 1998). However, a complete discussion is not
presented due to space constraints.
9. Summary
As wireless services continue to add more capabilities such as multimedia
and QoS, low-power design remains one of the most important
research areas within wireless communication. Research must focus on
decreasing the amount of energy consumed by the wireless terminal.
Power conservation has typically been considered at the physical layer.
However, most of the energy savings at the physical layer have already
been achieved. Therefore, the key to energy conservation in wireless
communications lies within the higher levels of the wireless protocol
stack. This paper describes research completed at the data link, net-
work, transport, OS/middleware, and application protocol layers that
have addressed energy efficiency for wireless networks. However, power
conservation within the wireless protocol stack remains a very crucial
research area for the viability of wireless services in the future.
Acknowledgements
The authors wish to thank the reviewers and the Editor for their
valuable suggestions and comments that helped improve the paper;
and Stephanie Lindsey and Harini Krishnamurthy who assisted with
editing the paper.
Correspondence Information
Prof. Krishna Sivalingam
Boeing Associate Professor of Computer Science
School of Elect. Engg. & Computer Science
Building
Washington State University
Pullman,
Phone: 509 335 3220
Fax: 253-295-9458 (Please email to addr below after sending Fax)
Author Biographies:
Christine Jones received her M.S. in Computer Science from Washington
State University, Pullman in 2000 and her B.S. in Computer
Science from Whitworth College in 1998. She is presently with BBN
Technologies in Cambridge, MA, USA.
Photo for Jones is included as a JPEG file.
Krishna M. Sivalingam (ACM '93, IEEE SM '00 M '95) received
his Ph.D. and M.S. degrees in Computer Science from State University
of New York at Buffalo in 1994 and 1990 respectively. While at SUNY
Buffalo, he was a Presidential Fellow from 1988 to 1991. Prior to that,
he received the B.E. degree in Computer Science and Engineering in
1988 from Anna University, Madras, India. He is Boeing Associate Professor
of Computer Science in the School of Electrical Engineering and
Computer Science, at Washington State University, Pullman, where
he was an Assistant Professor from 1997 to 2000. Earlier, he was an
Assistant Professor at University of North Carolina Greensboro from
1994 until 1997. He has conducted research at Lucent Technologies'
Bell Labs in Murray Hill, NJ, and at AT&T Labs in Whippany, NJ.
His research interests include wireless networks, optical wavelength
division multiplexed networks, and performance evaluation. He has
served as a Guest Co-Editor for a special issue of the IEEE Journal
on Selected Areas in Communications on optical WDM networks. He
is co-recipient of the Best Paper Award at the IEEE International
Conference on Networks 2000 held in Singapore. He has published an
edited book on optical WDM networks in 2000. His work is supported
by AFOSR, Laboratory for Telecommunication Sciences, NSF, Cisco,
Bellcore, Alcatel, Intel, and Washington Technology Center. He holds
three patents in wireless networks and has published several papers
including journal publications. He has served on several conference
committees including ACM Mobicom 2001, Opticom 2000, ACM Mobicom
99, MASCOTS 99, and IEEE INFOCOM 1997. He is a Senior
Member of IEEE and a member of ACM. Email: krishna@eecs.wsu.edu
Photo of Krishna Sivalingam is enclosed in hard-copy.
Prathima Agrawal is Assistant Vice President of the Internet Architecture
Research Laboratory and Executive Director of the Computer
Networking Research Department at Telcordia Technologies (formerly
Bellcore), Morristown, NJ. She is also an adjunct Professor of Electrical
and Computer Engineering, Rutgers University, NJ. She worked
for 20 years in AT&T/Lucent Bell Laboratories in Murray Hill, NJ,
where she was Head of the Networked Computing Research Depart-
ment. Presently, she leads the ITSUMO joint research project between
Telcordia and Toshiba Corp. ITSUMO is a third generation wireless
access system for multimedia communication over end-to-end packet
networks. The ITSUMO project received the Telcordia CEO Award for
2000. Dr. Agrawal received her Ph.D. degree in Electrical Engineering
from the University of Southern California. Her research interests are
computer networks, mobile and wireless computing and communication
systems and parallel processing. She has published over 150 papers and
has received or applied for more than 50 U.S. patents. Dr. Agrawal is
a Fellow of the IEEE and a member of the ACM. She is the recipient
of an IEEE Third Millennium Medal in 2000. She chaired the IEEE
Fellow Selection Committee during 1998-2000.
Photo of Agrawal is enclosed as a JPEG file.
Jyh-Cheng Chen received the B.S. degree in information science
from Tunghai University, Taichung, Taiwan, in 1990, the M.S. degree in
computer engineering from Syracuse University, Syracuse, NY, in 1992,
and the Ph.D. degree in electrical engineering from the State University
of New York, Buffalo, in 1998.
Since August 1998, he has been a Research Scientist in Applied
Research at Telcordia Technologies, Morristown, NJ. At Telcordia, he
is one of the key architects and implementers of the ITSUMO (Inter-
net Technologies Supporting Universal Mobile Operation) project, in
which he has been working on QoS for mobile and wireless IP net-
works, IP-based base station design, SIP-based mobility management,
and multimedia applications, etc. He received the 2000 Telcordia CEO
Award that was intended to recognize and honor the Company's most
exceptional teams and individuals. While working toward his Ph.D.,
he also worked on energy efficient MAC protocols for wireless ATM
networks at AT&T Labs, Whippany, NJ, and non-destructive coating
thickness measurement at ASOMA-TCI Inc., N. Tonawanda, NY.
Dr. Chen is a member of IEEE and ACM.
Email: jcchen@research.telcordia.com
Photo of Chen is enclosed as a JPEG file.
--TR
Energy efficient indexing on air
Power analysis of embedded software
AIRMAIL: a link-layer protocol for wireless networks
Comparing algorithm for dynamic speed-setting of a low-power CPU
Design of a low power video decompression chip set for portable applications
Control and Energy Consumption in Communications for Nomadic Computing
Low power error control for wireless links
Techniques for low energy software
Scheduling techniques for reducing processor energy use in MacOS
Improving reliable transport and handoff performance in cellular wireless networks
Adaptive source rate control for real-time wireless video transmission
Power-aware routing in mobile ad hoc networks
PAMASMYAMPERSANDmdash;power aware multi-access protocol with signalling for ad hoc networks
Design and analysis of low-power access protocols for wireless and mobile ATM networks
Performance comparison of battery power consumption in wireless multiple access protocols
Power aware page allocation
Energy efficiency of TCP in a local wireless environment
Low Power Digital CMOS Design
Mobile ad hoc networking and the IETF
Adaptive Scheduling at Mobiles for Wireless Networks with Multiple Priority Traffic and Multiple Transmission Channels
PowerScope
Energy/Throughput Tradeoffs of TCP Error Control Strategies
TCP-probing
Architecture and algorithms for quality of service support and energy-efficient protocols for wireless/mobile networks
--CTR
T. Simunic, Power Saving Techniques for Wireless LANs, Proceedings of the conference on Design, Automation and Test in Europe, p.96-97, March 07-11, 2005
Subalakshmi Venugopal , Wesley Chen , T. D. Todd , Krishna Sivalingam, A rendezvous reservation protocol for energy constrained wireless infrastructure networks, Wireless Networks, v.13 n.1, p.93-105, January 2007
Z. Sun , X. Jia, Energy Efficient Hybrid ARQ Scheme under Error Constraints, Wireless Personal Communications: An International Journal, v.25 n.4, p.307-320, July
Luca Negri , Mariagiovanna Sami , David Macii , Alessandra Terranegra, FSM--based power modeling of wireless protocols: the case of bluetooth, Proceedings of the 2004 international symposium on Low power electronics and design, August 09-11, 2004, Newport Beach, California, USA
Mauro Sanctis , Simone Quaglieri , Ernestina Cianca , Marina Ruggieri, Energy Efficiency of Error Control for High Data Rate WPAN, Wireless Personal Communications: An International Journal, v.34 n.1-2, p.189-209, July 2005
Jari Korhonen , Ye Wang, Power-efficient streaming for mobile terminals, Proceedings of the international workshop on Network and operating systems support for digital audio and video, June 13-14, 2005, Stevenson, Washington, USA
Mohammed I. Alghamdi , Tao Xie , Xiao Qin, PARM: a power-aware message scheduling algorithm for real-time wireless networks, Proceedings of the 1st ACM workshop on Wireless multimedia networking and performance modeling, October 13-13, 2005, Montreal, Quebec, Canada
Rui Zhang , Hang Zhao , Miguel A. Labrador, The Anchor Location Service (ALS) protocol for large-scale wireless sensor networks, Proceedings of the first international conference on Integrated internet ad hoc and sensor networks, May 30-31, 2006, Nice, France
Pierpaolo Bergamo , Alessandra Giovanardi , Andrea Travasoni , Daniela Maniezzo , Gianluca Mazzini , Michele Zorzi, Distributed power control for energy efficient routing in ad hoc networks, Wireless Networks, v.10 n.1, p.29-42, January 2004
Stephanie Lindsey , Cauligi S. Raghavendra, Energy efficient all-to-all broadcasting for situation awareness in wireless ad hoc networks, Journal of Parallel and Distributed Computing, v.63 n.1, p.15-21, January
Ning Li , Jennifer C. Hou, Localized topology control algorithms for heterogeneous wireless networks, IEEE/ACM Transactions on Networking (TON), v.13 n.6, p.1313-1324, December 2005
Davide Bertozzi , Anand Raghunathan , Luca Benini , Srivaths Ravi, Transport Protocol Optimization for Energy Efficient Wireless Embedded Systems, Proceedings of the conference on Design, Automation and Test in Europe, p.10706, March 03-07,
Ghassen Ben Brahim , Bilal Khan, Budgeting power: packet duplication and bit error rate reduction in wireless ad-hoc networks, Proceeding of the 2006 international conference on Communications and mobile computing, July 03-06, 2006, Vancouver, British Columbia, Canada
Q. Gao , K. J. Blow , D. J. Holding , Ian Marshall, Analysis of energy conservation in sensor networks, Wireless Networks, v.11 n.6, p.787-794, November 2005
Hongyan Lei , Arne A. Nilsson, An M/G/1 queue with bulk service model for power management in wireless LANs, Proceedings of the 2nd ACM international workshop on Performance evaluation of wireless ad hoc, sensor, and ubiquitous networks, October 10-13, 2005, Montreal, Quebec, Canada
Yuvraj Agarwal , Curt Schurgers , Rajesh Gupta, Dynamic power management using on demand paging for networked embedded systems, Proceedings of the 2005 conference on Asia South Pacific design automation, January 18-21, 2005, Shanghai, China
Horst F. Wedde , Muddassar Farooq , Thorsten Pannenbaecker , Bjoern Vogel , Christian Mueller , Johannes Meth , Rene Jeruschkat, BeeAdHoc: an energy efficient routing algorithm for mobile ad hoc networks inspired by bee behavior, Proceedings of the 2005 conference on Genetic and evolutionary computation, June 25-29, 2005, Washington DC, USA
Mauro Caporuscio , Damien Charlet , Valerie Issarny , Alfredo Navarra, Energetic performance of service-oriented multi-radio networks: issues and perspectives, Proceedings of the 6th international workshop on Software and performance, February 05-08, 2007, Buenes Aires, Argentina
Guoqiang Wang , Yongchang Ji , Dan C. Marinescu , Damla Turgut, A routing protocol for power constrained networks with asymmetric links, Proceedings of the 1st ACM international workshop on Performance evaluation of wireless ad hoc, sensor, and ubiquitous networks, October 04-04, 2004, Venezia, Italy
MoonBae Song , Sang-Won Kang , KwangJin Park, On the design of energy-efficient location tracking mechanism in location-aware computing, Mobile Information Systems, v.1 n.2, p.109-127, April 2005
Piyush Naik , Krishna M. Sivalingam, A survey of MAC protocols for sensor networks, Wireless sensor networks, Kluwer Academic Publishers, Norwell, MA, 2004
Weifa Liang , Yuzhen Liu, On-line disjoint path routing for network capacity maximization in energy-constrained ad hoc networks, Ad Hoc Networks, v.5 n.2, p.272-285, March, 2007
Kyriakos Mouratidis , Dimitris Papadias , Spiridon Bakiras , Yufei Tao, A Threshold-Based Algorithm for Continuous Monitoring of k Nearest Neighbors, IEEE Transactions on Knowledge and Data Engineering, v.17 n.11, p.1451-1464, November 2005
Marcel Busse , Thomas Haenselmann , Wolfgang Effelsberg, TECA: a topology and energy control algorithm for wireless sensor networks, Proceedings of the 9th ACM international symposium on Modeling analysis and simulation of wireless and mobile systems, October 02-06, 2006, Terromolinos, Spain
Tao Wu , Subir Biswas, A Self-Reorganizing Slot Allocation protocol for multi-cluster sensor networks, Proceedings of the 4th international symposium on Information processing in sensor networks, April 24-27, 2005, Los Angeles, California
Rajgopal Kannan , Sudipta Sarangi , S. Sitharama Iyengar, Sensor-centric energy-constrained reliable query routing for wireless sensor networks, Journal of Parallel and Distributed Computing, v.64 n.7, p.839-852, July 2004
Tao Wu , Subir Biswas, Minimizing inter-cluster interference by self-reorganizing MAC allocation in sensor networks, Wireless Networks, v.13 n.5, p.691-703, October 2007
Jung-hi Min , Hojung Cha , Jongho Nang, Energy management for interactive applications in mobile handheld systems, Proceedings of the 2007 ACM symposium on Applied computing, March 11-15, 2007, Seoul, Korea
Jianping Pan , Lin Cai , Y. Thomas Hou , Yi Shi , Sherman X. Shen, Optimal Base-Station Locations in Two-Tiered Wireless Sensor Networks, IEEE Transactions on Mobile Computing, v.4 n.5, p.458-473, September 2005 | mobile computing;network protocols;low-power design;energy efficient design;wireless networks;power aware protocols |
500504 | Frozen development in graph coloring. | We define the 'frozen development' of coloring random graphs. We identify two nodes in a graph as frozen if they are of the same color in all legal colorings and define the collapsed graph as the one in which all frozen pairs are merged. This is analogous to studies of the development of a backbone or spine in SAT (the Satisfiability problem). We first describe in detail the algorithmic techniques used to study frozen development. We present strong empirical evidence that freezing in 3-coloring is sudden. A single edge typically causes the size of the graph to collapse by 28%. We also use the frozen development to calculate unbiased estimates of probability of colorability in random graphs. This applies even where this probability is infinitesimal such as 10^-300, although our estimates might be subject to very high variance. We investigate the links between frozen development and the solution cost of graph coloring. In SAT, a discontinuity in the order parameter has been correlated with the hardness of SAT instances, and our data for coloring are suggestive of an asymptotic discontinuity. The uncolorability threshold is known to give rise to hard test instances for graph-coloring. We present empirical evidence that the cost of coloring threshold graphs grows exponentially, when using either a specialist coloring program, or encoding into SAT, or even when using the best of both techniques. Hard instances seem to appear over an increasing range of graph connectivity as graph size increases. We give theoretical and empirical evidence to show that the size of the smallest uncolorable subgraphs of threshold graphs becomes large as the number of nodes in graphs increases. Finally, we discuss some of the issues involved in applying our work to the statistical mechanics analysis of coloring. | Introduction
A phase transition has been identified for many NP-complete problems and is frequently correlated with a high frequency of hard instances. This contrasts with randomly chosen instances from other regions of the problem space, where most often such instances are easy. Recently the techniques of statistical mechanics have been applied to the analysis of this transition, and have yielded insights into the nature of the region and its relation to hardness. (See [9] for an overview and many references.)
One of the more recent efforts has been the identification of the nature of the order parameter at the transition. In some problems (k-SAT, k > 2) there is evidence that this parameter is discontinuous [16,18,19]. These problems typically have a high frequency of hard instances. On the other hand, problems with a continuous parameter tend not to have a high frequency of hard instances. This has been explored most thoroughly in the case of 2-SAT [17,18]. For the Hamiltonian Cycle problem, the order parameter may also be continuous [28], which corresponds to a very low frequency of hard instances at the transition [26].
For SAT one measure of the order parameter is the backbone [18], which is the number of variables that are frozen to a particular value under all satisfying assignments. A closely related notion is the spine [2]. For k-SAT, k > 2, the evidence indicates that this measure jumps from zero to a fixed fraction of the n variables at the transition. Parkes showed that at the satisfiability threshold many variables are frozen although some are almost free [21]. Similarly, for Hamiltonian Cycle we can use as variables the number of edges that must appear in all Hamiltonian Cycles of the graph. In contrast to the k-SAT case, only a few edges are frozen in satisfiable instances at the transition.
For k-coloring (we consider k = 3 almost exclusively) it is more difficult to define and measure the order parameter, due in large part to the symmetry of the solutions. However, it is clear that if we add an edge to a colorable graph and it then becomes uncolorable, then in the graph without the edge the pair of vertices must have been colored with the same color in every coloring of the graph. Such a pair we refer to as frozen, for reasons that will become apparent in the next section. We adopt the number of such pairs as our spine, an approximation to the order parameter for coloring, as it mimics closely the backbone and spine used in other problems. Our spine shows strong evidence of a discontinuous jump at the transition. We discuss some of the possible consequences for the statistical mechanics analysis of coloring, in particular on the likelihood of discontinuous behavior in possible order parameters.
Instances at or near thresholds are used for benchmarking algorithms in many domains, for example satisfiability [15] and constraint satisfaction problems [22,24]. However, there is no simple correlation between hard instances and phase transitions. For example, there is a classic phase transition in the solvability of random Hamiltonian cycle instances [11] but hard instances do not seem to be found there [25]. We provide extensive evidence that 3-coloring problems are hard at the phase transition and beyond. We attempt to understand why phase transitions so often yield hard instances. We believe, and the evidence supports, that the reason for the exponential growth is the disappearance of small k+1-critical subgraphs at the threshold. There is no known way of efficiently noticing and verifying the existence of large critical subgraphs of non-k-colorable graphs.
1.1 Overview of the paper
In Section 2.1 we present the model, which we call the full frozen development, used to study the properties of the phase transition. Naive methods for studying frozen development would be prohibitively costly, but in Section 2.2 we present a detailed overview of the techniques we use to calculate frozen development. In theory these allow the computation to be performed using O(n^2 log(n)) calls to a graph coloring program, and in practice they allow us to do the complete computation for n up to 225.
In Section 3 we present the empirical results of the study on our frozen development model, showing strong evidence that indeed our backbone measures show a discontinuous jump at the threshold. In particular we show evidence that a fixed fraction of pairs become frozen under the addition of O(1) edges. We show in Section 3.2 that this jump can be seen as a sudden collapse in the size of the graph. If we consider the graph induced by the equivalence relation defined by frozen pairs, the addition of a single edge typically reduces the size of the graph by 28%. As a final application of frozen development, we report in Section 3.3 on new estimates of the probability of colorability into regions where this probability can be as low as 10^-300.
In Section 4 we present empirical evidence, using two distinct complete algorithms, to show that indeed there is exponential growth in difficulty near the threshold as n increases. This evidence is a mixture of results measuring difficulty exactly at the threshold graph from the frozen development process, and of further graphs generated from the G_np model for larger values of n.
In Section 5 we present theoretical and empirical evidence that the smallest 4-critical subgraphs of a non-3-colorable graph are large near the threshold asymptotically. We correlate this with the hardness of instances empirically observed. We wonder whether a discontinuity in a backbone is suggestive of such large critical structures in general.
In the conclusions (Section 6) we discuss the possibility that the discontinuity exhibited by our measure may not correspond to a discontinuous order parameter as measured by a minimization of violated edges over all partially correct 3-colorings. If this should prove to be the case, it could have implications for refinements needed in the analysis as applied to this problem.
2 Frozen Development and Algorithmics
2.1 Full Frozen Development
For purposes of this study, we consider a model based on the set of (n choose 2) unordered pairs of distinct vertices. We select uniformly at random one of the ((n choose 2))! permutations of the set of pairs, calling this the input sequence σ. We build a graph of m edges by choosing the first m pairs from the input sequence. For a given sequence σ we can determine the smallest value t(σ) (or t when σ is understood) such that the graph on the first t(σ) edges is not k-colorable: we call t(σ) the threshold index of the sequence σ. We define the average threshold T(n) as the average over the set of all input sequences of the minimum values t.
Using this model gives several advantages in studying the phase transition, including an expected reduction in variance in computing T(n) for a given sample size. One of the conjectures on the coloring phase transition says that for each k, T(n)/n converges to a constant c_k as n → ∞.
In practice we do not want to stop measurement at the threshold index; after all, if some other edge had occurred at this point in the sequence we might be able to keep on going. In the full frozen development method, when the m-th pair is added as an edge that renders the graph non-k-colorable, the edge is deleted and we move on to the next edge. This gives us a way to smoothly extend frozen measures beyond the threshold until all pairs have been considered. At that point, the graph is a maximal k-colorable graph.
To make this more precise we first present a few definitions.
Given a k-colorable graph G, we say that a pair {u, v} is frozen iff for every coloring c of G, c[u] = c[v]. The name frozen means that this edge cannot be in any k-colorable graph containing G as a subgraph.
The spine (or backbone) 2 of a colorable graph G, written B(G), is the set of pairs frozen in G.
We define the sequence of graphs G_0, G_1, ..., G_{(n choose 2)} inductively: G_0 is the empty graph, and G_m is G_{m-1} with the m-th pair of the sequence added as an edge, unless that pair lies in the spine of G_{m-1}, in which case G_m = G_{m-1}. We follow Bollobas et al. [2] by defining the scapegoats of this sequence as those pairs which are not included because they occur in the spine of an earlier graph. The name arises because each scapegoat could be held responsible for uncolorability if it were added to the graph. That is, the set of scapegoats for a sequence σ is the set of indices i such that the i-th pair of σ lies in B(G_{i-1}). Note that the first scapegoat encountered is the threshold for the sequence. For m < t the graph G_m is the graph formed by the first m pairs of the sequence. Notice that G_{(n choose 2)} is a k-partite complete graph.
For each G_m we record all those pairs which are frozen by index m. That is, each remaining pair is tested and recorded as frozen, with crystallization index m, if it is frozen by G_m and not by G_{m-1}. Notice that the threshold index t is exactly the smallest index m such that the m-th pair is frozen with some crystallization index less than m. Also, we see that B(G_m) is the set of pairs with crystallization index at most m.
We can also compute the smallest value m_f of m such that there is some pair (u, v) of vertices (necessarily with index in the sequence greater than m_f) frozen the same. There may be more than one pair which are forced to be the same color by the addition of the m_f-th edge. We call m_f the first edge causing a frozen pair, or the first frozen for short. We define the average first frozen analogously to T(n). Clearly, for every sequence σ, m_f(σ) ≤ t(σ). Thus, the limit of the average first frozen divided by n (assuming it exists) must be bounded above by c_k. Elsewhere we have presented evidence to strongly suggest that as n → ∞ the first frozen and the threshold converge to the same limiting density.
2 We choose the term spine because in the full frozen development it most closely approximates the spine defined in [2]. While not identical, it shares the important property of monotonicity with the earlier definition.
Analogously to frozen pairs, we can measure for each m the number of pairs {u, v} such that for all colorings c of G_m, c[u] ≠ c[v]. Such a pair is said to be free. We observe that free is not quite the complement of frozen. When a pair is frozen it cannot appear in a k-colorable graph which is a supergraph of the current one. When a pair is free its addition makes no difference to the set of legal colorings of any supergraph. Thus, in calculating frozen development we can skip adding these edges, because they simply do not matter to the set of colorings. 3 As for frozen pairs, the free pairs are also part of the crystallization process. We associate with each free pair the crystallization index at which it first became free.
Our program for the full frozen development process on a sequence σ produces as output the type of each pair in the sequence. This type is either frozen, free, or effective. Effective means that for all i < m, the m-th pair is neither frozen nor free in G_i. Thus the edge is effective in the sense that it is added to the sequence of graphs without causing uncolorability, and it does reduce the number of colorings of G_m to less than those of G_{m-1}. 4
For each non-effective pair, we record the index at which it first becomes frozen or free, with the index of effective pairs being set to themselves. With this information, it is easy to compute the growth of the spine, and of the free set, as well as other statistics such as the location of the first frozen and the threshold.
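To fix ideas, the following is a minimal Python sketch of the bookkeeping described in this section, using the add-edge and merge tests detailed in Section 2.2 below but none of the optimizations given there. The function names and the brute-force colorability test are ours, intended only for very small n; the actual experiments used an optimized coloring program.

    from itertools import combinations, product
    import random

    def is_k_colorable(edges, n, k):
        # Brute-force test over all k^n assignments; illustration only.
        return any(all(c[u] != c[v] for u, v in edges)
                   for c in product(range(k), repeat=n))

    def merge(edges, u, v):
        # Merge vertex v into u: every edge {v, z} becomes {u, z}.
        out = set()
        for a, b in edges:
            a, b = (u if a == v else a), (u if b == v else b)
            if a != b:
                out.add((min(a, b), max(a, b)))
        return out

    def full_frozen_development(n, k=3, seed=0):
        rng = random.Random(seed)
        pairs = list(combinations(range(n), 2))
        rng.shuffle(pairs)                    # the input sequence sigma
        G, status = set(), {}                 # effective edges; pair -> (type, crystallization index)
        for m, pair in enumerate(pairs, start=1):
            if pair in status:                # already frozen (a scapegoat) or free: skip it
                continue
            G.add(pair)                       # otherwise the pair is effective
            status[pair] = ('effective', m)
            for (u, v) in pairs[m:]:          # naive forward scan over remaining pairs
                if (u, v) in status:
                    continue
                if not is_k_colorable(G | {(u, v)}, n, k):
                    status[(u, v)] = ('frozen', m)
                elif not is_k_colorable(merge(G, u, v), n, k):
                    status[(u, v)] = ('free', m)
        return pairs, status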
2.2 Efficient Computation of Full Frozen Development
We turn our attention to the outline of the method we use to compute the full frozen development. The reader not interested in these details but only in the empirical results can safely skip to the next section.
Our approach can be broadly classed as a forward scanning method since, for each G_m we create, we scan forward over the remaining pairs to determine whether each is free, frozen or as yet not crystallized. We outline several optimizations that make this task amenable up to n = 225. As a result, empirical indications are that this runs in about O(n^2 log n) calls to our underlying coloring algorithm. At the end of this section we show that the full frozen development can indeed be performed in at most O(n^2 log n) calls.
3 When we test the hardness of coloring, all free edges must be added. The free edges could make a big difference to a coloring algorithm, for example by preventing errors high in the search tree.
4 Note that our definitions ensure that effective edges are not free, even though (trivially) c[u] ≠ c[v] in G_m if {u, v} is an edge of G_m.
To determine whether a pair {u, v} is frozen at index m, we test if the graph G_m + {u, v} is k-colorable. If not, the pair is frozen. To test whether the pair is free, we merge u to v. A merge requires v to be deleted and then, for every edge {v, z} that was in G_m, we add an edge {u, z} to G_m - (uv) if it is not already present. If G_m - (uv) is not k-colorable then the pair {u, v} is free.
These two tests are helpful because they replace an examination of all colorings with single calls to a graph coloring program. This coloring program can be one independently developed and highly optimized, and is freed from the space demand of storing all colorings.
The first step in our process is finding the threshold of the sequence. For some purposes, for example when n is much greater than 225, this is the only step we perform. To find the threshold index t, we do a binary search to find the smallest i such that the graph whose edge set is the first i pairs of the sequence is not k-colorable, and set t = i. This requires O(log n) calls to the coloring program.
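A sketch of this binary search follows; the inlined brute-force colorability test stands in for the optimized coloring program and is usable only on very small instances.

    from itertools import product

    def threshold_index(pairs, n, k=3):
        # Smallest i such that the graph on the first i pairs of the sequence
        # is not k-colorable (assumes the full set of pairs is not k-colorable).
        def colorable(i):
            edges = pairs[:i]
            return any(all(c[u] != c[v] for u, v in edges)
                       for c in product(range(k), repeat=n))
        lo, hi = 1, len(pairs)
        while lo < hi:
            mid = (lo + hi) // 2
            if colorable(mid):
                lo = mid + 1      # still colorable: threshold is later
            else:
                hi = mid          # uncolorable: threshold is at mid or earlier
        return lo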
To compute the full frozen development using a naive implementation, we would call a coloring program for each graph G_m and for each pair {u, v} later in the sequence σ. This would result in O(n^4) calls to the coloring program. Instead, we have taken advantage of the monotonicity of graph colorings to greatly reduce this number. The first family of properties we use to reduce calls is straightforward:
- {u, v} free (respectively frozen) at index m implies {u, v} not frozen (not free) at index m.
- If the m-th pair is already frozen at an earlier index, then adding it would make the graph uncolorable, so we can skip it.
- If the m-th pair is already free at an earlier index, then the set of edges free (frozen) at index m is the same as the set of edges free (frozen) at index m - 1. Again we can skip the pair.
We can make further improvements by making use of the coloring found by our coloring program when the graph given to it is colorable. In particular, we can greatly reduce the number of calls to the coloring program while calculating the frozen pairs at a particular index m. We do this by making use of every coloring found, whether when adding edges or merging nodes. We have the following facts, for any coloring c of G_m:
- c[u] = c[v] implies {u, v} not free at index m.
- c[u] ≠ c[v] implies {u, v} not frozen at index m.
- Any coloring c' for G_m + {u, v} is also a coloring of G_m.
- Any coloring c' for G_m - (uv) can be extended to a coloring for G_m by setting c'[v] = c'[u].
These facts are more useful than they might appear. When we need to test G_m, i.e. we are not skipping the m-th pair {u, v}, we test the first pair whose status is not known from smaller indices. If G_m + {u, v} is uncolorable, we have determined that the pair is frozen. If however it is colorable, we can use the coloring c_1 and the first two facts above to rule out one of the possibilities for each future pair. If the merge G_m - (uv) gives another colorable graph (in which case the pair is known to be effective) we use the new coloring c_2. For any future pair {z, w} such that c_1[z] ≠ c_1[w] but c_2[z] = c_2[w], or vice versa, {z, w} is effective, since it can be neither free nor frozen at index m, meaning that we need not use our coloring program to test either G_m + {z, w} or G_m - (zw). Typically we will find many such pairs, greatly reducing the number of calls.
The preceding paragraph gives the information we can glean from the first pair tested at index m. The minimum information we have about each future pair is that only one possibility of free or frozen remains. This cuts the number of calls by at least half. As we test each unresolved pair {z, w} for frozenness, the result will give us significant amounts of information. If the pair is frozen, the pair is frozen for all indices i > m. If the pair is not frozen, we obtain a new coloring. If we are testing {z, w} for being frozen, all previous colorings must have c[z] = c[w], but the new coloring must have c[z] ≠ c[w]; the case for free is similar. In either case, the new coloring must be different from all previous colorings found for G_m. Barring rare cases, this will determine some future pairs as being neither free nor frozen.
These optimizations give a dramatic reduction in the number of coloring calls necessary. In fact, almost every coloring attempt gives us significant amounts of information: either that a certain pair is free or frozen for many indices, or that many pairs are neither free nor frozen at a certain index. We have still further advantages from the well known easy-hard-easy pattern of coloring, with hard instances concentrated at the threshold. While we do have to perform many coloring calls near the threshold, a large number of calls are either significantly before or after the threshold: these calls are usually cheap.
Finding the threshold as the first step also gives another advantage. Typically the threshold occurs just after a large jump in the number of frozen pairs, as we will see in the next section. A coloring defines an equivalence relation, and so does frozen. If {u, v} and {v, w} are frozen, then taking the transitive closure shows that {u, w} is also a frozen pair, and this without a call to the coloring program. On the first forward scan from the threshold we typically find many frozen pairs, and so many pairs never need be checked. The sharper the rise in the frozen pairs, the more effective this is.
The net result is that we can test full frozen development in 3-coloring up to graphs with 225 nodes.
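The transitive-closure bookkeeping can be kept in a union-find structure, as in the sketch below. This data structure is our implementation choice rather than a detail taken from the paper, and the same structure also gives the size of the collapsed graph used in Section 3.2.

    class UnionFind:
        # Equivalence classes of the frozen relation.
        def __init__(self, n):
            self.parent = list(range(n))
        def find(self, x):
            while self.parent[x] != x:
                self.parent[x] = self.parent[self.parent[x]]   # path halving
                x = self.parent[x]
            return x
        def union(self, x, y):
            self.parent[self.find(y)] = self.find(x)

    def already_frozen(uf, u, v):
        # True iff {u, v} is implied frozen by transitivity, so no coloring
        # call is needed to classify it.
        return uf.find(u) == uf.find(v)

    def collapsed_size(uf, n):
        # Number of vertices of the collapsed graph G/F: one per class.
        return len({uf.find(x) for x in range(n)})

Each newly discovered frozen pair {u, v} is recorded with uf.union(u, v); every pair whose classes thereby coincide is frozen without any further coloring calls.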
Despite major advantages, our approach may not be optimal. Instead of doing the forward scan, we can use a backward scanning approach. After using a binary search to find t(σ), we know that all pairs before the threshold are effective or free, and so require no further testing for frozen. Inductively, after we have determined the values for the sequence up to m - 1, we test G_{m-1} plus the m-th pair. If this is not colorable, then the pair is frozen. We may then do a binary search on the set of graphs G_i, i < m - 1, adding the pair to each in turn, to determine its crystallization index. On the other hand, if the pair is not frozen, we can merge its two vertices to see if it is free. If so, we can do a binary search on the graphs G_i with the merge applied to each in turn, to determine the index at which it became free. Otherwise the edge is effective, and requires no further action. This method will do at most O(n^2 log n) calls to the coloring algorithm. As with the forward scanning method, many of the potential calls can be eliminated by keeping track of certain colorings, and taking transitive closures. We leave practical investigation of that technique for future work. The forward scanning method is more effective for computing certain partial frozen development information.
3 Empirical Evidence for an Asymptotic Discontinuity
We now report empirical results of full frozen development on 3-coloring, using the algorithmic techniques developed above. We will show dramatic evidence strongly suggestive of an asymptotic discontinuity in behavior. Further research is necessary to confirm whether this discontinuity will be apparent in the order parameter for graph coloring.
3.1 Sudden Freezing in 3-coloring
Our first empirical observation is that the freezing process is not gradual. Instead, we see that a large number of pairs freeze at the same index. That is, the addition of one edge can cause many pairs to become frozen simultaneously. Typically, just before the threshold, there is a sequence of from 1 to 5 edges which cause on average 16% or more of the pairs to be frozen. This behavior is typical of the big jump we might expect if there is an asymptotic discontinuity in behavior.
In figure 1 we show how the number of frozen and free pairs grows as the edge density increases. We ran the full frozen development on 100 random permutations at each value of n, in steps of 25. This is clearly typical of a phase transition, and the sharpness suggests that there will be a discontinuity in the limit n → ∞. The range over which these curves are plotted is the entire range over which any difference in the number of frozen or free pairs occurred. For the larger values of n it is of course a tiny fraction of the set of pairs.
Fig. 1. The ratio of frozen pairs (top) and free pairs (bottom) to (n choose 2), plotted against the ratio m/n, for n from 75 to 225.
In table 1 we present data on the size of the spine at the threshold, and as one might expect the threshold typically occurs after the big jump, although occasionally it occurs by sheer accident after only one or two pairs are frozen. The spine is monotonic, like the spine of Borgs et al. [2]. That is, once a pair is in the spine it remains there as new edges are added. This is different from the backbone of Monasson et al. [19], defined with respect to optimizing assignments. The closest analogy in coloring would be a measure with respect to all colorings minimizing the number of violated edges. A frozen pair in a
Table 1
The number of frozen pairs at the threshold. Notice that the number is on average about 14% of the total pairs, which can be compared to figure 1.
   n      Mean      s.d.     Mean/(n choose 2)   Min    Max
  100    703.41    335.94         0.14            26    1339
  200   3010.74   1283.94         0.15            19    5255
colorable graph might become unfrozen after the threshold, where no coloring satisfies all edges, if some optimal colorings violate the edge and others don't. We prefer the monotonic measure we use, not least because it makes possible the O(n^2 log(n)) procedure reported in Section 2.2, but it is possible that less dramatic behavior would be observed with a backbone-like measure. We discuss this issue further in the conclusions.
3.2 Collapse in Graph Structure
We have already noted that frozen defines an equivalence relation on G. We write F for this relation, so that u F v iff the pair {u, v} is frozen in G. This gives us the natural concept of the graph induced by the equivalence relation, which we denote G/F and call the collapsed graph. If u is a vertex of G, its image in G/F is the equivalence class of u. By definition, for every pair of vertices in the collapsed graph not joined by an edge there is a k-coloring of the graph that makes the vertices different colors. That is, there are no frozen pairs in the collapsed graph. There may, however, be free pairs of vertices.
The collapsed graph gives us an alternative view of freezing. It addresses a potential objection, that the big jump in freezing may occur only due to transitivity. That is, a large number of frozen pairs may occur only because two equivalence classes merge. We will show that the collapsed graph shows dramatic behavior, as did the raw number of frozen pairs. This cannot be solely because of large equivalence classes merging, since that would only entail a reduction of one in the size of the collapsed graph.
Fig. 2. Collapse in fifteen 200-vertex instances. The y-axis represents the number of vertices remaining in the collapsed graph; the x-axis is the sequence index, that is, the number of edges that would have been added if none were skipped.
We can analyze the nature of the collapsed graphs using the same frozen developments reported in figure 1. We report the sequence of the number of vertices in the collapsed graph for 3-coloring examples. That is, we report the number of vertices in G_m/F for each m. Notice that we do not add edges to the collapsed graph: we calculate the frozen development as before and then the size of the collapsed graph. In figure 2 we show the collapse in a sample of fifteen 200-vertex graphs, graphs 1-15 of our sample of 100. In each case there is indeed a sudden collapse near the threshold. Each of these instances drops rapidly somewhere in the range 430 to 470, which when divided by 200 gives a ratio of 2.15 to 2.35, as we might expect. The threshold graph typically occurs very shortly after the big drop, given that there are so many frozen pairs lying around after the big drop (the threshold pair must be a frozen pair). The marked points on the curves represent the effective edges, that is, those edges that were neither frozen nor free in the sequence when encountered and so were actually added to the graph. The number of effective edges actually required to cause the catastrophic drop is even smaller than the range of sequence indices indicates.
If we compute the average number of vertices in the collapsed graph over
the sample set, we no longer see a sudden drop. In this respect, the average
is not a good indicator of what is happening in the individual instances. To
show this, in figure 3 we plot the average number of vertices remaining in the collapsed graph for each index, together with another set of twenty 200-vertex graphs. The average is taken over all 100 samples, and the individual graphs are 41-60 from our sample. Almost all instances exhibit a very narrow range over which they drop from a fairly well defined region at the top to another at the bottom. And all have at least one large drop caused by a single edge. The average really only reflects the percentage of instances that have dropped so far, not how fast they drop.
Fig. 3. A set of twenty sample collapses for n = 200, with the average of the entire 100 samples overlaid. Notice that the average does not closely approximate the shape of any of the curves, even the most atypical case at the far left.
We can use alternative measures to see how fast the size of collapsed graphs drops on individual permutations. To do so we find, for each permutation's full frozen development, the one edge that caused the maximum drop in the number of vertices on that sequence. Call this the maxdrop of the experiment. We can now study each permutation's behavior relative to its own maxdrop. We show this in figure 4. As before the y-axis is the average size of the collapsed graph, but the x-axis is now the number of edges in the sequence before or after the maxdrop, with the maxdrop edge index represented by zero. We see from figure 4 that the mean maxdrop is from 72% to 44%, a drop of 28% of the vertices on a single edge. This is represented by a near-vertical jump in the average curve.
The behavior of the maxdrop is remarkably consistent as we change n. In figure 5 we plot, in the form of error bars for each n, the mean, minimum, and maximum maxdrop of the set of samples. The average of 28% is very consistent. Even the minimum is consistent, with the smallest fraction at 10%. That is, in every one of the 800 trials, there was a single edge that caused a fractional drop of at least 10% in the size of the collapsed graph. We also show, in figure 7, the average location of the maxdrop edge. As expected, the average index of the maxdrop edge appears to be converging towards 2.3.
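The maxdrop itself is a one-pass computation over the per-index sizes of the collapsed graph. The following sketch, with its assumed list format, is purely illustrative.

    def maxdrop(collapsed_sizes):
        # collapsed_sizes[m] = number of vertices of G_m/F after sequence index m.
        # Returns the index of the single edge causing the largest one-edge drop
        # and the size of that drop.
        best_m, best_drop = 0, 0
        for m in range(1, len(collapsed_sizes)):
            drop = collapsed_sizes[m - 1] - collapsed_sizes[m]
            if drop > best_drop:
                best_m, best_drop = m, drop
        return best_m, best_drop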
Another way of measuring the graph collapse is to compute intervals over which certain drops in the size of collapsed graphs occur. In figure 6 we plot the index (as a ratio to n) where the collapsed graph first has fewer than 80% of its original vertices. In contrast, in figure 7 we plot the actual (top) and relative number of edges required to cause the graph to drop from 0.9n down to 0.4n vertices. This 50% interval seems not only to be converging relative to n but even in absolute terms. In fact, there are specific instances where a single edge caused the drop from above 90% to below 40%, causing the measured number of edges to be zero.
Fig. 4. The first ten experiments from figure 2 and the average of all 100 experiments, but with the samples and average aligned at the index of the maxdrop edge.
Fig. 5. The average maximum drop on a single edge. The error bars represent the minimum maximum drop and the maximum maximum drop over 100 samples.
We find it intuitively surprising that a single edge would cause a fixed fraction of the vertices to collapse. However, our empirical evidence strongly suggests that this occurs in almost every instance. Accordingly we conjecture that it will be true in the limit. Further theoretical research is necessary to confirm or refute this conjecture.
Fig. 6. (Top) The average index of the maximum drop edge. The error bars indicate minimum and maximum index over the entire set. (Bottom) The index (as a ratio to n) where the collapse first has fewer than 80% of its original vertices. The top and bottom lines correspond to the full range over the 100 samples, while the center lines bound the 80% of the samples in the middle, a simple-minded attempt to remove the high-variance extremes. Note the convergence apparent here, and also note that in no instance did the first drop below 80% exceed 2.35n edges.
Our conjecture is highly suggestive of an asymptotic discontinuity in an order parameter, as previously seen in SAT. However, on this point we must be less definite, because it is not yet known what the order parameter for coloring is. Further, as we have seen, some measures of graph collapse (such as the mean size of graphs) do not seem to have a discontinuity. The empirical evidence of sudden collapse in graph size is certainly worthy of further investigation from a statistical mechanics viewpoint.
Fig. 7. (Top) The average over 100 samples of the number of edges in the gap between the first drop below 90% and the first drop below 40%. While noisy, the actual number of edges seems to be decreasing with n. For example, for 100-vertex graphs the addition of about 11 edges reduces the collapsed graph from 90 vertices to 40, while only 8 are necessary for n = 200 to reduce collapsed graphs from 180 to 80 vertices. (Bottom) The minimum, average and maximum of the number of vertices in the graph at the top of the maximum drop. The average here too seems fairly consistent with the 72% observed before, showing that the drop on one edge is almost equal to the total drop up to that point on average, i.e. 100 - 72 = 28. However, the range can extend anywhere from under 40% to 100%.
3.3 New Estimates of Probability of Colorability
As well as investigating the region where threshold graphs occur, we show that frozen development is also useful for gaining a more accurate picture of the phase transition in colorability. The main advantage is that we can get unbiased estimates of the probability of colorability both where the probability is very close to one and where it is very close to zero. Additionally we see a variance reduction compared to simple generate and test, but this reduction does not repay the computational expense of the method.
Recall that in frozen development we have a permutation σ of all (n choose 2) pairs of vertices. Also, recall that the i-th edge is a scapegoat exactly if that edge is not in G_i(σ). The first scapegoat occurs at the threshold, and thereafter we see more scapegoats. Any sequence σ can be reduced to a scapegoat-free subsequence R(σ), defined simply as the set of non-scapegoats in σ, in the same order in which they occur in σ. Thus, R(σ) defines the edges in the k-partite complete graph G_{(n choose 2)}(σ).
For example, in 3-coloring, the sequence of edges 1-2, 1-3, 1-4, 2-3, 2-4, 3-4, 4-5, 5-... reduces to the same sequence with 3-4 omitted, since 3-4 completes a 4-clique. Many different sequences reduce to the same scapegoat-free subsequence; in this example, other orderings of the same pairs reduce to the same subsequence.
We can use the frozen development to calculate exactly the probability of colorability over all sequences σ' with the same scapegoat-free subsequence as σ. Note that in this case R(σ') = R(σ). We use the observation that this probability is also an unbiased estimator of the probability of colorability. 5 We explain these steps in detail before presenting empirical results.
Given a sequence σ and its scapegoat-free subsequence R(σ), there is a family of sequences, which we also denote R(σ), defined by {σ' : R(σ') = R(σ)}. Given some index i, we can ask for a given σ' ∈ R(σ) whether or not the threshold index t(σ') is greater than i; that is, whether the first i edges in σ' form a colorable graph. They will do so if and only if the first scapegoat in σ' occurs at index i + 1 or more. Given i, it is natural to ask what is the probability of colorability over R(σ), i.e. the value of Pr{t(σ') > i}, assuming that each sequence σ' ∈ R(σ) is equally likely.
Our calculation of the entire frozen development of a sequence σ allows us to exactly calculate the probability P_{σ,i} = Pr{t(σ') > i : σ' ∈ R(σ)}. We do so by induction, using the following equations:
  P_{σ,i} = Pr{t(σ') > i | σ' ∈ R(σ)}
          = Pr{t(σ') > i | t(σ') > i - 1, σ' ∈ R(σ)} · Pr{t(σ') > i - 1 | σ' ∈ R(σ)}
          = Pr{t(σ') > i | t(σ') > i - 1} · P_{σ,i-1}
The last step follows because the condition t(σ') > i - 1 implies that none of the first i - 1 edges in σ' can be scapegoats. This also implies that the first i - 1 edges in σ' are also the first i - 1 elements in the scapegoat-free subsequence R(σ). The probability required in the final line can be calculated very simply if we have performed a full frozen development. The possible scapegoats are exactly the pairs frozen in the frozen development of σ up to and including the index in σ corresponding to the (i - 1)-th element of R(σ). Writing P^C_{σ,i} for the conditional probability Pr{t(σ') > i | t(σ') > i - 1}, we have
  P^C_{σ,i} = 1 - F(i) / ((n choose 2) - i + 1).
In words, the definition of F(i) is the number of frozen pairs in σ up to the location in σ where the i-th element of R(σ) appears. Note that the value of F(i) is the same for all sequences in R(σ).
5 We thank David Wilson for pointing this out to us.
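The following sketch turns the counts F(i) produced by a full frozen development into the probabilities P_{σ,i}. The exact indexing of F and of the denominator follows our reading of the derivation above and should be treated as an assumption; the computation is done in log space so that values far below 10^-300 remain representable.

    import math

    def colorability_estimates(F, n):
        # F[i - 1] is F(i): the number of frozen pairs just before the i-th
        # non-scapegoat pair is reached. Returns log10(P_{sigma,i}) for each i.
        N = n * (n - 1) // 2
        logs, total = [], 0.0
        for i, f in enumerate(F, start=1):
            p_cond = 1.0 - f / (N - i + 1)      # conditional probability
            total += math.log10(p_cond) if p_cond > 0.0 else float('-inf')
            logs.append(total)
        return logs

Averaging the corresponding probabilities over a sample of random permutations then gives the unbiased estimate of the global probability of colorability discussed below.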
We implemented a Perl script to calculate these numbers, given the frozen development over a permutation of pairs. To illustrate the nature of typical behavior, we show results in figure 8 of a single instance with n = 225. We first calculated the conditional probability P^C_{σ,i}. We see a big jump in the conditional probability. This corresponds to the sudden jump in the number of frozen pairs. The probability of a random pair of nodes being consistent with any coloring is 2/3. The conditional probability falls below this value because we are dealing with permutations of pairs, or in other words sampling without replacement from the set of edges. We were then able to calculate the unconditional probability of colorability P_{σ,i} for sequences with the same reduced subsequence as σ, and this is shown in figure 8. Immediately after the conditional probability moves significantly away from 1, the absolute probability inevitably collapses. After that it decays exponentially, as can be seen in the log plot. This plot shows the advantage of our method: these probabilities are the exact probability of sequences with the same scapegoat-free subsequence being colorable with this number of edges. No such estimate was previously possible when the probability declined below about 10^-9, since samples of a billion would have been necessary, and even this would give only estimates instead of exact probabilities.
For random instances, the probability P_{σ,i} is of little direct interest, although it provides us with a valuable tool for studying non-random problems in the future. For random instances, the subject of this paper, the probability we are interested in is the global probability P_i = Pr{a random graph with i edges is colorable}. Fortunately, the probability P_{σ,i} is an unbiased estimator of P_i. That is, E(P_{σ,i}) = P_i, where the expectation is over all sequences σ.
Fig. 8. (Left) The conditional probability P^C_{σ,i} for sequences with the same reduced subsequence as a particular σ, for n = 225, plotted against i/n. (Right) The overall probability of colorability P_{σ,i} for sequences with the same reduced subsequence as σ, plotted on a log scale against i/n.
To justify the claim of unbiased estimation, first note that
  P_i = Pr{the first i edges in a random permutation are colorable}.
This follows because over all permutations, each distinct graph G with i edges occurs exactly the same number of times, in particular exactly i!((n choose 2) - i)! times. We can easily prove that P_{σ,i} gives an unbiased estimator of this equivalent statement of P_i. The key to the proof is that the families R(σ) form a disjoint union of all sequences σ. To formalize the simple proof, we introduce the indicator function I_{σ,i}, which takes the value 1 if the first i edges in the sequence σ are colorable, and 0 if not. We also extend the notation P_{σ,i} to P_{R(σ),i} in the obvious way.
  P_i = Pr{the first i edges in a random permutation are colorable}
      = (1 / (n choose 2)!) Σ_σ I_{σ,i}
      = (1 / (n choose 2)!) Σ_σ P_{R(σ),i}
      = E(P_{σ,i})
The combined result of this development is that we can introduce a new methodology for empirical estimation of probability of colorability. Instead of mere 'generate and test', in which we generate a sample of graphs from G_{n,i} and test them for colorability, we instead generate random permutations and calculate the entire frozen development. From this, we can calculate for each sequence σ and index i the value P_{σ,i} and use this as an estimate of P_i. Since this is an unbiased estimate, we can repeat this procedure for a sample of permutations and use the average to estimate the probability of colorability.
We can now reuse our data on frozen development to present estimates of probability of colorability up to n = 225, with a sample size of 100 permutations in each case. The results are shown in figures 9 and 10.
Our experiments suggest that the technique is not very useful as a variance reduction technique where the probability of colorability is significantly away from both 1 and 0. Where P_i ≈ 0.5, the sample standard deviation of our estimates of P_i was about 0.4. We would expect a standard deviation of 0.5 using generate and test. Using a sample only about 50% bigger, we could obtain the same accuracy of estimation from generate and test, very much cheaper than by calculating the full frozen development.
The value of our experiments is in giving estimates of probability in the highly colorable and the highly uncolorable regions. This is particularly notable in the highly uncolorable region. We remind the reader that the estimates given are unbiased estimates of the true probabilities, even where values are as low as 10^-300. For the first time, we can use empirical data to picture the decay in probability of colorability. One caveat is necessary. We do have estimates of probability, but it is likely that these estimates may well have very high variance. For example, if we estimate P_i to be around 10^-300, the true value may be many orders of magnitude different, since a few atypical sequences can dominate the estimate. We currently have no way of judging how accurate these estimates are. David Wilson suggested that the estimates might give the right order of magnitude of the logarithm, as in the example just given [27].
Fig. 9. A broad view of the phase transition in 3-COL (e/n vs. mean +/- standard error) for varying n, plotted according to the method introduced here. Error bars show +/- one standard error, i.e. sample standard deviation divided by the square root of the sample size. In this case the sample was of 100 random permutations at each n. This figure shows little that would not be seen in a more conventional plot, except that the error bars are slightly narrower than would be expected.
4 Hardness of Coloring Problems at Thresholds
We have seen a sudden jump in graph behavior. If this does connect with an asymptotic discontinuity in an order parameter, by analogy with SAT we should see that 3-colorability shows hard behavior at its threshold. Accordingly, we investigate this question in this section. Our evidence is strongly in favor of the belief that graph 3-coloring does indeed become hard at its threshold. This is also consistent with previous results from the literature [3,10]. In this section we present new empirical results to further support this claim.
In figure 11 we plot the growth in the average number of search nodes used to determine the colorability of the colorable (i.e. graphs with t(σ) - 1 edges) and uncolorable graphs at the threshold (t(σ) edges). To obtain the threshold graphs for larger n we used our frozen development binary search to locate the threshold but did not compute the frozen sets. There are 100 instances at each n. Both the median and average costs clearly exhibit exponential growth.
Fig. 10. (Top) A close-up of the region where all n give very similar behavior (e/n vs. mean +/- standard error). All error bars seem to cross at about e/n = 2.4. (Bottom) A close-up of the highly insoluble region. Note the logarithmic scale, showing that we are obtaining estimates of colorability entirely beyond the reach of conventional techniques.
Notice that for small n the cost of the colorable graph is greater than the cost of the uncolorable graph with one more edge. This changes as n gets larger, with the uncolorable cost being more than twice as great at the largest values of n tested. The reason is that for the smaller values of n there is a high probability that the threshold graph contains a 4-clique, which is detected during initial pruning and so the search uses no backtrack nodes. As the analysis in Section 5 shows, these disappear with increasing n.
To study larger n, to get a picture of the distribution at various densities, and to ensure that the problem is hard for algorithms other than our Smallk program, a series of experiments were run on graphs from the class G_np. In the first experiments, random graphs were generated with the edge parameter γ = e/n ranging from 2.29 to 2.31, which is near the conjectured threshold. Recall that the expected number of edges is E(e) = p n(n - 1)/2. Each graph was then tested using the Smallk program and the Ntab_back program. Ntab_back is a SAT solver which implements the tableau method [5] with backjumping. It has been shown to be very efficient on short clauses, such as those generated by the conversion we describe next.
Fig. 11. The average (top) and median (bottom) number of search nodes used by Smallk, for the uncolorable threshold graphs and the colorable graphs with t - 1 edges.
There are many ways to represent a coloring instance as a SAT instance. In preliminary tests we tried several of these and settled on the one in figure 12 (referred to as Version 2.5) as being the one that performed best with Ntab_back.
Version 2.5, considered here and used in the experiments, begins to cross the line from a pure representation to one in which some solution information is carried in the representation. On considering the results without the symmetry breaking, it became evident that one of the difficulties faced by the SAT search engine, especially on unsolvable instances, was that it had no way of preventing searches that basically relabeled the color classes.
Fig. 12. Version 2.5, used to represent a k-coloring instance (G, k) as an instance (U, C) of SAT, where n = |V| and m = |E|.
Variables: for each vertex v in V and each color i, 1 ≤ i ≤ k, add the variable v_i to the set of variables U. The meaning is that if v_i is true then vertex v gets color i.
Color Vertex: for each v in V, add the clause <v_1, ..., v_k> to the set of clauses C. The meaning is that if the clause is true then at least one of the colors is assigned to the vertex.
Edge Check: for each edge {u, v} in E and each color i, add the clause <-u_i, -v_i> to C. The meaning is that the same color cannot be assigned to both end points of an edge.
Only One Color: for each v in V and each 1 ≤ i < j ≤ k, add <-v_i, -v_j> to C. This means at most one of the colors can be assigned to v.
Symmetry Breaking: for exactly one edge {u, v} in E, add the two unit clauses <u_1> and <v_2> to C.
Variables: nk. Clauses: 2 1-clauses; mk + nk(k - 1)/2 2-clauses; n k-clauses.
In Version 2.5 symmetry breaking information is presented to SAT. Basically, an edge is detected in G and its two vertices are forced to the distinct colors 1 and 2. This is a trivial addition to a conversion program since edges must be checked anyway. However, it does open a can of worms if our goal is to check the relative efficiency of the two approaches. If we are allowed to add arbitrarily to the converter, then with sufficient effort we could simply encode the solution, which the SAT program would merely have to output. Of course we could add the time for conversion to the time for the SAT routine, but this opens up other questions of efficiency of the conversion.
Similarly, the "Only One Color" clauses are not strictly necessary. But again, preliminary tests indicated these 2-clauses occasionally helped on the harder instances for small values of k, and seldom caused Ntab_back to require more time. A number of other conversions were also tested, but none were competitive with this one for k = 3.
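To make the conversion concrete, the following is a minimal sketch of the clause generation of figure 12. The variable numbering (vertex v with color i becomes literal v*k + i + 1), the representation of clauses as lists of integers, and the choice of the first listed edge for symmetry breaking are our own assumptions; the original converter is not reproduced here.

    def encode_coloring(n, edges, k=3):
        # Version 2.5-style encoding: returns (number of variables, clauses),
        # where each clause is a list of non-zero integers (DIMACS-style literals).
        def var(v, i):
            return v * k + i + 1          # variable "vertex v has color i"
        clauses = []
        for v in range(n):
            clauses.append([var(v, i) for i in range(k)])        # Color Vertex
            for i in range(k):
                for j in range(i + 1, k):
                    clauses.append([-var(v, i), -var(v, j)])     # Only One Color
        for u, v in edges:
            for i in range(k):
                clauses.append([-var(u, i), -var(v, i)])         # Edge Check
        if edges:                                                # Symmetry Breaking
            u, v = edges[0]
            clauses.append([var(u, 0)])                          # force u to color 1
            clauses.append([var(v, 1)])                          # force v to color 2
        return n * k, clauses

Writing a 'p cnf' header followed by each clause terminated by a 0 gives a standard DIMACS file that SAT solvers accept.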
Using this conversion on the instances of 3-coloring considered here, the two programs were usually within a factor of 8 or less in run time, with both "winning" equally often. Conversion times were not considered, being only a tiny fraction of the search time. Rarely, one or the other program would take considerably longer than the other, up to a factor of 50 or more. This typically occurred on instances that were solvable, where one program would get lucky and solve the instance very quickly. For k > 4, Smallk was almost invariably faster, frequently by orders of magnitude.
For this section the minimum time from the two programs was extracted. 6 This minimum time is used as the indicator for the growth rate of the solving cost. 7 In figure 13, we present evidence that the cost is growing exponentially, even at the 25% rank.
Fig. 13. The growth rate in the cost of coloring or proving uncolorability. There are 100 instances at each of n = 400, 450 and 500, but only 25 instances at n = 550. Three edge densities are reported, from 2.29 to 2.31. (Top) The 50% rank. (Bottom) The 25% rank.
6 In two cases Ntab_back incorrectly asserted that there was no solution. In these instances the time from Smallk was used, but in others we continue to use the Ntab_back time on the assumption that a debugged Ntab_back would probably give similar results to the version we used.
7 For an accurate estimate of resource usage, this time should be multiplied by 2.
There is a large difference between the typical cost of solvable and unsolvable instances. In figure 14 the instances are split into those that were colorable and those that were not. For n ≤ 500 a time limit of one hour was placed on each program. This time limit was exceeded by both programs in three instances: one instance at γ = 2.30, and two instances at γ = 2.31. These are included as unsatisfiable instances in figure 14. For n = 550 the time limit was removed, but there are only 25 instances at each data point. Both satisfiable and unsatisfiable instances appear to be increasing in difficulty at an exponential rate, with an almost constant ratio in median times between the two cases.
Fig. 14. Splitting the instances at the 50% rank into those that are satisfiable and those that were not indicates that both are growing exponentially, although the satisfiable instances are more than an order of magnitude easier. "Rest" indicates that in three cases both programs exceeded the one hour time limit, but were assumed to be unsatisfiable.
To obtain a better picture of where the hard instances are, sampling was done at n = 500 with edge densities ranging from 2.15 to 2.64. The profile for three ranks is plotted in figure 15. We also indicate the number of unsatisfiable instances at each sample point, demonstrating that the plotted region encompasses the entire transition from colorable to uncolorable for these instances. These figures support the contention that there is a broad band of hard instances at n = 500. In fact, the frequent hard instances occur well beyond the 2.31 threshold, where 50% of the instances become unsolvable, to 2.5 and beyond.
Fig. 15. (Top) The cost of the 25%, 50% and 75% ranks at n = 500. Samples were taken at every 0.01 in e/n. There are 100 instances at each sample point. (Bottom) The number of instances that were not satisfiable. The little tops indicate the number of instances on which both Smallk and Ntab_back exceeded the one hour time limit. It is likely that most of these were unsatisfiable as well.
As further evidence, a series of tests were run at n = 700. Extrapolating from figure 13, we would expect the median cost at γ = 2.31 to be in the region of 10 hours or more for these sample points, making extensive unbounded tests infeasible. Instead, we opted for a time limit of one hour, using only the Smallk program. 8 The results show that indeed the median cost is greater than 1 hour in many cases. Given that the median cost at n = 500 does not exceed 121 seconds for any γ, growth to a median in excess of 3600 seconds at n = 700 reflects a growth rate of at least n^10 over this range, sufficient to convince us that the instances are typically hard. We thus believe that the presentation in figure 16 is a reasonable representation of the difficulty region at n = 700.
8 For the harder instances it is rare for the two programs to be out by more than a factor of 4 to 8, so we do not expect significantly different results if Ntab_back were also used.
This evidence is consistent with the conjecture that the hard region is in fact growing in width (as a ratio to n) as n increases, with 50% or more of the instances being hard from less than 2.3 to 2.5 or more.
Fig. 16. The number of instances at n = 700 that were not completed in one hour, at each sample point.
5 Critical Graphs and Thresholds
The most fundamental question that must be addressed in understanding the link between the nature of the order parameter and the difficulty algorithms have in solving instances is the nature of the constructs that cause the algorithms difficulty. For graph coloring, almost all search algorithms proceed by restricting the set of colorings in some way, either by assigning colors to selected vertices, or by restricting the available colors for vertices, or by modifying the graph in some way that reflects a restricted set of colorings, such as merging two independent vertices or adding an edge. Some algorithms use a combination of these techniques.
When a graph is uncolorable, all such methods must eventually fail. If a graph is colorable, then the increasing restrictions can create a situation in which there is no solution. In either case, the graph (with possibly some coloring restrictions) will contain a subgraph which is sufficient to show uncolorability. The efficiency of these algorithms depends on finding and verifying (at least implicitly) at least one such uncolorable substructure.
A graph G = (V, E) is k+1-edge-critical if it is not k-colorable, but every subgraph on a proper subset E' ⊂ E is k-colorable. We use the term critical to mean edge-critical. Clearly, such a G is k+1-colorable. It is apparent from the preceding paragraphs that the size, number and structure of critical subgraphs of an uncolorable graph are of paramount importance to the efficiency of most search algorithms. Studying critical substructures can help us understand hard instances [12,7,1]. We observe that an instance cannot be hard if it is unsolvable for reasons detected by an algorithm in polynomial time: in the case of coloring this occurs when instances contain a small critical subgraph. For example, non-3-colorable graphs are easy to detect if they contain a 4-clique. It is well known (see e.g. Palmer [20], chapter 3) that the threshold probability for 4-cliques in graphs drawn from the G_{n,p} class is p = 1/n^{2/3}. This means O(n^{4/3}) is an upper boundary on the number of edges in random graphs that may be hard, since programs such as Smallk easily detect 4-cliques.
In the appendix we show that, for a suitable range of edge probabilities, the 4-critical subgraphs are of order n^ε with high probability; this range of probabilities corresponds to an expected number of edges well beyond the threshold. Although this result suggests the likelihood that the hard region extends well beyond the threshold asymptotically, the n^{4/3} boundary has little to do with the actual range of hard instances encountered in our empirical study. If we want the smallest critical subgraphs to be of order z, that is n^ε ≥ z, and we choose the edge probability at this boundary, then we would require n ≥ z^250 according to this formula. Thus, even to get a high probability that the minimum critical subgraph is of order greater than 4 (i.e. to avoid 4-cliques) would require n > 2^500.
Using techniques similar to those in [4] it can be shown that near the threshold the smallest critical subgraph will be Ω(n) with high probability. Indeed, for p ≤ 1/n^{11/12} the smallest resolution proof (under the straightforward reduction to SAT) will be super-polynomial [14].
Although large critical subgraphs are necessary for hard non-colorable instances, being large is not sufficient. Starting with 4-cliques, arbitrarily large 4-critical graphs can be constructed using only the Hajos join construction [8], 9 which, even when embedded as subgraphs in larger graphs, could be recognized immediately using techniques similar to those in Smallk as not having a 3-coloring. The reason is that in these graphs there are always at least two near-4-cliques (n4c's; 4-cliques with one edge removed). In an n4c the two independent vertices are frozen, that is, they must be the same color. These lead to chains of collapse that show immediately that the graph is not 3-colorable. Thus, the structure of the critical graphs must be such that they are not easily recognized. It is known that the threshold for n4c's is p = 1/n^{4/5} [20, chapter 3]. Thus, for random graphs with o(n^{6/5}) edges these will disappear asymptotically. By considering the expectation formula, we find that for p = (n(n-1)(n-2)(n-3)/4)^{-1/5} the expected number of n4c's is approximately one. The ratio of the expected number of edges to vertices at this value of p, for n = 700, is 2.45, which is a little greater than the threshold for three-coloring. This range compares well to the top of the hump in figure 16. However, hard instances occur well above even this boundary. Apparently the presence of a few n4c's is not enough to make the coloring task easy.
9 Given two non-colorable graphs G_1 with edge {x, y} and G_2 with edge {v, w}, the join construction creates a new graph by deleting the two edges, merging the two vertices x and v, and adding the edge {y, w}.
What may not be apparent from the analysis is that as we increase the number of nodes, when the 4-cliques disappear we see a sudden jump in the size of the 4-critical graphs to ones with order n edges.
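The join construction of footnote 9 is easy to state in code. The following sketch is ours; the vertex renumbering convention is an arbitrary choice.

    def hajos_join(edges1, n1, x, y, edges2, n2, v, w):
        # Delete edge {x, y} from G1 and {v, w} from G2, merge x with v,
        # and add the edge {y, w}. G2's vertices are renumbered after G1's.
        def shift(z):
            if z == v:
                return x                        # v is merged into x
            return n1 + z - (1 if z > v else 0)
        e1 = {frozenset(e) for e in edges1} - {frozenset((x, y))}
        e2 = {frozenset((shift(a), shift(b)))
              for a, b in edges2 if frozenset((a, b)) != frozenset((v, w))}
        joined = e1 | e2 | {frozenset((y, shift(w)))}
        return sorted(tuple(sorted(e)) for e in joined), n1 + n2 - 1

For example, joining two 4-cliques in this way gives a 7-vertex, 11-edge graph that is not 3-colorable.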
Table 2
Size distributions of the critical subgraphs of threshold graphs, k = 3, separated into small cases and remaining cases. 100 samples were taken at each n.
In table 2 we show a breakdown of certain critical subgraphs of the threshold non-3-colorable graphs. We have separated out very small critical graphs, that is, 4-cliques and other small graphs with up to 12 edges, from the other critical graphs. Two features of table 2 are particularly important. First, we see a reduction in the number of small critical graphs, from more than 10% of cases at the smallest n to only a few percent at the largest. At the largest n there are three occurrences of 4-cliques, but the next smallest critical graph we observed contains 36 edges. This data seems consistent with the hypothesis that there will be a sudden jump in the size of the critical subgraphs. Second, the growth in size of the larger critical graphs is striking. Indeed, these graphs are growing faster than linearly with n. That is not possible asymptotically, since the threshold occurs at O(n) edges. We take this as evidence that critical graph size is O(n), and we assume that the apparent superlinear growth is towards some fixed ratio of critical graph size to n.
The critical graphs we report above may be unrepresentative. Those we report are found by randomly permuting the edges of the threshold graph, then deleting the edges in order, replacing each edge that causes the graph to become colorable. Thus, they may be either unusually large or small, although there is a straightforward argument that smaller ones will be the most probable. It would be interesting to determine the minimum critical subgraph in each instance. Unfortunately this will likely prove difficult, possibly almost as hard as listing all critical graphs. Further investigation is needed on how to compute all critical graphs [23].
The critical set of edges in G is the set of edges that occur in every critical sub-graph
of the graph. Thus, these act as a lower bound on the size of the smallest
critical subgraph, although the smallest critical subgraph can be arbitrarily
larger than the critical set of the graph. Note that an edge is in the critical
set i its removal from G makes G uncolorable. This gives a straightforward
way of computing the critical set.
By computing critical sets, we have a lower bound on the size of the smallest
critical graph. All edges in the critical set must be examined by any algorithm
conrming uncolorability. In table 3 we report the number of edges in critical
sets at the threshold, and in gure 17 we show the distribution of sizes of
critical sets for 225. The minimum often obtains its minimum possible
value of 1, since the scapegoat edge is always critical. Despite this, we see
a mean size of critical set of 0:35n. This suggests that the critical set is
growing linearly, as is the critical graph size.
Size of Critical Sets
200
Table
The critical sets, that is the number of edges that must occur in every critical
subgraph of the threshold non-3-colorable graphs. Notice that there is a distinction
between scapegoats and edges in the critical set of the threshold graph. Given the
sequence used to generate the graph, the scapegoat is uniquely the last edge
added to form the uncolorable threshold graph. This edge must be in the critical
set, because its removal makes the graph colorable. There may be many other edges
in the critical set.
Number of Edges
Histogram of Threshold Sets at n=225
Fig. 17. Distribution of the number of the number of edges in the critical set at
These edges are those in the threshold graph whose removal makes the
graph colorable. There is always at least one such edge, i.e. the scapegoat added as
the last edge in the threshold grpah.
The result that critical graphs are O(n) at the threshold and that they grow
very quickly to this size once the graphs are large enough to prevent 4-cliques
means that more intelligent algorithms cannot be expected to perform significantly
better, unless some remarkable new method for proving uncolorability
with structures other than subgraphs is discovered. Thus phase transition
instances can be expected to be hard for all algorithms. We have reported
experiments elsewhere with 4-coloring and triangle-free random graphs for 3-
coloring [6]. In both cases almost all critical graphs were hard, and there was
very little distance between the rst frozen and threshold points. We conjecture
that similar results will apply to other NP-complete problems in which
phase transitions provide hard instances. It would be especially interesting to
study problems such as the TSP, in which the analogue of critical graphs are
not so obvious.
The fact that there are large (O(n)) critical sets may also have implications
for statistical mechanical analysis. We have dened a threshold with respect
to the frozen development process as the edge which when added causes the
graph to become non-k-colorable. Let us refer to this as the t 0 threshold. The
subscript 0 refers to the fact that we will 3-color the graph with no edge
violations
We suggest that we can take the frozen development measure for a sequence
of thresholds, t We say that a graph is (k; v)-colorable if it can be
k-colored by violating at most v edges.
An edge is violated by a coloring c if both endpoints receive the same color.
We say that a pair of vertices (x; y) is (k; v)-frozen under (k; v)-colorings if
(1) G is (k; v)-colorable,
and
(2) for every (k; v)-coloring c of G we have
Note that if we set then this is exactly the same frozen we have used
previously. Also, if a pair is frozen in Gm , then it is frozen for every (k; v)-
colorable graph Gm 0
in the sequence where m 0 > m.
Now we are ready to dene the multiple thresholds. We dene the (k; v)-
threshold as
In an analogous manner we can also dene (k; v)-scapegoats, the (k; v)-spine
and so on to complete a (k; v) frozen development process. Although in principle
we can now use our frozen development process, the cost of doing so will
be very high.
One indication about (3; 1)-colorings we do have comes from the size of the
(3; 0)-critical sets. Note that on the index before the threshold graph, that is
the last 3-colorable graph at t 0 1, any of the approximately 2:3n edges may
be violated in a (3; 1)-coloring. However, at the next edge every (3; 1)-coloring
must violate exactly one of the edges in the (3; 0)-critical set. The data in Table
3 show that this is approximately 0:35n on average. However, an average of
0:35n nevertheless represents a large increase in the freedom of a (3; 1)-coloring
over a (3; 0)-coloring. Each of the edges in the threshold set gives a distinct
set of 3-colorings of the graph when it is the one edge violated, since that pair
of vertices receive the same color only in those colorings. We expect that at t 0
the set of (3; 1)-frozen pairs will be much less than the number of (3; 0)-frozen
pairs. Each separate violation will result in a dierent set of pairs being frozen
the same. Similarly, at t i the number of (3; i +1)-frozen pairs may be less than
the number of (3; i)-frozen pairs.
When statistical mechanical models are used to study the phase transition
typically the order parameter is based on some measure of coloring similarity
taken with respect to the minimization of violated edges, corresponding to
clauses in SAT [16]. Under our notation, the measure of coloring similarity
would be taken at min v such that G is (3; v)-colorable. We have used the
number of frozen pairs as the measure of coloring similarity, but the above
discussion should apply to any similarity measure.
Thus, we believe it is possible that the sudden jump we see in the similarity
measure for the (3; 0)-spine may not be evident if the measure were instead
taken with respect to minimum violation colorings. This raises the possibility
that the absence of a discontinuity in the order parameter may not necessarily
correlate to easy instances.
We also do not have a solid theoretical link showing that the presence of
a large jump implies that the minimal critical structures are large and lack
properties that make them easily identiable. Although there are interesting
empirical connections, supported by this research, in our opinion neither the
necessity nor su-ciency of rst order transitions for hardness have been shown.
In fact, although large critical graphs and discontinuity at the threshold are
both evident in our empirical and theoretical results, we still have only weak
explanations that link the two phenomena.
6 Conclusions
Our contributions in this paper are in two areas: the frozen development of
graph coloring as we add edges to graphs; and the reasons why graphs found
near the colorability threshold are hard to color.
We have described the notion of 'full frozen development' of graphs. This gives
rise to a denition of the spine of a graph, analogous to the backbone or spine
of a satisability problem. We have shown that the full frozen development can
be calculated in O(n 2 log n) calls to a graph coloring program, and reported
the practical algorithm we use to calculate full development up to
We also showed that this can be used to calculate unbiased estimates of the
probability of colorability in regions even where this probability is
and we reported empirical estimates of probability using this method.
We have reported a number of novel empirical results on the development of
the spine. We showed empirically that the spine of a graph shows a dramatic
jump, usually just before the threshold in colorability. Since this measure is
based on elements (pairs) which are quadratic in the number of vertices, we
also converted this measure to one of counting the number of equivalence
classes forced by the set of three colorings. This results in what we call a
'collapsed' graph, which shows a sharp drop in size corresponding to the jump
in frozen pairs. The collapse is always as dramatic as the freezing. Every
instance we studied showed a collapse of at least 10% in size when a single
edge is added, and the average was 28%.
In terms of the di-culty of coloring, we reported empirical results demonstrating
that there is a widening range over which the best programs available,
including a conversion to SAT, show exponentially (in n) increasing median
time. This median growth rate is 2 n=25 over the range examined. We then
analyzed the nature of critical graphs at thresholds, theoretically and empir-
ically. Unless remarkable new methods are discovered, large critical graphs
correspond to hard instances, because algorithms have to investigate a large
number of edges to prove uncolorability. We observed that at small n, very
small critical graphs such as 4-cliques do occur, but that when these disappear
there is a sudden jump to critical graphs of size O(n). This result agrees
with prior conjectures, and is supported by theoretical results on hardness of
We are aware that the spine we have introduced may not be the order parameter
for 3-COL. Our denition is similar to the spine in SAT dened by Borgs
et al [2], although not exactly analogous. A minimum edge violation coloring
approximation, would be closer to the backbone measure of Monasson et al
[19]. If we used a backbone-like measure, this sharp (discontinuous) change
might be reduced or eliminated. The empirical evidence supporting this is
based on the critical sets, that is the set of edges such that the removal of any
one of them would make the uncolorable graph colorable. At the threshold,
this set is on average large, possibly 0:35n or larger. This means that allowing
one edge to be violated might cause the t 0 threshold to exhibit few or no frozen
pairs. To verify this conjecture we will probably need to make our programs
more e-cient, as the number of colorings needed could be signicant.
We hope that these results will contribute to the analysis using statistical mechanics
of phase transitions in graph coloring. In particular, the occurrence of
a jump in freezing and a collapse in graph size is highly suggestive of a discontinuity
of an the order parameter. In satisability, such a discontinuity has
been correlated with the hardness of 3-SAT instances. Since we have reported
results suggesting both a discontinuity and the hardness of 3-COL instances,
we hope that future investigations will uncover a link like that found in SAT.
--R
Where the really hard problems are.
Many hard examples for resolution.
Experimental results on the crossover point in random 3-SAT
Well out of reach: why hard problems are hard.
The satis
Remarks on the graph colour theorem of Haj
Can't get no satisfaction.
The hardest constraint problems: A double phase transition.
The main properties of random graphs with a large number of vertices and edges.
A new look at the easy-hard-easy pattern of combinatorial search di-culty
Determining the chromatic number of a graph.
Personal communication
Hard and easy distributions of SAT problems.
Statistical mechanics of the random k-sat model
Phase transition and search cost in the 2
Determining computational complexity from characteristic 'phase transitions'.
Graphical Evolution: An introduction to the Theory of Random Graphs.
Clustering at the phase transition.
An empirical study of phase transitions in binary constraint satisfaction problems.
Finding all muses
Locating the phase transition in binary constraint satisfaction problems.
The Gnm phase transition is not hard for the Hamiltonian cycle problem.
Personal communication
personal communication.
--TR
Graphical evolution: an introduction to the theory of random graphs
Many hard examples for resolution
The hardest constraint problems
Experimental results on the crossover point in random 3-SAT
3-coloring in time 0(1.3446^n)
--CTR
Roger Mailler, Comparing two approaches to dynamic, distributed constraint satisfaction, Proceedings of the fourth international joint conference on Autonomous agents and multiagent systems, July 25-29, 2005, The Netherlands
Syed Ali , Sven Koenig , Milind Tambe, Preprocessing techniques for accelerating the DCOP algorithm ADOPT, Proceedings of the fourth international joint conference on Autonomous agents and multiagent systems, July 25-29, 2005, The Netherlands
Karen Meagher , Brett Stevens, Covering arrays on graphs, Journal of Combinatorial Theory Series B, v.95 n.1, p.134-151, September 2005
Adam Beacham , Joseph Culberson, On the complexity of unfrozen problems, Discrete Applied Mathematics, v.153 n.1, p.3-24, 1 December 2005
Peter Jonsson , Andrei Krokhin, Recognizing frozen variables in constraint satisfaction problems, Theoretical Computer Science, v.329 n.1-3, p.93-113, 13 December 2004
Gabriel Istrate , Stefan Boettcher , Allon G. Percus, Spines of random constraint satisfaction problems: definition and connection with computational complexity, Annals of Mathematics and Artificial Intelligence, v.44 n.4, p.353-372, August 2005
Stefan Boettcher, Evolutionary local-search with extremal optimization, Neural, Parallel & Scientific Computations, v.10 n.2, p.249-258, June 2002
Paul Beame , Joseph Culberson , David Mitchell , Cristopher Moore, The resolution complexity of random graphk-colorability, Discrete Applied Mathematics, v.153 n.1, p.25-47, 1 December 2005 | spine;threshold phenomena;frozen development;graph coloring;backbone |
500505 | Constructing an asymptotic phase transition in random binary constraint satisfaction problems. | The standard models used to generate random binary constraint satisfaction problems are described. At the problem sizes studied experimentally, a phase transition is seen as the constraint tightness is varied. However, Achlioptas et al. showed that if the problem size (number of variables) increases while the remaining parameters are kept constant, asymptotically almost all instances are unsatisfiable. In this paper, an alternative scheme for one of the standard models is proposed in which both the number of values in each variable's domain and the average degree of the constraint graph are increased with problem size. It is shown that with this scheme there is asymptotically a range of values of the constraint tightness in which instances are trivially satis-able with probability at least 0.5 and a range in which instances are almost all unsatisfiable; hence there is a crossover point at some value of the constraint tightness between these two ranges. This scheme is compared to a similar scheme due to Xu and Li. | Introduction
Following the paper by Cheeseman, Kanefsky and Taylor [2] in 1991, phase transition
phenomena in NP-complete problems, including constraint satisfaction problems
(CSPs), have been widely studied. In CSPs, using randomly-generated in-
stances, a phase transition is seen as the tightness of the constraints is varied; when
the constraints are loose, there are many solutions and it is easy to nd one; when
the constraints are tight, it is easy to prove that there are no solutions. Hard instances
occur over the range of values of the constraint tightness corresponding to
an abrupt fall in the probability that an instance has a solution from near 1 to near
0. The peak in average cost of solving an instance occurs, empirically, at or near
the crossover point, where the probability of a solution is 0.5.
A number of possible models for generating random CSPs are discussed in section
2. However, Achlioptas et al. [1] showed that these standard models do not have
a phase transition in the same sense as other NP-complete problems such as graph
colouring, because 'asymptotically, almost all instances generated will not have a
solution'. They proposed an alternative model for generating random instances,
described below. However, in this paper, it is shown that the di-culty is not
necessarily with the way in which the standard models generate instances, but in
the way that the parameters change as the problem size increase, and it is proved
that for one of the standard models, there will asymptotically be a crossover point
when the constraint tightness is within a certain range.
Random Binary CSPs
A constraint satisfaction problem (CSP) consists of a set of variables X= fx
for each variable, a nite set D i of possible values (its domain); and a set of con-
straints, each of which consists of a subset fx of X and a relation R
informally, the constraint species the allowed tuples of values for
the variables it constrains. A solution to a CSP is an assignment of a value from
its domain to every variable such that all the constraints are satised; a constraint
is satised if the tuple of values assigned to the variables it constrains is in the
constraint relation. A k-ary constraint constrains k of the problem variables; in a
binary CSP, all the constraints are binary.
Instances of binary CSPs can easily be generated randomly since a binary CSP
can be represented by a constrant graph and a set of constraint matrices. The
constraint graph of a binary CSP is a graph in which there is a node representing
each variable and for every constraint there is an edge linking the aected variables.
constraint relation between a pair of variables with m 1 and m 2 values in their
respective domains can be represented by an of Boolean values, in
which a 0 indicates that the constraint does not allow the simultaneous assignment
of the corresponding pair of values, and 1 that it does. Each entry of 0 in a constraint
matrix corresponds to a nogood, i.e. a forbidden pair of variable-value assignments.
Random binary CSPs can be generated using a simple model described by the tuple
is the number of variables, m is the uniform domain size, p 1 is
a measure of the density of the constraint graph, and p 2 is a measure of the tightness
of the constraints. We rst generate a constraint graph G, and then for each edge in
this graph, we choose pairs of incompatible values for the associated con
ict matrix.
(Note that each edge will have a dierent constraint matrix associated with it.)
There are several variants of this basic model, diering in how they treat p 1 and
we can either choose exactly p 1 n(n 1)=2 of the available edges or choose each
edge independently with probability p 1 . Similarly, in forming a constraint matrix,
we either choose exactly of the possible pairs of values to be the incompatible
pairs for this constraint, or choose each pair independently with probability p 2 .
If both p 1 and p 2 are treated as probabilities, we get the model termed Model
A in [14]; if both p 1 and p 2 are proportions, we get Model B. Many experimental
and theoretical studies of random binary CSPs have used a model equivalent to
one of these (see [5] or [11] for a survey). Sometimes, p 1 is treated as a proportion
and p 2 as a probability, giving Model D, and for completeness, Model C has p 1 as
a probability and p 2 as a proportion. In Models B and D, the constraint density,
vary continuously, since the number of constraints must be integral,
and the number of constraints might be a better parameter for these models. Some
researchers have indeed used models equivalent to Models B or D, using the number
of constraints as a parameter.
3 Phase Transition Studies in Binary CSPs
Most investigations of phase transitions in binary CSPs have been more concerned
with observing and explaining the behaviour of samples of problems at specic parameter
values than with how behaviour changes with problem size. Some have
been concerned with predicting where the hardness peak will occur (e.g. [14]). Others
consider the rare instances in the under-constrained region whose solution cost
is extremely high (e.g. [10, 6, 15]). It has also become standard practice for CSP
researchers proposing new algorithms or heuristics to show performance comparisons
across the phase transition, both because performance improvements in the
hard region are most worthwhile, and because one algorithm may not be uniformly
better or worse than another across the whole range of constraint tightnesses.
Furthermore, most experimental studies have not considered a wide range of
problem sizes, because of the di-culty of solving large samples of CSPs with many
variables. Typically, random CSPs have been generated with
problem sizes have then rarely been more than 100 variables. Occasionally problems
with e.g. [4], and then larger problem sizes have been
possible. Prosser [13] carried out an extensive investigation of CSPs generated
using Model B. In three series of experiments, he showed the eect of varying one
of the other parameters as well as in one series, the number of variables
varied (from 20 to 60) while m and p 1 were constant.
Grant and Smith [7, 8], in experiments comparing two CSP algorithms, explicitly
considered the eect of increasing problem size, and proposed keeping the average
degree of the constraint graph constant at the same time. Empirically, over the
range of sizes that were considered (20 to 50) this ensures that the peak in average
di-culty occurs at roughly the same value of p 2 for each value of n. There is some
theoretical justication for this. The crossover point is predicted to occur when the
expected number of solutions, E(N) is equal to 1. Since
if m and d are xed as n increases, the value of p 2 for which E(N)=1 will also be
constant. However, this has been shown to be a poor predictor of the crossover point
when the constraint graph is sparse. In [5], it was shown that, if the degree of the
constraint graph rather than the constraint density is kept constant, the di-culties
identied by Achlioptas at al. emerge much more slowly as problem size increases.
Nevertheless, their asymptotic result still holds, so that the crossover point will not
occur at the same value of p 2 indenitely as n increases.
Smith and Dyer [14] considered two ways in which the other parameters of Model
might change as n increases (p 1 constant; m constant or considering
the accuracy of the prediction of the crossover point given by E(N) = 1. They
pointed out that if m is constant the crossover point will eventually be smaller than
the smallest value of p 2 allowed by Model B, for large enough n, so that almost all
instances generated will have no solution. This will also be true if
To summarise, there has been little discussion of how the parameters m and
should increase with n, largely because the focus has been on discussing the
behaviour of CSPs at experimental sizes, rather than on asymptotic behaviour.
There has, however, been an implicit assumption that as problem size increases,
the domain size, m, should remain constant. This is perhaps by analogy with graph
colouring problems. Graph colouring problems can easily be represented as binary
CSPs, with a variable for each node in the graph; the domain is the set of available
colours. 3- and 4-colouring problems were considered in [2]. Clearly, the domain
size for k-colouring problems is k, however large the graph.
Achlioptas et al. [1] showed that these standard models do not have a phase
transition in the same sense as other NP-complete problems such as graph colouring,
because asymptotically, as n !1, almost all instances are unsatisable, provided
that it is a proportion. They show,
in fact, that asymptotically almost all instances are trivially unsatisable, in the
sense that there will be at least one variable which has no value consistent with the
values of adjacent variables. This is shown to be true if the average degree of the
nodes in the constraint graph is constant; if the constraint density is constant, then
the number of constraints grows even faster with n so that trivially unsatisable
instances will occur earlier. In both cases the domain size is assumed to be constant
with n. They attribute the di-culty to the fact that the constraint matrices are
random rather than structured (as for instance in graph colouring) and suggest
that it can be dealt with by changing the way in which the constraint matrices are
generated.
In [1], Model E is proposed as an alternative: rather than generating the constraint
graph and the constraint matrices separately, in Model E, a prescribed number
of nogoods are selected at random. However, a disadvantage of this model is
that as the number of nogoods increases it rapidly generates a complete constraint
graph, most of the constraints having only a small number of forbidden pairs of
values. This makes it unrepresentative of the CSPs that arise in modelling real
problems.
Models A, B, C and D specify only how a set of instances should be generat-
ed, given values of n; m; p 1 and p 2 . They say nothing about how the parameters
should change as n increases. It has been shown that if m and either p 1 or the
average degree of the constraint graph, d, are constant as n increases, there is not
an asymptotic phase transition. However, this does not necessarily require us to use
a dierent set of models. We could instead consider changing m and/or p 1 or d as
increases. Here, a scheme for using Model D and changing the other parameters
with n is proposed. It is shown that this guarantees that the phase transition will
occur within a specied range of values of p 2 , whatever the value of n. Since Model
D is otherwise unchanged, this modication does not require any change to the way
in which the constraint matrices are generated, unlike Model E.
4 Backtrack-Free Search
A CSP instance can be solved, or it can be proved that there is no solution, by a
complete search algorithm, for instance a simple backtracking algorithm (BT). This
algorithm attempts to build up a consistent solution to the CSP by considering
each variable in turn, and trying to assign a value to it which is consistent with
the existing assignments. If a consistent value is found, the next variable is tried;
otherwise, BT backtracks to the most recently assigned variable which still has an
untried value, and tries a dierent value for it. The algorithm proceeds in this
fashion until either a complete solution is found or every possible value for the rst
variable has been tried without success, in which case the problem has no solution.
A transition from satisable to unsatisable instances has been observed in
random binary CSPs if, for constant n, m and p 1 , a large number of hn; m;
instances are generated at each of a range of values of p 2 and each instance is
solved using an algorithm such as BT. Figure 1 shows the typical pattern seen
if the average search cost is computed for each value of p 2 . Here, the number
of consistency checks, i.e. the number of references to an element of a constraint
matrix, is used as the measure of search cost. Similar behaviour has been seen
for a wide range of parameter values and complete search algorithms, as shown for
instance in [13].
In
Figure
1, the instances were generated using Model B, but there is little difference
in the average cost curves between Models A, B, C and D. The maximum
average cost occurs where approximately half of the generated instances have solutions
and half do not, i.e. around the crossover point, where the probability that a
random instance has a solution is 0.5. There is a narrow region around the peak
where the generated populations contain a mixture of satisable and unsatisable
instances; for smaller values of p 2 , all the instances can be solved, and are increasingly
easy to solve as the constraints become looser; for larger values of p 2 , none
of the instances have solutions, and as the constraints become tighter, they are
increasingly easy to prove unsatisable.
In the 'easy-satisable' region, Figure 1 shows a point where the gradient of the
median curve changes, at approximately. This is also a kind of crossover
point: it is the point where the algorithm can solve 50% of the generated instances
without backtracking. In other words, in 50% of instances, as each variable is
considered, there is a value consistent with the past assignments, and the algorithm
never has to return to a previous variable. Unlike the solubility crossover point,
this is algorithm-dependent. More e-cient algorithms than BT, such as forward
checking [9], which considers the eect of each assignment on the variables not yet
assigned, can avoid failures due to incorrect choices which BT cannot, and so solve
instances without backtracking at larger values of p 2 .
The region where most problems can be solved without backtracking by an
algorithm is a region of 'trivial satisability'. This might be viewed as roughly
complementary to the region of trivial unsatisability identied by Achlioptas et
al. Clearly, if an instance can be solved without backtracking, it has a solution.
Thus, the probability that a random instance can be solved without backtracking is
a lower bound on the probability that it has a solution, and the constraint tightness
at which the probability of backtrack-free search is 0.5 is a lower bound on the
crossover point. Figure 1 suggests that for BT this bound will not be very tight.
However, it has the advantage that it can be calculated, as will be shown below.
Consistency
checks
Constraint tightness (p2)
Figure
1: Median cost of solving Model B problems with
using BT - 10,000 problems per p 2
If we can generate instances in such a way that the probability of backtrack-free
search is always positive for some range of values of p 2 , for all values of n, we shall no
longer have the situation where almost all problems become trivially unsatisable
for su-ciently large values of n. This would then be the rst step towards a CSP
model which gives an interesting asymptotic phase transition.
For Model D, the probability that BT can solve a problem without backtracking
can be calculated. This is given in [15] for complete constraint graphs. For incomplete
constraint graphs, if the variables are instantiated in the order
the probability that there is at least one value in the domain of variable i which is
consistent with the assignment already made is:
where d i is the past degree of variable i, i.e. the number of variables in
which constrain it. This is so because the nogoods in the constraint matrices are
generated independently. The probability that at least one value of every variable
is consistent with the previous assignments is:
Y
Let w be the maximum past degree of any variable in this ordering, i.e. the width
of the ordering. Then
Y
Y
There is a well-known algorithm for nding a ordering of the nodes with minimum
width [3], and Dyer and Frieze have shown that the minimum width of a
random graph is almost surely bounded by the average degree of the graph 1 . Experimental
results suggest that in randomly generated graphs, even of small size,
the minimum width will be strictly less than the average degree, unless the graph is
complete (in which case, of course, the width of any ordering is equal to the degree)
or extremely sparse.
We can therefore use (1 (1 q d
d is the average degree, as a lower
bound on the probability of backtrack-free search using BT. If we can ensure that
the value of this is at least 0.5 for some xed value of the constraint tightness, for
all n, we know that the true probability of backtrack-free search at that constraint
tightness is at least 0.5, and hence we have the required region of trivial solubility.
How do we keep (1 (1 q d
constant at a specied value of p 2 , when n
is increasing? To achieve this, (1 q d
must decrease, and hence either m must
increase or d must decrease. Intuitively, if the number of variables, n, increases
while the other parameters are kept constant, it becomes harder to nd a value for
every variable satisfying the constraints between it and the past assignments. This
can be compensated for by increasing the probability that an individual variable
is consistent with the past assignments, either by increasing the variable's domain
size, or by decreasing the number of past variables which constrain it. Decreasing
the average degree of the constraint graph as n increases is not a desirable option,
as it will eventually result in the graph being disconnected, so m must increase.
5 Upper bound on the crossover point
As shown in the last section, a lower bound on the crossover point is given by the
constraint tightness, ~
which the probability of backtrack-free search using BT
is at least 0.5. An upper bound, ^
given by the value for which the expected
number of solutions, E(N), is 0.5; for any p 2 the probability that there is a
solution is at most 0.5.
For the standard models for random binary CSPs,
we consider the value of p 2 at which E(N) = 1, we must have It
is clear that if m is increasing and d is constant, then to satisfy this equation, p 2
must also increase; as 1. Hence, the upper bound on the crossover
increase and we shall therefore not be able to show that there is a
range of values of the constraint tightness for which instances are unsatisable, with
high probability. To ensure that ^
does not increase with m, d must also increase.
However, bearing in mind the intuition given at the end of the last section, d must
not increase so fast that the region of trivial solubility disappears; increasing d and
makes instances more likely to have no solution, whereas increasing m makes
them more likely to have a solution, and a balance must be struck.
There is already evidence, in fact, that increasing m and d arbitrarily with n will
not necessarily lead to an asymptotic phase transition. In [14], the class of problems
problems in which and the constraint graph is
complete. For these problems, the crossover point ! 0 as n ! 1. Although the
number of values for each variable is increasing, the degree of the constraint graph
is also increasing (d = n 1), and too fast to allow instances to remain satisable.
6 A Proposed Scheme
To conne the crossover point between values of p 2 which are either constant or
converging as n increases, both m and d must increase with n. In order to keep as
Personal communication: this is a consequence of results in [12].
close as possible to the experimental use of existing models, in which m and p 1 or
d are xed, it is desirable to increase d and m only slowly with n.
Suppose we have a base population of problems, dened by the parameters
which will give the initial value of ~ p 2 , say ~
To increase d slowly with
n, suppose it grows as log n, e.g. a is a constant, and c is
determined by n 0 ; d 0 and a. Since almost every random graph with (a=2)n log n
edges is connected if a > 1, we should choose a > 1. If the constraint graph
is disconnected, each component of the graph forms a CSP, and these would in
practice be solved separately, so that in generating random instances it is a realistic
requirement that the constraint graph should be connected.
m is dened so that for any value of n, the lower bound on the crossover point
is and d are related
as just described. This ensures that, for all n, instances with p 2 ~
2 are trivially
satisable, with probability at least 0.5. To ensure an asymptotic phase transition,
the upper bound on the crossover point, ^
should have a limiting value as n !1
which is strictly < 1. This will ensure that there is, asymptotically, a region where
instances have no solution, for all n. However, since m and d are now dened in
terms of n, nothing further can be done to ensure the required behaviour; it only
remains to be seen whether the instances generated do in fact behave in this way.
7 Asymptotic Behaviour
First, calculations using dierent base populations and dierent values of a suggest
that increasing m and d with n as proposed does give the required behaviour. Figure
base population with
5, for a range of values of a. If a is small, i.e. d is increasing extremely slowly with
increases initially with n. However, Figure 2 suggests that it does eventually
decrease, so that there is a range of values of p 2 for which unsatisable instances
occur. By construction, the lower bound on the crossover point, given by ~
constant for all n, and hence the probability that an instance has a solution, p sat ,
is at least 0.5 for
n. The empirical evidence therefore suggests that
there is, asymptotically, a phase transition somewhere between ~ 2 and the limiting
value of ^
which is < 1.
If this scheme were to be used to generate Model D instances for small values
of n, there are some practical considerations which would need to be taken into
account. Specically, m must be an integer, and nd must be an even integer, since
the number of constraints is nd=2. Having calculated d from d = a log n+c, it might
then be necessary to adjust it slightly. Then, rather than choosing m as above, a
possible choice is the smallest integer such that (1 (1 (1 ~
0:5. The
upper bound on the crossover point would not then follow exactly the curves shown
in
Figure
2, when n and a are such that m is small, and the lower bound on the
crossover point would not be constant at ~
2 . However, since the principal concern
here is with asymptotic behaviour, these variations when the values are small do
not signicantly aect the argument.
To show that the proposed scheme does give the required asymptotic behaviour,
we need to show what happens to the upper bound on the crossover point. Suppose
that
2 is the constraint tightness for which the expected number of solutions is ,
for 0 < < 1. Then for
Upper
bound
on
crossover
point
Figure
2: Calculated upper bound on crossover point, from a base population with
Hence, log ^
and d is given by:
It can be shown (see Appendix A.1) that
lim
For instance, for the problems shown in Figure 2, ~
Since can be arbitrarily close to 0, we have
lim
Note that the limiting value of ^
does not depend on a, although Figure 2 shows
that the value of a does aect the rate of convergence.
By increasing m and d with n as specied, we can therefore ensure that for all n,
the value given by the base population, and lim n!1 p
Hence, asymptotically there is a crossover point between
these two values of the constraint tightness, and we do have an asymptotic phase
transition with this scheme.
Furthermore, both d and m can increase quite slowly with n, so that for the
range of problem sizes that are likely to be solvable in practice, this scheme will be
close to schemes in which m and d are constant. This depends on the value of a; if
a is large, d increases relatively rapidly with n, and m also increases very fast, and
will become larger than n. Figure 3 shows that for the base population shown in
100 1000 10000 100000 1e+06
m/n
Figure
3: Ratio of m to n, from a base population with
Figure
2, m becomes much larger than n, and appears to increase without limit, if
a 7:5. m remains smaller than n for a 5, and m=n rapidly approaches 0 for
a 2:5.
It can be shown (Appendix A.2) that
lim
For the populations shown in Figure 2, ~ so that lim n!1
a 6:934, which accords with Figure 3.
Figure
2 suggests that choosing a large value of a will give more rapid convergence
of the upper bound on the crossover point. On the other hand, m will then
grow much faster than n, giving problems very dierent from the instances normally
generated in experimental studies. For the initial parameters used in Figures 2 and
3, a value of a around 2.5 would give an upper bound on the crossover point which
is always less than its initial value, and a rapid decrease in the ratio m=n. Hence,
such a scheme has a guaranteed asymptotic crossover point but is not too dissimilar
at experimental sizes from a scheme with m and d constant.
8 Related Work
A similar scheme was proposed independently by Xu and Li [17] for Model B. Their
scheme also applies to non-binary constraints. In the binary case, the constraint
graph has (a=2)n log n edges, using the notation of this paper. The number of values
in each variable's domain is n
, for constant
The constraint tightness, p 2crit , at which E(N) = 1 has been used as a predictor
of the crossover point [16, 14]. In Xu and Li's scheme, p
=a . They
show, by considering the second moment, E(N 2 ), that if
> 1=2 and e 2
=a 1=2
then
lim
Since
> 1=2 and e 2
=a 1=2, a 2
1:4427. Hence there
is a range of acceptable values of a between 1 and 1.4427, for which the constraint
graphs are connected, and similarly a range of values of
between 0 and 1/2, where
Xu and Li's result does not apply, as far as the lower limit on the crossover point
is concerned. It is still true, however, that lim n!1 p
=a .
If we assume Model D rather than Model B, we can use the probability of
backtrack-free search, as in section 4, whether or not a and
have values in the
ranges to which Xu and Li's result applies. Hence, it can be shown that there must
still be a crossover point asymptotically, even if we cannot prove its precise location.
It can be shown (see Appendix A.3) that
lim
=a
and hence there is a crossover point, for
=a and 1 e 2
=a , even
when Xu and Li's result does not apply. It is worth noting that the relationship
between the upper and lower bounds on the crossover point is the same as found
previously in section 7.
Models B and D dier only in the generation of the constraint matrices; p 2 is the
proportion of nogoods in each matrix in Model B and the probability that a pair of
values is nogood in Model D. Since a Model B population is more homogeneous than
a Model D population, it seems likely that if Model D has an asymptotic crossover
point then so does Model B. However, this needs to be conrmed before it can be
assumed that the result just given applies to Xu and Li's scheme. On the other
hand, it was shown in [14] that Models A and B are not equivalent asymptotically,
so the results of this paper may not transfer to all the standard models.
9 Conclusions
It has been shown that it is possible to ensure an asymptotic crossover point in one
of the standard random binary CSP models, Model D, by increasing other parameters
of the model with the number of variables. This counters the objections raised
by Achlioptas et al. to these models, that asymptotically almost all instances are
trivially unsatisable. Hence, it is not necessary to change the way in which the
constraint matrices are generated in order to produce asymptotically interesting
behaviour. However, since there is an asymptotic phase transition in graph colour-
ing, for a xed number of colours, the lack of structure in random constraints does
make a signicant dierence to asymptotic behaviour. In random binary CSPs,
unless the number of values increases with the number of variables, there cannot
asymptotically be a region where almost all problems are satisable, whereas in
graph colouring problems, the number of values is the number of colours and by
denition xed.
The probability that a random instance can be solved without backtracking by
a simple search algorithm has been used to give a lower bound on the crossover
point, and this is novel in studies of phase transitions in CSPs. Although not a
very tight bound and unlikely to be useful for predicting the exact location of the
crossover point, it is adequate to show that there is a region of trivial satisability.
Xu and Li [17] have considered a similar scheme for increasing the parameters of
Model B with the number of variables. This is somewhat simpler than the scheme
considered in this paper, although it has two parameters (
a) rather than one (a).
[17] does not give any motivation for this scheme, but it has the great advantage
that, except over a certain range of valuers of the parameters, the location of the
asymptotic phase transition can be exactly determined. In section 8, it is shown
that in Model D, this scheme gives an asymptotic crossover point between xed
values of the constraint tightness even outside the range of parameters identied
by Xu and Li. Further work to show whether Models B and D are asymptotically
similar would be useful, as well as further study of the asymptotic phase transition
where Xu and Li's results does not apply. It may be that the range of parameter
values for which the asymptotic phase transition can be exactly located indicates
qualitatively dierent asymptotic behaviour from the range where their result does
not hold.
Acknowledgments
The author is a member of the APES research group (http://www.cs.strath.ac.uk/~apes)
and wishes to thank the other members. Thanks are also due to Alan Slomson for
the lemma given in Appendix A.3 and to Martin Dyer and Alan Frieze for the
results on the width of random graphs.
--R
Random Constraint Satisfaction - A More Accurate Picture
Where the Really Hard Problems are.
In search of the best constraint satisfaction search.
Random Constraint Satisfaction: Flaws and Structure.
The Phase Transition Behaviour of Maintaining Arc Consistency.
The Phase Transition Behaviour of Maintaining Arc Consistency.
Increasing tree search e-ciency for constraint satisfaction problems
The Hardest Constraint Problems: A Double Phase Transition.
Random Constraint Satisfac- tion: Theory meets Practice
Sudden Emergence of a Giant k-Core in a Random Graph
An empirical study of phase transitions in binary constraint satisfaction problems.
Locating the Phase Transition in Constraint Satisfaction Problems.
Modelling Exceptionally Hard Constraint Satisfaction Problems.
Exploiting the Deep Structure of Constraint Prob- lems
Exact Phase Transitions in Random Constraint Satisfaction Problems.
--TR
In search of the best constraint satisfaction search
The hardest constraint problems
Exploiting the deep structure of constraint problems
Sudden emergence of a giant <italic>k</italic>-core in a random graph
A Sufficient Condition for Backtrack-Free Search
Results related to threshold phenomena research in satisfiability
Lower bounds for random 3-SAT via differential equations
Random Constraint Satisfaction
--CTR
Martin Dyer , Alan Frieze , Michael Molloy, A probabilistic analysis of randomly generated binary constraint satisfaction problems, Theoretical Computer Science, v.290 n.3, p.1815-1828, 3 January
S. Durga Bhavani , Arun K. Pujari, EvIA - Evidential Interval Algebra and Heuristic Backtrack-Free Algorithm, Constraints, v.9 n.3, p.193-218, July 2004
Ke Xu , Wei Li, Many hard examples in exact phase transitions, Theoretical Computer Science, v.355 n.3, p.291-302, 14 April 2006
Alan Frieze , Michael Molloy, The satisfiability threshold for randomly generated binary constraint satisfaction problems, Random Structures & Algorithms, v.28 n.3, p.323-339, May 2006
Ke Xu , Frdric Boussemart , Fred Hemery , Christophe Lecoutre, Random constraint satisfaction: Easy generation of hard (satisfiable) instances, Artificial Intelligence, v.171 n.8-9, p.514-534, June, 2007
Ian P. Gent , Ewan Macintyre , Patrick Prosser , Barbara M. Smith , Toby Walsh, Random Constraint Satisfaction: Flaws and Structure, Constraints, v.6 n.4, p.345-372, October 2001 | random problems;constraint satisfaction;phase transitions |
500605 | Learning one-variable pattern languages very efficiently on average, in parallel, and by asking queries. | A pattern is a finite string of constant and variable symbols. The langauge generated by a pattern is the set of all strings of constant symbols which can be obtained from the pattern by substituting non-empty strings for variables. We study the learnability of one-variable pattern languages in the limit with respect to the update time needed for computing a new single hypothesis and the expected total learning time taken until convergence to a correct hypothesis. Our results are as follows. First, we design a consistent and set-driven learner that, using the concept of descriptive patterns, achieves update time O(n2logn), where n is the size of the input sample. The best previously known algorithm for computing descriptive one-variable patterns requires time O(n4logn) (cf. Angluin, J. Comput. Systems Sci. 21 (1) (1980) 46-62). Second, we give a parallel version of this algorithm that requires time O(logn) and O(n3/logn) processors on an EREW-PRAM. Third, using a modified version of the sequential algorithm as a subroutine, we devise a learning algorithm for one-variable patterns whose expected total learning time is O(l2logl) provided that sample strings are drawn from the target language according to a probability distribution with expected string length l. The probability distribution must be such that strings of equal length have equal probability, but can be arbitrary otherwise. Thus, we establish the first algorithm for learning one-variable pattern languages having an expected total learning time that provably differs from the update time by a constant factor only. Finally, we show how the algorithm for descriptive one-variable patterns can be used for learning one-variable patterns with a polynomial number of superset queries with respect to the one-variable patterns as query language. | Introduction
A pattern is a string of constant symbols and variable symbols. The language
generated by a pattern - is the set of all strings obtained by substituting strings
of constants for the variables in - (cf. [1]). Pattern languages and variations
thereof have been widely investigated (cf., e.g., [17, 18, 19]). This continuous
interest in pattern languages has many reasons, among them the learnability in
? A full version of this paper is available as technical report (cf. [4]).
the limit of all pattern languages from text (cf. [1, 2]). Moreover, the learnability
of pattern languages is very interesting with respect to potential applications
(cf., e.g., [19]). Given this, efficiency becomes a central issue. However, defining
an appropriate measure of efficiency for learning in the limit is a difficult problem
(cf. [16]). Various authors studied the efficiency of learning in terms of the
update time needed for computing a new single guess. But what really counts in
applications is the overall time needed by a learner until convergence, i.e., the
total learning time. One can show that the total learning time is unbounded in
the worst-case. Thus, we study the expected total learning time. For the purpose
of motivation we shortly summarize what has been known in this regard.
The pattern languages can be learned by outputting descriptive patterns as
hypotheses (cf. [1]). The resulting learning algorithm is consistent and set-driven.
A learner is set-driven iff its output depends only on the range of its input
(cf., e.g., [20, 22]). In general, consistency and set-drivenness considerably limit
the learning capabilities (cf., e.g., [14, 22]). But no polynomial time algorithm
computing descriptive patterns is known, and hence already the update time is
practically infeasible. Moreover, finding a descriptive pattern of maximum possible
length is NP-complete (cf. [1]). Thus, it is unlikely that there is a polynomial-time
algorithm computing descriptive patterns of maximum possible length.
It is therefore natural to ask whether efficient pattern learners can benefit from
the concept of descriptive patterns. Special cases, e.g., regular patterns, non-cross
patterns, and unions of at most k regular pattern languages (k a priori fixed)
have been studied. In all these cases, descriptive patterns are polynomial-time
computable (cf., e.g., [19]), and thus these learners achieve polynomial update
time but nothing is known about their expected total learning time. Another
restriction is obtained by a priori bounding the number k of different variables
occurring in a pattern (k-variable patterns) but it is open if there are polynomial-time
algorithms computing descriptive k-variable patterns for any fixed k ? 1
(cf. [9, 12]). On the other hand, k-variable patterns are PAC-learnable with
respect to unions of k-variable patterns as hypothesis space (cf. [11]).
Lange and Wiehagen [13] provided a learner LWA for all pattern languages
that may output inconsistent guesses. The LWA achieves polynomial update time.
It is set-driven (cf. [21]), and even iterative, thus beating Angluin's [1] algorithm
with respect to its space complexity. For the LWA, the expected total learning
time is exponential in the number of different variables occurring in the target
pattern (cf. [21]). Moreover, the point of convergence for the LWA definitely
depends on the appearance of sufficiently many shortest strings of the target lan-
guage, while for the other algorithms mentioned above at least the corresponding
correctness proofs depend on it. Thus, the following problem arises naturally.
Does there exist an efficient pattern language learner benefiting from the concept
of descriptive patterns thereby not depending on the presentation of sufficiently
many shortest strings from the target language, and still achieving an
expected total learning time polynomially bounded in the expected string length?
We provide a complete affirmative answer by studying the special case of
one-variable patterns. We believe this case to be a natural choice, since it is non-trivial
(there may be exponentially many consistent patterns for a given sample),
and since it has been the first case for which a polynomial time algorithm computing
descriptive patterns has been known (cf. [1]). Angluin's [1] algorithm for
finding descriptive patterns runs in time O(n 4 log n) for inputs of size n and it
always outputs descriptive patterns of maximum possible length. She was aware
of possible improvements of the running time only for certain special cases, but
hoped that further study would provide insight for a uniform improvement.
We present such an improvement, i.e., an algorithm computing descriptive one-variable
patterns in O(n 2 log n) steps. A key idea to achieve this goal is giving
up necessarily finding descriptive patterns of maximum possible length. Note
that all results concerning the difficulty of finding descriptive patterns depend
on the additional requirement to compute ones having maximum possible length
(cf., e.g., [1, 12]). Thus, our result may at least support the conjecture that more
efficient learners may arise when one is definitely not trying to find descriptive
patterns of maximum possible length but just descriptive ones, instead.
Moreover, our algorithm can be also efficiently parallelized, using O(logn)
time and O(n n) processors on an EREW-PRAM. Previously, no efficient
parallel algorithm for learning one-variable pattern languages was known.
Our main result is a version of the sequential algorithm still learning all one-variable
pattern languages that has expected total learning time O(' 2 log ') if the
sample strings are drawn from the target language according to a probability
distribution D having expected string length '. D can be arbitrary except that
strings of equal length have equal probability. In particular, all shortest strings
may have probability 0. Note that the expected total learning time differs only
by a constant factor from the time needed to update actual hypotheses. On the
other hand, we could only prove that O(log ') many examples are sufficient to
achieve convergence.
Finally, we deal with active learning. Now the learner gains information about
the target object by asking queries to an oracle. We show how the algorithm for
descriptive one-variable patterns can be used for learning one-variable patterns
by asking polynomially many superset queries. Another algorithm learning all
pattern languages with a polynomial number of superset queries is known, but
it uses a much more powerful query language, i.e., patterns with more than one
variable even when the target pattern is a one-variable pattern (cf. [3]).
1.1. The Pattern Languages and Inductive Inference
be the set of all natural numbers, and let IN
For all real numbers x we define bxc to be the greatest integer less than or equal
to x. Let be any finite alphabet containing at least two elements.
A denotes the free monoid over A. By A + we denote the set of all finite non-null
strings over A, i.e., A denotes the empty string. Let
INg be an infinite set of variables with A " Patterns are
strings from . The length of a string s 2 A and of a pattern - is
denoted by jsj and j-j, respectively. By Pat we denote the set of all patterns.
by -(i) we denote the i-th symbol in -, e.g.,
then -(i) is called a constant; otherwise -(i) 2 X,
i.e., -(i) is a variable. We use s(i), to denote the i-th symbol in
. By #var(-) we denote the number of different variables occurring in -,
and # x i
(-) denotes the number of occurrences of variable x i in -. If
then - is a k-variable pattern. By Pat k we denote the set of all k-variable patterns.
In the case we denote the variable occurring by x.
denotes the
string obtained by substituting u j for each occurrence of x j in -. The tuple
called substitution. For every pattern - 2 Pat k we define the
language generated by - by
. By PAT k we denote the set of all k-variable pattern languages.
denotes the set of all pattern languages over A.
Note that several problems are decidable or even efficiently solvable for PAT 1
but undecidable or NP-complete for PAT . For example, for general pattern
languages the word problem is NP-complete (cf. [1]) and the inclusion problem
is undecidable (cf. [10]), but both problems are decidable in linear time for one-variable
pattern languages. On the other hand, PAT 1 is still incomparable to
the regular and context free languages.
A finite set is called a sample. A pattern - is
consistent with a sample S if S ' L(-). A (one-variable) pattern - is called
descriptive for S if it is consistent with S and there is no other consistent (one-
pattern - such that L(- ) ae L(-).
Next, we define the relevant learning models. We start with inductive inference
of languages from text (cf., e.g., [15, 22]). Let L be a language; every
infinite sequence of strings with
to be a text for L. Text(L) denotes the set of all texts for L. Let t be a text,
and r 2 IN. We set t r the initial segment of t of length
r we denote the range of t r , i.e., t
rg. We define an
inductive inference machine (abbr. IIM) to be an algorithmic device taking as
its inputs initial segments of a text t, and outputting on every input a pattern
as hypothesis (cf. [7]).
Definition 1. PAT is called learnable in the limit from text iff there is an
IIM M such that for every L 2 PAT and every t 2 Text(L),
(1) for all r 2 IN, M (t r ) is defined,
(2) there is a - 2 Pat such that almost all r 2 IN, M (t r
The learnability of the one-variable pattern languages is defined analogously
by replacing PAT and Pat by PAT 1 and Pat 1 , respectively.
When dealing with set-driven learners, it is often technically advantageous to
describe them in dependence of the relevant set t
r , i.e., a sample, obtained as
input. Let
and refer to n as the size
of sample S.
3 We study non-erasing substitutions. Erasing substitutions have been also considered,
(variables may be replaced by "), leading to a different language class (cf. [5]).
PAT as well as PAT 1 constitute an indexable class L of uniformly recursive
languages, i.e., there are an effective enumeration (L j ) j2IN of L and a recursive
function f such that for all j 2 IN and all s 2 A we have f(j;
Except in Section 3, where we use the PRAM-model, we assume the same
model of computation and representation of patterns as in [1]. Next we define the
update time and the total learning time. Let M be any IIM. For every L 2 PAT
the least number m such that for all r -
the stage of convergence of M on t. By TM (t r ) we
denote the time to compute M (t r ), and we call TM (t r ) the update time of M .
The total learning time taken by the IIM M on successive input t is defined as
Finally, we define learning via queries. The objects to be learned are the elements
of an indexable class L. We assume an indexable class H as well as a
fixed effective enumeration of it as hypothesis space for L. Clearly, H
must comprise L. A hypothesis h describes a target language L iff
source of information about the target L are queries to an oracle. We distinguish
membership, equivalence, subset, superset, and disjointness queries (cf. [3]). Input
to a membership query is a string s, and the output is yes if s 2 L and
no otherwise. For the other queries, the input is an index j and the output is
yes if
query), and (disjointness query), and no otherwise. If the reply is
no, a counterexample is returned, too, i.e., a string s 2 L4h j (the symmetric
difference of L and h j respectively. We
always assume that all queries are answered truthfully.
Definition 2 (Angluin [3]). Let L be any indexable class and let H be a hypothesis
space for it. A learning algorithm exactly identifies a target L 2 L with
respect to H with access to a certain type of queries if it always halts and outputs
an index j such that
Note that the learner is allowed only one hypothesis which must be correct.
The complexity of a query learner is measured by the number of queries to be
asked in the worst-case.
2. An Improved Sequential Algorithm
We present an algorithm computing a descriptive pattern - 2 Pat 1 for a sample
as input. Without loss of generality, we assume
that s 0 is the shortest string in S. Our algorithm runs in time O(n js 0 j log js 0
and is simpler and much faster than Angluin's [1] algorithm, which needs time
log js 0 j). Angluin's [1] algorithm computes explicitly a representation
of the set of all consistent one-variable patterns for S and outputs a descriptive
pattern of maximum possible length. We avoid to find a descriptive pattern of
maximum possible length and can thus work with a polynomial-size subset of all
consistent patterns. Next, we review and establish basic properties needed later.
and only if - can be obtained from - by substituting a pattern % 2 Pat for x.
For a pattern - to be consistent with S, there must be strings ff
A + such that s i can be obtained from - by substituting ff i for x, for all
Given a consistent pattern -, the set fff is denoted by ff(S; -).
Moreover, a sample S is called prefix-free if jSj ? 1 and no string in S is a prefix
of all other strings in S. Note that the distinction between prefix-free samples
and non-prefix free samples does well pay off.
Lemma 2. If S is prefix-free then there exists a descriptive pattern - 2 Pat 1
for S such that at least two strings in ff(S; -) start with a different symbol.
Proof. Let u denote the longest common prefix of all strings in S. The pattern
ux is consistent with S because u is shorter than every string in S, since S is
prefix-free. Consequently, there exists a descriptive pattern - 2 Pat 1 for S with
we know that there exists a pattern % 2 Pat 1
such that longest common prefix of all strings in S, we
can conclude
A. Hence, and at least two
strings in ff(S; ux- ) must start with a different symbol. 2
(1)]g. Cons(S) is a subset of all consistent patterns for S, and
S is not prefix-free.
Lemma 3. Let S be any prefix-free sample. Then Cons(S) 6= ;, and every
of maximum length is descriptive for S.
Proof. Let S be prefix-free. According to Lemma 2 there exists a descriptive
pattern for S belonging to Cons(S); thus Cons(S) 6= ;. Now, suppose there is a
of maximum length which is not descriptive for S. Thus,
and, moreover, there exists a pattern - 2 Pat 1 such that S ' L(- ) as
well as L(- ) ae L(-). Hence, by Lemma 1 we know that - can be obtained from
- by substituting a pattern % for x. Since at least two strings in ff(S; -) start
with a different symbol, we immediately get %(1) 2 X. Moreover, at least two
strings in ff(S; - ) must also start with a different symbol. Hence - 2 Cons(S) and
-. Note that j-j - 1, since otherwise
contradicting L(- ) ae L(-). Finally, by j-j - 1, we may conclude
contradiction to - having maximum length. Thus, no such pattern - can exist,
and hence, - is descriptive. 2
Next, we explain how to handle non-prefix-free samples. The algorithm checks
whether the input sample consists of a single string s. If this happens, it outputs
s and terminates. Otherwise, it tests whether s 0 is a prefix of all other strings
. If this is the case, it outputs ux 2 Pat 1 , where u is the prefix of s 0
of length js terminates. Clearly, S ' L(ux). Suppose there is a pattern
- such that S ' L(- ), and L(- ) ae L(ux). Then Lemma 1 applies, i.e., there is a
% such that
thus, S 6' L(- ). Consequently, ux is descriptive.
Otherwise, jSj - 2 and s 0 is not a prefix of all other strings in S. Thus, by
Lemma 3 it suffices to find and to output a longest pattern in Cons(S).
The improved algorithm for prefix-free samples is based on the fact that
jCons(S)j is bounded by a small polynomial. Let k; l 2 IN, k ? 0; we call
patterns - with # x l occurrences of constants (k; l)-patterns. A
(k; l)-pattern has length k
Thus, there can only be a (k; l)-pattern in Cons(S) if there is an
satisfying js refers to the length of the string substituted
for the occurrences of x in the relevant (k; l)-pattern to obtain s 0 . Therefore,
there are at most bjs 0 j=kc possible values of l for a fixed value of k. Hence, the
number of possible (k; l)-pairs for which (k; l)-patterns in Cons(S) can exist is
bounded by
The algorithm considers all possible (k; l)-pairs in turn. We describe the algorithm
for one specific (k; l)-pair. If there is a (k; l)-pattern - 2 Cons(S), the
of the strings ff i 2 ff(S; -) must satisfy is the
substring of s i of length m i starting at the first position where the input strings
differ. If (js
2 IN for some i, then there is no consistent (k; l)-pattern and
no further computation is performed for this (k; l)-pair. The following lemma
shows that the (k; l)-pattern in Cons(S) is unique, if it exists at all.
Lemma 4. Let be any prefix-free sample. For every given
(k; l)-pair, there is at most one (k; l)-pattern in Cons(S).
The proof of Lemma 4 directly yields Algorithm 1 below. It either returns the
unique (k; l)-pattern - 2 Cons(S) or NIL if there is no (k; l)-pattern in Cons(S).
We assume a subprocedure taking as input a sample S, and returning the longest
common prefix u.
Algorithm 1. On input (k; l), and u do the following:
l and b - k and c - l do
else
l and S ' L(-) then return - else return NIL fi
Note that minor modifications of Algorithm 1 perform the consistency test
even while - is constructed. Putting Lemma 4 and the fact that there
are O(js possible (k; l)-pairs together, we directly obtain:
Lemma 5. prefix-free sample
g.
Using Algorithm 1 as a subroutine, Algorithm 2 below for finding a descriptive
pattern for a prefix-free sample S follows the strategy exemplified above. Thus, it
simply computes all patterns in Cons(S) and outputs one with maximum length.
For inputs of size n the overall complexity of the algorithm is O(n js 0 j log js 0
O(n 2 log n), since at most O(js 0 j log js 0 tests must be performed, which
have time complexity O(n) each.
Algorithm 2. On input do the following:
\Pi do
if there is a (k; js0
Output a maximum-length pattern - 2 P .
Note that the number of (k; l)-pairs to be processed is often smaller than
O(js since the condition (js restricts the possible
values of k if not all strings are of equal length. It is also advantageous to
process the (k; l)-pairs in order of non-increasing k l. Then the algorithm can
terminate as soon as it finds the first consistent pattern. However, the worst-case
complexity is not improved, if the descriptive pattern is x.
Finally, we summarize the main result obtained by the following theorem.
Theorem 1. Using Algorithm 2 as a subroutine, PAT 1 can be learned in the
limit by a set-driven and consistent IIM having update time O(n 2 log n) on input
samples of size n.
3. An Efficient Parallel Algorithm
Whereas the RAM model has been generally accepted as the most suitable
model for developing and analyzing sequential algorithms, such a consensus has
not yet been reached in the area of parallel computing. The PRAM model introduced
in [6], is usually considered an acceptable compromise. A PRAM consists
of a number of processors, each of which has its own local memory and can execute
its local program, and all of which can communicate by exchanging data
through a shared memory. Variants of the PRAM model differ in the constraints
on simultaneous accesses to the same memory location by different processors.
The CREW-PRAM allows concurrent read accesses but no concurrent write ac-
cesses. For ease of presentation, we describe our algorithm for the CREW-PRAM
model. The algorithm can be modified to run on an EREW-PRAM, however, by
the use of standard techniques.
computing descriptive one-variable patterns has been
known previously. Algorithm 2 can be efficiently parallelized by using well-known
techniques including prefix-sums, tree-contraction, and list-ranking as subroutines
(cf. [8]). A parallel algorithm can handle non-prefix-free samples S in the
same way as Algorithm 2. Checking S to be singleton or s 0 to be a prefix of all
other strings requires time O(log n) using O(n= log n) processors. Thus, we may
assume that the input sample is prefix-free. Additionally, we
assume the prefix-test has returned the first position d where the input strings
differ and an index t,
A parallel algorithm can handle all O(js 0 j log js 0 possible (k; l)-pairs in par-
allel. For each (k; l)-pair, our algorithm computes a unique candidate - for the
(k; l)-pattern in Cons(S), if it exists, and checks whether S ' L(-). Again, it
suffices to output any obtained pattern having maximum length. Next, we show
how to efficiently parallelize these two steps.
For a given (k; l)-pair, the algorithm uses only the strings s 0 and s t for calculating
the unique candidate - for the (k; l)-pattern in Cons(S). This reduces
the processor requirements, and a modification of Lemma 4 shows the candidate
pattern to remain unique.
Position j t in s t is said to be b-corresponding to position j 0 in s 0 if
k. The meaning of b-corresponding positions is as follows.
Suppose there is a consistent (k; l)-pattern - for fs such that position j 0
in s 0 corresponds to -(i) 2 A for some i, 1 - i - j-j, and that b occurrences of
x are to the left of -(i). Then -(i) corresponds to position
in s t .
For computing the candidate pattern from s 0 and s t , the algorithm calculates
the entries of an array EQUAL[j; b] of Boolean values first, where j ranges from
1 to js 0 j and b from 0 to k. EQUAL[j; b] is true iff the symbol in position j in s 0
is the same as the symbol in its b-corresponding position in s t . Thus, the array
is defined as follows: EQUAL[j; )). The
array EQUAL has O(kjs 0 entries each of which can be calculated in constant
time. Thus, using O(kjs 0 log n) processors, EQUAL can be computed in time
O(log n). Moreover, a directed graph G that is a forest of binary in-trees, can be
built from EQUAL, and the candidate pattern can be calculated from G using
tree-contraction, prefix-sums and list-ranking. The details are omitted due to
space restrictions. Thus, we can prove:
Lemma 6. Let be a sample, and n its size. Given the
array EQUAL, the unique candidate - for the (k; l)-pattern in Cons(S), or NIL,
if no such pattern exists, can be computed on an EREW-PRAM in time O(logn)
using O(kjs 0 processors.
Now, the algorithm has either discovered that no (k; l)-pattern exists, or it has
obtained a candidate (k; l)-pattern -. In the latter case, it has to test whether
- is consistent with S.
Lemma 7. Given a candidate pattern -, the consistency of - with any sample
S of size n can be checked on in time O(logn) using O(n= log n) processors
on a CREW-PRAM.
Putting it all together, we obtain the following theorem.
Theorem 2. There exists a parallel algorithm computing descriptive one-variable
patterns in time O(logn) using O(js
n) processors on a CREW-PRAM for samples of
size n.
Note that the product of the time and the number of processors of our algorithm
is the same as the time spent by the improved sequential algorithm above
larger, the product exceeds the time of
the sequential algorithm by a factor less than O(js
4. Analyzing the Expected Total Learning Time
Now, we are dealing with the major result of the present paper, i.e., with
the expected total learning time of our sequential learner. Let - be the target
pattern. The total learning time of any algorithm trying to infer - is unbounded
in the worst case, since there are infinitely many strings in L(-) that can mislead
it. However, in the best case two examples, i.e., the two shortest strings -[0=x]
and -[1=x], always suffice for a learner outputting descriptive patterns as guesses.
Hence, we assume that the strings presented to the algorithm are drawn from
L(-) according to a certain probability distribution D and compute the expected
total learning time of our algorithm. The distribution D must satisfy two criteria:
any two strings in L(-) of equal length must have equal probability, and the
expected string length ' must be finite. We refer to such distributions as proper
probability distributions.
We design an Algorithm 1LA inferring a pattern - 2 Pat 1 with expected total
learning time O(' 2 log '). It is advantageous not to calculate a descriptive pattern
each time a new string is read. Instead, Algorithm 1LA reads a certain number
of strings before it starts to perform any computations at all. It waits until the
length of a sample string is smaller than the number of sample strings read so
far and until at least two different sample strings have been read. During these
first two phases, it outputs s 1 , the first sample string, as its guess if all sample
strings read so far are the same, and x otherwise. If - is a constant pattern,
the correct hypothesis is always output and the algorithm never
reaches the third phase. Otherwise, the algorithm uses a modified version of
Algorithm 2 to calculate a set P 0 of candidate patterns when it enters Phase 3.
More precisely, it does not calculate the whole set P 0 at once. Instead, it uses
the function first cand once to obtain a longest pattern in P 0 , and the function
next cand repeatedly to obtain the remaining patterns of P 0 in order of non-increasing
length. This substantially reduces the memory requirements.
The pattern - obtained from calling first cand is used as the current candidate.
Each new string s is then compared to - . If s 2 L(- is output. Otherwise,
next cand is called to obtain a new candidate - 0 . Now, - 0 is the current candidate
and output, independently of s 2 L(- 0 ). If the longest common prefix of
all sample strings including s is shorter than that of all sample strings excluding
s, however, first cand is called again and a new list of candidate patterns is
considered. Thus, Algorithm 1LA may output inconsistent hypotheses.
Algorithm 1LA is shown in Figure 1. Let
defined as follows. If string w,
then fvxg. Otherwise, denote by u the longest common prefix of s and
be the set of patterns computed by Algorithms 1 and 2 above if
we omit the consistency check. Hence, P 0 ' Cons(S), where Cons(S) is defined
as in Section 2. P 0 necessarily contains the pattern - if s; s 0 2 L(-) and if the
longest common prefix of s and s 0 is the same as the longest constant prefix of -.
Assuming t, first cand (s; s 0 ) returns
returns - i+1 . Since we omit the consistency checks,
a call to first cand and all subsequent calls to next cand until either the correct
pattern is found or the prefix changes can be performed in time O(jsj 2 log jsj).
We will show that Algorithm 1LA correctly infers one-variable pattern languages
from text in the limit, and that it correctly infers one-variable pattern
languages from text with probability 1 if the sample strings are drawn from L(-)
according to a proper probability distribution. 4
Theorem 3. Let - be an arbitrary one-variable pattern. Algorithm 1LA correctly
infers - from text in the limit.
4 Note that learning a language L in the limit and learning L from strings that are
drawn from L according to a proper probability distribution are not the same.
r / 0;
repeat r / r string sr ;
until
while string sr ;
f Phase 3 g
s / a shortest string in fs1 ;
prefix of fs1 ;
if string in fs1 ; that is longer than s
else s 0 / a string in fs1 ; that differs from s in position juj
first cand(s;s 0 );
forever do
read string s 00 ;
if u is not a prefix of s 00 then
common prefix of s and s 00 ;
first cand(s;s 0 )
else if s
output hypothesis - ;
od
Fig. 1. Algorithm 1LA
Theorem 4. Let - 2 Pat 1 . If sample strings are drawn from L(-) according
to a proper probability distribution, Algorithm 1LA learns - with probability 1.
Proof. If outputs - after reading the first string
and converges. Otherwise, let is a string of d \Gamma 1 constant
symbols and - 2 Pat 1 [f"g. After Algorithm 1LA has read two strings that differ
in position d, pattern - will be one of the candidates in the set P 0 implicitly
maintained by the algorithm. As each string s r , r ? 1, satisfies s 1 (d) 6= s r (d)
with probability (jAj \Gamma 1)=jAj - 1=2, this event must happen with probability 1.
After that, as long as the current candidate - differs from -, the probability that
the next string read does not belong to L(- ) is at least 1=2 (cf. Lemma 8 below).
Hence, all candidate patterns will be discarded with probability 1 until - is the
current candidate and is output. After that, the algorithm converges. 2
Lemma 8. Let ux- be a one-variable pattern with constant prefix u, and
be arbitrary such that s 0 (juj
Let - 6= - be a pattern from P 0 (fs to generate
a string s drawn from L(-) according to a proper probability distribution with
probability at least (jAj \Gamma 1)=jAj.
Now, we analyze the total expected learning time of Algorithm 1LA. Obvi-
ously, the total expected learning time is O(') if the target pattern - 2 A + .
Hence, we assume in the following that - contains at least one occurrence of x.
Next, we recall the definition of the median, and establish a basic property of
it that is used later. By E(R) we denote the expectation of a random variable R.
Definition 3. Let R be any random variable with range(R) ' IN. The median
of R is the number - 2 range(R) such that Pr(R ! - 1and Pr(R ? -) ! 1Proposition 1. Let R be a random variable with range(R) ' IN + . Then its
median - satisfies - 2E(R).
Lemma 9. Let D be any proper probability distribution, let L be the random
variable taking as values the length of a string drawn from L(-) with respect to
D, and let - be the median of L and let ' be its expectation. Then, the expected
number of steps performed by Algorithm 1LA during Phase 1 is O(-').
Proof. Let L i be the random variable whose value is the length of the i-th string
read by the algorithm. Clearly, the distribution of L i is the same as that of L. Let
R be the random variable whose value is the number of strings Algorithm 1LA
reads in Phase 1. Let L \Sigma := LR be the number of symbols read by
Algorithm 1LA during Phase 1. Let W 1 be the random variable whose value is
the time spent by Algorithm 1LA in Phase 1. Obviously, W
Claim 1. For we have: E(L i
E(L)
(1)
As L i must be at least i provided R ? i, Equation (1) can be proved as follows:
Similarly, it can be shown that E(L r
Furthermore, it is clear that E(L r
Now, we rewrite E(L \Sigma
E(L
E(L \Sigma
r?-
E(L \Sigma
Using as well as Equations (1) and (3), we obtain:
E(L)
For (fi), we use Equations (1) and (2) to obtain:
E(L)
(using
r
under the same assumptions as in Lemma 9, we can estimate the expected
number of steps performed in Phase 2 as follows.
Lemma 10. During Phase 2, the expected number of steps performed by Algorithm
1LA is O(').
Finally, we deal with Phase 3. Again, let L be as in Lemma 9. Then, the
average amount of time spent in Phase 3 can be estimated as follows.
Lemma 11. During Phase 3, the expected number of steps performed in calls
to the functions first cand and next cand is O(- 2 log -).
Lemma 12. During Phase 3, the expected number of steps performed in reading
strings is O(-' log -).
Proof. Denote by W rthe number of steps performed while reading strings in
Phase 3. We make a distinction between strings read before the correct set of
candidate patterns is considered, and strings read afterwards until the end of
Phase 3. The former are accounted for by random variable V 1 , the latter by V 2 .
If the correct set of candidate patterns, i.e., the set containing -, is not yet
considered, the probability that a new string does not force the correct set of
candidate patterns to be considered is at most 1=2. Denote by K the random
variable whose value is the number of strings that are read in Phase 3 before the
correct set of candidate patterns is considered. We have:
Assume that the correct set of candidate patterns P 0 contains M patterns that
are considered before pattern -. For any such pattern - , the probability that a
string drawn from L(-) according to a proper probability distribution is in the
language of - is at most 1=2, because either - has an additional variable or - has
an additional constant symbol (Lemma 8). Denote by V i
2 the steps performed
for reading strings while the i-th pattern in P 0 is considered.
log R), we obtain:
=O(E(L)- log -)
and
O(E(L)r log r)
Hence, we have E(W r
Lemma 13. During Phase 3, the expected number of steps performed in checking
whether the current candidate pattern generates a newly read sample string
is O(-' log -).
Putting it all together, we arrive at the following expected total learning time
required by Algorithm 1LA.
Theorem 5. If the sample strings are drawn from L(-) according to a proper
probability distribution with expected string length ' the expected total learning
time of Algorithm 1LA is O(' 2 log ').
5. Learning with Superset Queries
PAT is not learnable with polynomially many queries if only equivalence,
membership, and subset queries are allowed provided
This result may be easily extended to PAT 1 . However, positive results are also
known. First, PAT is exactly learnable using polynomially many disjointness
queries with respect to the hypothesis space PAT [FIN , where FIN is the set of
all finite languages (cf. [13]). The proof technique easily extends to PAT 1 , too.
Second, Angluin [3] established an algorithm exactly learning PAT with respect
to PAT by asking O(j-j 2 + j-jjAj) many superset queries. However, it requires
choosing general patterns - for asking the queries, and does definitely not work
if the hypothesis space is PAT 1 . Hence, it is natural to ask:
Does there exist a superset query algorithm learning PAT 1 with respect to
PAT 1 that uses only polynomially many superset queries?
Using the results of previous sections, we are able to answer this question af-
firmatively. Nevertheless, whereas PAT can be learned with respect to PAT by
restricted superset queries, i.e., superset queries not returning counterexamples,
our query algorithm needs counterexamples. Interestingly, it does not need a
counterexample for every query answered negatively, instead two counterexamples
always suffice. The next theorem shows that one-variable patterns are not
learnable by a polynomial number of restricted superset queries.
Theorem 6. Any algorithm exactly identifying all L 2 PAT 1 generated by
a pattern - of length n with respect to PAT 1 by using only restricted superset
queries and restricted equivalence queries must make at least jAj n\Gamma2 - 2 n\Gamma2
queries in the worst case.
Furthermore, we can show that learning PAT 1 with a polynomial number of
superset queries is impossible if the algorithm may ask for a single counterexample
only.
Theorem 7. Any algorithm that exactly identifies all one-variable pattern
languages by restricted superset queries and one unrestricted superset query needs
at least 2 queries in the worst case, where k is the length of the
counterexample returned.
The new algorithm QL works as follows (see Figure 2). Assume the algorithm
should learn some pattern -. First QL asks whether L(-) '
holds. This is the case iff if the answer is yes QL knows the
right result. Otherwise, QL obtains a counterexample C(0) 2 L(-). By asking
the answer is no, QL computes
g. Now we know that - starts with C(0)
but what about the i-th position of -? If
asks L(-) ' L(C(0)) to determine if this is the case. If
A, since this would imply that
Now QL uses the counterexample for the query L(-) ' L(C(0) -i x) to construct
a set x)g. By construction, the two counterexamples differ
in their i-th position, but coincide in their first positions.
else
while L(-) ' L(C(0) -i x) do i
else S / fC(0); C(C(0) -i x)g; R / Cons(S);
repeat - / max(R); R / R n f-g until L(-) ' L(-)
Fig. 2. Algorithm QL. This algorithm learns a pattern - by superset queries. The
queries have the form "L(-) ' L(- )," where - 2 Pat 1 is chosen by the algorithm. If the
answer to a query L(-) ' L(-) is no, the algorithm can ask for a counterexample C(- ).
By w -i we denote the prefix of w of length i and by max(R) some maximum-length
element of R.
Algorithm 2 in Section 2 computes
coincides with S in the first Again we narrowed the search
for - to a set R of candidates. Let m be the length of the shortest counterexample
in S. Then log m) by Lemma 5. Now, the only task left is to find -
among all patterns in R. We find - by removing other patterns from R by using
the following lemma.
Lemma 14. Let S ' A
implies - .
QL tests L(-) ' L(- ) for a maximum length pattern - 2 R and removes -
from R if L(-) 6' L(- ). Iterating this process finally yields the longest pattern -
for which L(-) ' L(- ). Lemma 14 guarantees -. Thus, we have:
Theorem 8. Algorithm QL learns PAT 1 with respect to PAT 1 by asking only
superset queries. The query complexity of QL is O(j-j+m log m) many restricted
superset queries plus two superset queries (these are the first two queries answered
no) for every language L(-) 2 PAT 1 , where m is the length of the shortest
counterexample returned.
Thus, the query complexity O(j-j+m log m) of our learner compares well with
that of Angluin's [3] learner (which is O(j-jjAj) when restricted to learn PAT 1 )
using the much more powerful hypothesis space PAT as long as the length of
the shortest counterexample returned is not too large.
Acknowledgements
A substantial part of this work has been done while the second author was visiting
Kyushu University. This visit has been supported by the Japanese Society
for the Promotion of Science under Grant No. 106011.
The fifth author kindly acknowledges the support by the Grant-in-Aid for
Scientific Research (C) from the Japan Ministry of Education, Science, Sports,
and Culture under Grant No. 07680403.
--R
Finding patterns common to a set of strings.
Inductive inference of formal languages from positive data.
Machine Learning
Efficient Learning of One-Variable Pattern Languages from Positive Data
The relation of two patterns with comparable languages.
Parallelism in random access machines.
Language identification in the limit.
An introduction to parallel algorithms.
time inference of general pattern languages.
Inclusion is undecidable for pattern languages.
A polynomial-time algorithm for learning k-variable pattern languages from examples
A note on the two-variable pattern-finding problem
Systems that learn: An introduction to learning theory for cognitive and computer scientists.
Inductive inference
Patterns. EATCS Bulletin
Return to patterns.
Pattern inference.
Formal Principles of Language Acquisition.
Lange and Wiehagen's pattern language learning algorithm: An average-case analysis with respect to its total learning time
A guided tour across the boundaries of learning recursive languages.
--TR
A theory of the learnable
Systems that learn: an introduction to learning theory for cognitive and computer scientists
On the complexity of inductive inference
of pattern languages from examples and queries
A note on the two-variable pattern-finding problem
Deterministic simulation of idealized parallel computers on more realistic ones
Prudence and other conditions on formal language learning
A polynomial-time algorithm for learning <italic>k-</italic>variable pattern languages from examples
Polynomial-time inference of arbitrary pattern languages
Efficient PRAM simulation on a distributed memory machine
An introduction to parallel algorithms
Lange and Wiehagen?s pattern language learning algorithm
Queries and Concept Learning
Polynomial Time Inference of Extended Regular Pattern Languages
Inclusion is Undecidable for Pattern Languages
Polynomial Time Inference of General Pattern Languages
The Relation of Two Patterns with Comparable Languages
Pattern Inference
A Guided Tour Across the Boundaries of Learning Recursive Languages
Inductive Inference of Unbounded Unions of Pattern Languages from Positive Data
Monotonic and Nonmonotonic Inductive Inference of Functions and Patterns
Parallelism in random access machines
--CTR
John Case , Sanjay Jain , Rdiger Reischuk , Frank Stephan , Thomas Zeugmann, Learning a subclass of regular patterns in polynomial time, Theoretical Computer Science, v.364 n.1, p.115-131, 2 November 2006
Thomas Zeugmann, From learning in the limit to stochastic finite learning, Theoretical Computer Science, v.364 n.1, p.77-97, 2 November 2006 | average-cae analysis;inductive learning;parallelization;one-variable pattern languages |
501056 | The Invariants of the Clifford Groups. | The automorphism group of the Barnes-Wall lattice L_m in dimension 2^m(m \neq 3 ) is a subgroup of index 2 in a certain Clifford group \mathcal{C}_m of structure 2_+^{1+2m} . O^+(2m,2). This group and its complex analogue \mathcal{X}_m of structure (2_+^{1+2m}{\sf Y}Z_8) . Sp(2m, 2) have arisen in recent years in connection with the construction of orthogonal spreads, Kerdock sets, packings in Grassmannian spaces, quantum codes, Siegel modular forms and spherical designs. In this paper we give a simpler proof of Runge@apos;s 1996 result that the space of invariants for \mathcal{C}_m of degree 2k is spanned by the complete weight enumerators of the codes C \otimes \Bbb{F}_{2^m}, where C ranges over all binary self-dual codes of length 2k; these are a basis if m \ge k-1. We also give new constructions for L_m and \mathcal{C}_m: let M be the \Bbb{Z}[\sqrt{2}]-lattice with Gram matrix \scriptsize\big[\begin{array}{@{}r@{\quad}r@{}} 2 & \sqrt{2} \\ \sqrt{2} & 2 \end{array} \big]. Then L_m is the rational part of M^{\otimes m}, and (M^{\otimes m} ). Also, if C is a binary self-dual code not generated by vectors of weight 2, then \mathcal{C}_m is precisely the automorphism group of the complete weight enumerator of C \otimes \Bbb{F}_{2^m}. There are analogues of all these results for the complex group \mathcal{X}_m, with doubly-even self-dual code instead of self-dual code. | Introduction
In 1959 Barnes and Wall [2] constructed a family of lattices in dimensions 2
They distinguished two geometrically similar lattices Lm #
. The automorphism
investigated in a series of papers by Bolt, Room and Wall [8], [9],
[50]. Gm is a subgroup of index 2 in a certain group Cm of structure 2 1+2m
We follow Bolt et al. in calling Cm a Cli#ord group. This group and its complex analogue
Xm are the subject of the present paper.
These groups have appeared in several di#erent contexts in recent years. In 1972 Brou-e
and Enguehard [12] rediscovered the Barnes-Wall lattices and also determined their automorphism
groups. In 1995, Calderbank, Cameron, Kantor and Seidel [13] used the Cli#ord
groups to construct orthogonal spreads and Kerdock sets, and asked "is it possible to say
something about [their] Molien series, such as the minimal degree of an invariant?".
Around the same time, Runge [39], [40], [41], [42] (see also [20], [36]) investigated these
groups in connection with Siegel modular forms. Among other things, he established the
remarkable result that the space of homogeneous invariants for Cm of degree 2k is spanned
by the complete weight enumerators of the codes
ranges over all binary
self-dual (or type I) codes of length 2k, and the space of homogeneous invariants for Xm of
degree 8k is spanned by the complete weight enumerators of the codes
ranges over all binary doubly-even self-dual (or type II) codes of length 8k. One of our goals
is to give a simpler proof of these two assertions, not involving Siegel modular forms (see
Theorems 4.9 and 6.2).
Around 1996, the Cli#ord groups also appeared in the study of fault-tolerant quantum
computation and the construction of quantum error-correcting codes [4], [15], [16], [29], and
in the construction of optimal packings in Grassmannian spaces [14], [17], [44]. The story of
the astonishing coincidence (involving the group C 3 ) that led to [14], [15] and [16] is told in
[16]. (Other recent references that mention these groups are [23], [30], [51].)
Independently, and slightly later, Sidelnikov [45], [46], [47], [48] (see also [28]) came across
the group Cm when studying spherical codes and designs. In particular, he showed that for
the lowest degree harmonic invariant of Cm has degree 8, and hence that the orbit
under Cm of any point on a sphere in R 2 m
is a spherical 7-design. (Venkov [49] had earlier
shown that for m # 3 the minimal vectors of the Barnes-Wall lattices form 7-designs.)
In fact it is an immediate consequence of Runge's results that for m # 3 Cm has a unique
harmonic invariant of degree 8 and no such invariant of degree 10 (see Corollary 4.13). The
space of homogeneous invariants of degree 8 is spanned by the fourth power of the quadratic
form and the complete weight enumerator of the code H
is the [8, 4, 4]
Hamming code. An explicit formula for this complete weight enumerator is given in Theorem
4.14.
Our proof of the real version of Runge's theorem is given in Section 4 (Theorem 4.9),
following two preliminary sections dealing with the group Cm and with generalized weight
enumerators.
In Section 5 we study the connection between the group Cm and the Barnes-Wall lat-
tices. We define the balanced Barnes-Wall lattice Mm to be the Z[ # 2]-lattice #
(Lemma 5.2), which leads to a simple construction: the Barnes-Wall
lattice is just the rational part of
1 . Furthermore
More precisely,
Also, if C is any binary self-dual code that is not generated by vectors of weight 2,
(Corollary 5.7). The proof of this makes use of the fact that Cm is a
maximal finite subgroup of GL(2 m , R) (Theorem 5.6). Although there are partial results
about the maximality of Cm in Kleidman and Liebeck [30], this result appears to be new.
The proof does not use the classification of finite simple groups.
The analogous results for the complex Cli#ord group Xm are given in Section 6. Theorem
6.2 is Runge's theorem. Extending scalars, let M m be the hermitian Z[# 8 ]-lattice Z[# 8
Mm . Then Xm is the subgroup of U(2 m , Q [# 8 ]) preserving M m (Proposition 6.4). Theorem
6.5 shows that, apart from the center, Xm is a maximal finite subgroup of U(2 m , C ), and
Corollary 6.6 is the analogue of Corollary 5.7.
Bolt et al. [8], [9], [10], [50] and Sidelnikov [45], [46], [47] also consider the group C (p)
obtained by replacing 2 in the definition of Cm by an odd prime p. In the final section we
give some analogous results for this group.
In recent years many other kinds of self-dual codes have been studied by a number of
authors. Nine such families were named and surveyed in [38]. In a sequel [35] to the present
paper we will give a general definition of the "type" of a self-dual code which includes all
these families as well as other self-dual codes over rings and modules. For each "type" we
investigate the structure of the associated "Cli#ord-Weil group" (analogous to Cm and Xm
for types I and II) and its ring of invariants.
The results in this paper and in Part II can be regarded as providing a general setting
for Gleason's theorems [24], [32], [38] about the weight enumerator of a binary self-dual code
(cf. the case Theorem 4.9), a doubly-even binary self-dual code (cf. the case
Theorem 6.2) and a self-dual code over F p (cf. the case Theorem 7.1).
They are also a kind of discrete analogue of a long series of theorems going back to Eichler
(see for example [7], [39], [40], [42]), stating that under certain conditions theta series of
quadratic forms are bases for spaces of modular forms: here complete weight enumerators of
generalized self-dual codes are bases for spaces of invariants of "Cli#ord-Weil groups".
2 The real Cli#ord group Cm
This initial section defines the real Cli#ord group Cm . The extraspecial 2-group E(m)
is a subgroup of the orthogonal group O(2 m , R). If
is the automorphism group of the 2-dimensional standard lattice. In general E(m) is the
m-fold tensor power of E(1):
E(m) :=
E(1)# -
and is generated by the tensor products of # 1 and # 2 with 2 - 2 identity matrices I 2 .
Definition 2.1 The real Cli#ord group Cm is the normalizer in O(2 m , R) of the extraspecial
2-group E(m).
The natural representation of E(m) is absolutely irreducible. So the centralizer of E(m)
in the full orthogonal group is equal to { - I 2 m}, which is the center of E(m). Then Cm/E(m)
embeds into the outer automorphism group of E(m). The quotient group E(m)/Z(E(m)) is
isomorphic to a 2m-dimensional vector space over F 2 . Since every outer automorphism has
to respect the {+1, -1}-valued quadratic form
it follows easily that the outer automorphism group of E(m) is isomorphic to O + (2m, 2), the
full orthogonal group of a quadratic form of Witt defect 0 over F 2 (see e.g. [51]).
Since the group 2 1+2m
is a subgroup of O(2 m , R) (cf. [10] or the explicit
construction below), we find that Cm
2). The order of Cm is
1).
To perform explicit calculations we need a convenient set of generators for Cm .
Theorem 2.2 Cm is generated by the following elements
(1) diag((-1) q(v)+a ), where q ranges over all {0, 1}-valued quadratic forms on F m
2 and
a # {0, 1},
(2) AGL(m, 2), acting on R 2 m
by permuting the basis vectors in F m
and
(3) the single matrix
h# I
# .
Proof. Let H be the group generated by the elements in (1) and (2). First, H contains the
extraspecial group E(m), since #
are in H and their images under
GL(m,
To see that H/E(m) is a maximal parabolic subgroup of O note that by [13]
the elements a # GL(m, 2) act on E(m)/Z(E(m))
2 as # a 0
0 a -tr # , and the elements
# , where b is the skew-symmetric matrix corresponding to the
bilinear form b q (x, y) := q(x
Since
is not in H, the group generated by H and this element is
Cm .
Corollary 2.3 Cm is generated by
1# I
2# I
h# I
where # is the particular quadratic form
3 Full weight enumerators and complete weight enumerator
We now introduce certain weight enumerators and show that they are invariant under the
real Cli#ord group. Let C # F N
2 be a linear code # of length N over the field F 2 . For m # N
let C(m) :=
be the extension of C to a code over the field with 2 m elements.
Let V be the group algebra V := Regarding
2 as an
m-dimensional vector space over F 2 , we have a tensor decomposition
(R 2 ).
In the same manner the group algebra embeds naturally into the
group algebra
(R 2 )).
Definition 3.1 The full weight enumerator of C(m) is the element
(This was called a generalized weight polynomial in [24] and an exact enumerator in [32,
Chapter 5].)
Fix a basis (a 1 , . , am ) of F m
. Then a codeword c # C(m) is just an m-tuple of
codewords in C. The element
corresponds to the m-tuple
which can also be regarded as an m-N-matrix M of which the rows are the elements
Lemma 3.2 Let
c1 ,. ,c m#C
e c
# m# N
(R
Then the
induced by identifying an m-tuple
with the codeword c :=
Proof. Let
C. The generator e c of R[C(m)] is
x
(R 2 )),
has a basis y 0 , y 1 . Under the identification above this element is mapped
onto
(y # (1)
(R 2 )),
which is the element e c #
# A binary linear code C of length N is a subspace of F N
2 . If C # C # , C is self-orthogonal; if
is self-dual [32], [38].
Definition 3.3 (Cf. [32, Chapter 5].) The complete weight enumerator of C(m) is the
following homogeneous polynomial of degree N in 2 m variables:
x a f (c)
where a f (c) is the number of components of c that are equal to f .
Remark 3.4 The complete weight enumerator of C(m) is the projection under # of the full
weight enumerator of C(m) to the symmetric power Sym N (V ), where #
is the R-linear mapping defined by x f
Theorem 3.5 Let C be a self-dual code over F 2 .
(i) The Cli#ord group Cm preserves the full weight enumerator fwe(C(m)).
(ii) The Cli#ord group Cm preserves the complete weight enumerator cwe(C(m)).
Proof. Let N be the length of C, which is necessarily even. Then Cm acts on R[F N
diagonally. This action commutes with the projection #
statement (ii) follows immediately from (i) by Remark 3.4. To prove (i) it is enough to
consider the generators of Cm .
The generators #
1# I
2# I
-# I 2 and
h# I
I 2 of Corollary 2.3
are tensor products of the form
x# I 2 m-1 . By Lemma 3.2 it is therefore enough to consider
the case generators. But then the matrix # 1 acts as #
mapping a codeword is the
all-ones vector. Since C is self-dual, 1 is in C and therefore # 1 only permutes the codewords
and hence fixes fwe(C). Analogously, the matrix # 2 changes signs of the components of the
codewords in the full weight enumerator: if is mapped to (-1) c i x c i .
Since the codewords in C have even weight, the tensor product x c
is fixed by
That h preserves the full weight enumerator follows from the MacWilliams
identity [32, Chapter 5, Theorem 14].
The generator d := diag((-1) #(v)
only occurs for m # 2. By
Lemma 3.2 it su#ces to consider the case 2. Again by Lemma 3.2, we regard d as acting
on pairs (c, c # ) of codewords in C. Then d fixes or negates (x c
and negates it if and only if c and c # intersect in an odd number of 1's. This is impossible
since C is self-dual, and so d also preserves fwe(C(m)).
The remaining generators in g # GL(m, 2) permute the elements of F 2 m . The codewords
c # C(m) are precisely the elements of the form
a fixed F 2 -basis for acts linearly on F m
mapping a i onto
, the word
c is mapped to
which again is in C(m). Hence these generators also fix
fwe(C(m)).
4 The ring of invariants of Cm
In this section we establish Runge's theorem that the complete weight enumerators of the
codes C(m) generate the space of invariants for Cm .
Definition 4.1 A polynomial p in 2 m variables is called a Cli#ord invariant of genus m if it
is an invariant for the real Cli#ord group Cm . Furthermore, p is called a parabolic invariant
if it is invariant under the parabolic subgroup P generated by the elements of type (1) and
(2) of Theorem 2.2, and a diagonal invariant if it is invariant under the group generated by
the elements of type (1).
The following is obvious:
Lemma 4.2 A polynomial p is a diagonal invariant if and only if all of its monomials are
diagonal invariants.
Let M be an m - N matrix over F 2 . We can associate a monic monomial -M # R[x f |
with such a matrix by taking the product of the variables associated with its
columns. Clearly all monic monomials are of this form, and two matrices correspond to the
same monic monomial if and only if there is a column permutation taking one to the other.
Lemma 4.3 A monic monomial -M is a diagonal invariant if and only if the rows of M
are orthogonal.
Proof. It su#ces to consider quadratic
m); we easily check that the action of diag((-1) q ij ) is to multiply -M by (-1) k , where k is
the inner product of rows i and j of M ; the lemma follows.
For
maps -M onto -M+b , where the matrix M + b has entries (M This implies
that -M is equivalent to -M # under the action of AGL(m, 2) if and only if the binary codes
#M, 1# and #M # , 1# are equivalent. We can thus define a parabolic invariant -m (C) for any
self-orthogonal code C containing 1 and of dimension at most m
-M .
We define -m (C) to be 0 if 1 # C or dim(C) > m+ 1. Since the invariants -m (C) are sums
over orbits, we have:
Lemma 4.4 A basis for the space of parabolic invariants of degree N is given by polynomials
of the form -m (C) where C ranges over the equivalence classes of binary self-orthogonal codes
of length N containing 1 and of dimension
Lemma 4.5 For any binary self-orthogonal code C containing 1,
-m (D).
Proof. From the definition,
where M ranges over m-N matrices with all rows in C. Let M be such a matrix. Then M
uniquely determines a subcode D := #M, 1# of C; we thus have
-m (D)
as required.
Theorem 4.6 A basis for the space of parabolic invariants is given by the polynomials
cwe(C(m)), where C ranges over equivalence classes of self-orthogonal codes containing 1
and of dimension
Proof. The equations in Lemma 4.5 form a triangular system which we can solve for the
polynomials -m (C). In particular, -m (C) is a linear combination of the cwe(D(m)) for
subcodes 1 # D # C.
denote the linear transformation
where P is the parabolic subgroup of Cm ; that is, X P is the operation of averaging over the
parabolic subgroup.
Lemma 4.7 For any binary self-orthogonal code C of even length N containing 1 and of
dimension N/2 - r,
The final sum is over all self-orthogonal codes C # containing C to index 2.
Proof. By the MacWilliams identity, we find that
where M ranges over m-N matrices such that the first row of M is in C # and the remaining
rows are in C. For each code 1 # D # C # , consider the partial sum over the terms with
D. If D # C, the partial sum is just -m (D), so in particular is a parabolic
invariant. The other possibility is that [D : D # 2. For a matrix M with #M,
define a vector v M # F m
2 such that (v M the ith row of M is in C, and (v M
otherwise. In particular, the partial sum we are considering is
-M .
If D is not self-orthogonal then this sum is annihilated by averaging over the diagonal
subgroup. Similarly, if we apply an element of AGLm (2) to this sum, this simply has the
e#ect of changing v M . Thus, when D # D # ,
-m (D).
Hence
-m (D),
where the sums are restricted to self-orthogonal codes D. Introducing a variable C #D, C#
into the second sum (note that since D # C # , C # C # precisely when D # D # ), this
becomes
-m (D).
Any given C # will, of course, contain each subcode of C exactly once, so we can remove the
condition
-m (D)
-m (D)
as required.
Lemma 4.8 Let V be a finite dimensional vector space, M a linear transformation on V ,
and P a partially ordered set. Suppose there exists a spanning set v p of V indexed by p # P
on which M acts triangularly; that is,
for suitable coe#cients c pq . Suppose furthermore that c only if p is maximal in
. Then the fixed subspace of M in V is spanned by the elements
Proof. Since the matrix is triangular, there exists another triangular matrix D
that conjugates C into Jordan canonical form. Setting
(d pp #= 0), we find
with c # su#ciently large n. In other words, each w p is
in the Jordan block of M with eigenvalue c pp . But the vectors w
the Jordan blocks of M on V are spanned by the corresponding Jordan blocks of C. In
particular, this is true for the block corresponding to 1.
Theorem 4.9 (Runge [42].) Fix integers k and m # 1. The space of homogeneous invariants
of degree 2k for the Cli#ord group Cm of genus m is spanned by cwe(C(m)), where C
ranges over all binary self-dual codes of length 2k; this is a basis if m # k - 1.
Proof. Let p be a parabolic invariant. If p is a Cli#ord invariant then
By Lemma 4.7, the operator X P
acts triangularly on the vectors cwem (C) (ordered
by inclusion); since
the hypotheses of Lemma 4.8 are satisfied. The first claim then follows by Lemma 4.8 and
Theorem 3.5. Linear independence for m # k - 1 follows from Lemma 4.4.
In fact a stronger result holds:
Theorem 4.10 For any binary self-orthogonal code C of even length N containing 1 and
of dimension N/2 - r,|C m | #
g#Cm
1#i#r
where the sum on the right is over all self-dual codes C # containing C.
To see that this is indeed stronger than Theorem 4.9, we observe that if p is an invariant
for
g#Cm
Since the space of parabolic invariants contains the space of invariants, the same is true of
the span of|C m | #
g#Cm
ranges over the parabolic invariants. By Theorem 4.10 each of these can be written
as a linear combination of complete weight enumerators of self-dual codes, and thus Theorem
4.9 follows.
Proof. For any self-orthogonal code C, let
g#Cm
Averaging both sides of equation (1) in Lemma 4.7 over Cm , we find
and solving for Em (C) gives
By induction on r (observing that the result follows from Theorem 3.5 when
1#i#r
cwem (C # ).
But each code C # is counted 2 r
times (corresponding to the 1-dimensional subspaces of
thus eliminating the sum over C # gives the desired result.
Note that
This gives a surjective map from the space of genus m complete weight enumerators to the
space of genus m- 1 complete weight enumerators. By Theorem 4.9 it follows that this also
gives a surjective map from the genus m invariants to the genus m- 1 invariants. (Runge's
proof of Theorem 4.9 proceeds by first showing this map is surjective, using Siegel modular
forms, and then arguing that this implies Theorem 4.9.) Since by Theorem 4.6 the parabolic
invariants of degree N become linearly independent when m # N
Corollary 4.11 Let #m (t) be the Molien series of the Cli#ord group of genus m. As m
tends to infinity, the series #m (t) tend monotonically to
where N 2k is the number of equivalence classes of self-dual codes of length 2k.
(For the definition of Molien series, see for example [5] or [32, Chapter 19].)
Explicit calculations for
Corollary 4.12 The initial terms of the Molien series of the Cli#ord group of genus m # 1
are given by
where the next term is 2t 12 for
showed that the lowest degree of a harmonic invariant of Cm is 8.
Inspection of the above Molien series gives the following stronger result.
Corollary 4.13 The smallest degree of a harmonic invariant of Cm is 8, and there is a
unique harmonic invariant of degree 8. There are no harmonic invariants of degree 10.
The two-dimensional space of homogeneous invariants for Cm of degree 8 is spanned by
the fourth power of the quadratic form and by hm := cwe(H
is the
[8, 4, 4] binary Hamming code. We can give hm explicitly.
Theorem 4.14 Let G(m, denote the set of k-dimensional subspaces of F m
2 . Then
v#d+U
v#d+U
v#d+U
x v . (2)
The second term on the right-hand side is equal to 14
runs
through unordered pairs of elements of F m
. The total number of terms is
Proof. We will compute cwe(H
(which is equal to cwe(H
defined by the generator matrix
A codeword corresponds to a choice of (a, b, c, d) # F m
, one for each row; from the columns
of the generator matrix we find that the corresponding term of the weight enumerator is
x d x c+d x b+d x b+c+d x a+d x a+c+d x a+b+d x a+b+c+d .
This depends only on the a#ne space #a, b, c# d. The four terms on the right-hand side
of Eq. (2) correspond to dim#a, b, the coe#cients are the number of ways of
choosing d) for a given a#ne space. If dim#a, b, for example, there are 7 - 6 - 4
ways to choose a, b, c and 8 ways to choose d, giving the coe#cient 8 - 7 - 6 -
Remarks
(1) The unique harmonic invariant of degree 8 integrates to zero over the sphere, and
so must have zeros on the sphere. The orbit of any such point under Cm therefore forms a
spherical 11-design, cf. [25]. This was already observed by Sidelnikov [48].
(2) The case dihedral group of order 16 with Molien series 1/(1-# 2 )(1-# 8 ),
as in Gleason's theorem on the weight enumerators of binary self-dual codes [24], [32, Problem
3, p. 602], [38].
(3) The case 2: C 2 has order 2304 and Molien series
(The reflection group [3, 4, 3], No. 28 on the Shephard-Todd list, cf. [5, p. 199], is a subgroup
of C 2 of index 2.) The unique harmonic invariants f 8 and f 12 (say) of degrees 8 and 12 are
easily computed, and then one can find real points
and f 12 vanish. Any orbit of such a point under C 2 forms a spherical 15-design of size 2304
(cf. [25]). We conjecture that such points exists for all m # 2.
(4) The group C 3 of order 5160960 has appeared in su#ciently many di#erent contexts
that it is worth placing its Molien series on record. It is p(#)/q(#), where p(#) is the
symmetric polynomial of degree 154 beginning
and
(5) For completeness, we mention that the Molien series for E(1) is 1
basic invariants
1 and x 2
1 . For arbitrary m the Molien series for E(m) is2n 2
5 Real Clifford groups and Barnes-Wall lattices
In a series of papers [2], [8], [9], [10], [50], Barnes, Bolt, Room and Wall investigated a
family of lattices in R^{2^m} (cf. also [12], [18]). They distinguish two geometrically similar
lattices Lm ⊆ L'm in each dimension 2^m, for which if m ≠ 3 the automorphism groups
are subgroups Gm of index 2 in the real Clifford group Cm. When m = 3 the two lattices
are two versions of the root lattice E8, and G3 has
index 270 in Aut(L3) and index 2 in C3.
The lattices Lm and L'm can be defined in terms of an orthonormal basis b_0, . . . , b_{2^m - 1}
of R^{2^m} as follows. Let V := F_2^m and index the basis elements b_v
by the elements v of V. For each affine subspace U ⊆ V let χ_U ∈ R^{2^m}
correspond to the characteristic function of U, i.e. the coefficient of b_v in χ_U is 1 if v
corresponds to an element of U and 0 otherwise. Then Lm (resp. L'm) is
spanned by the set
{2^{⌊(m-d+ε)/2⌋} χ_U | 0 ≤ d ≤ m, U is a d-dimensional affine subspace of V } ,
where ε = 1 (resp. ε = 0).
Extending scalars, we define the Z[√2]-lattice Mm,
which we call the balanced Barnes-Wall lattice.
From the generating sets for Lm and L'm we have:
Remark 5.1 Mm is generated by the vectors √2^{m-d} χ_U , where U runs
through the affine subspaces of V of dimension d.
Lemma 5.2 For all m > 1, the lattice Mm is a tensor product: Mm ≅ M1 ⊗_{Z[√2]} M_{m-1}.
Proof. Write
as the direct sum of an (m - 1)-dimensional vector
space Vm-1 and a 1-dimensional space arrange the basis vectors so that
correspond to the elements in Vm-1 and b 2 m-1 , . ,
-1 to the elements in
# U be a generator for Mm , where d-dimensional linear
subspace U 0 of V and a
If U 0 # Vm-1 , then
m-d
m-1-d
Otherwise Um-1 := U 0 # Vm-1 has dimension d - 1 and U
some v m-1 # Vm-1 . If v m-1 # Um-1 , then
m-d
If v m-1 # Um-1 we have the identity
Hence Mm #
. The other inclusion follows more easily by similar arguments.
In view of Lemma 5.2, we have the following simple and apparently new construction for
the Barnes-Wall lattice Lm. Namely, Lm is the rational part of the Z[√2]-lattice M_1^{⊗m}, where
M_1 is the Z[√2]-lattice with Gram matrix
( 2    √2 )
( √2    2 ) . For more about this construction
see [34].
Proposition 5.3 For all m # 1, the automorphism group Aut(Mm ) (the subgroup of the
orthogonal group O(2 m , R) that preserves Mm ) is isomorphic to Cm .
Proof. Let (v 1 , . , v 2 m) be a Z-basis for
is a Z-basis for Lm . Then m) is a Z[ # 2]-basis for
Hence Mm has a Z-basis ( # 2v 1 , . , #
Since the scalar products of the v i are integral, the Z-lattice Mm with respect to 1the
trace form of the Z[
# 2]-valued standard form on Mm is isometric to # 2L # m # Lm . In
particular, the automorphism group of the Z[ # 2]-lattice Mm is the subgroup of Aut( # 2Lm #
that commutes with the multiplication by # 2. Hence Aut(Mm ) contains
a subgroup of index at most two. Since
by Lemma 5.2, [Aut(Mm
Lemma 5.4 If m ≥ 2, then the Z-span (denoted Z[Cm]) of the matrices in Cm acting on the
2^m-dimensional Z[√2]-lattice Mm is the full matrix ring Z[√2]^{2^m × 2^m}.
Proof. We proceed by induction on m. Explicit calculations show that the lemma is true
by induction Z[C m-2
and
the automorphism group of Mm
contains C
2# Cm-2 . Hence
We now proceed to show that for m ≥ 2 the real Clifford group Cm is a maximal finite
subgroup of GL(2^m, R). For the investigation of possible normal subgroups of finite groups
containing Cm, the notion of a primitive matrix group plays a central role. A matrix group
G ≤ GL(V) is called imprimitive if there is a nontrivial decomposition V = V_1 ⊕ · · · ⊕ V_k
of V into subspaces which are permuted under the action of G. G is called primitive if it is
not imprimitive. If N is a normal subgroup of G then G permutes the isotypic components
of V |N . So if G is primitive, the restriction of V to N is isotypic, i.e. is a multiple of an
irreducible representation. In particular, since the image of an irreducible representation of
an abelian group N is cyclic, all abelian normal subgroups of G are cyclic.
Lemma 5.5 Let G be a finite group with Cm ≤ G ≤ GL(2^m, R) and let p be
a prime. If p is odd, the maximal normal p-subgroup of G is trivial. The maximal normal
2-subgroup of G is either E(m) if
Proof. We first observe that the only nontrivial normal subgroup of Cm that is properly
contained in E(m) is Therefore, if U is a normal subgroup of G,
U # E(m) is one of 1, Z(E(m)) or E(m).
The matrix group Cm and hence also G is primitive. In particular, all abelian normal
subgroups of G are cyclic. Let p > 2 be a rational prime and U E G a normal p-subgroup
of G. The degree of the absolutely irreducible representations of U that occur in R 2 m
|U is
a power of p and divides 2 m . So this degree is 1 and U is abelian, hence cyclic by the
primitivity of G. Therefore the automorphism group of U does not contain E(m)/Z(E(m)).
Since CG (U)#E(m) is a normal subgroup of Cm , it equals E(m) and hence E(m) centralizes
U . Since E(m) is already absolutely irreducible, U consists of scalar matrices in GL(2 m , R),
and therefore U = 1. If because Cm is the largest
finite subgroup of GL(2 m , R) that normalizes E(m). Since the normal 2-subgroups of G do
not contain an abelian noncyclic characteristic subgroup, the possible normal 2-subgroups
are classified in a theorem of P. Hall (cf. [27, p. 357]). In particular they do not contain
Cm /Z(E(m)) as a subgroup of their automorphism groups, so again U commutes with E(m),
and therefore consists only of scalar matrices.
Theorem 5.6 Let m ≥ 2. Then the real Clifford group Cm is a maximal finite subgroup of GL(2^m, R).
Proof. Let G be a finite subgroup of GL(2 m , R) that properly contains Cm . By Lemma
5.5, all normal p-subgroups of G are central. By a theorem of Brauer, every representation
of a finite group is realizable over a cyclotomic number field (cf. [43, -12.3]). In fact, since
the natural representation of G is real, it is even true that G is conjugate to a subgroup of
totally real abelian number field K containing
5.6]). Let K be a minimal such field and assume that G # GL(2 m , K). Let R be the
ring of integers of K. Then G fixes an RCm -lattice. By Lemma 5.4 all RCm -lattices are of
the form
I# Z[ # 2] Mm for some fractional ideal I of R, the group G fixes all RCm -lattices
and hence also
R# Z[ # 2] Mm . So any choice of an R-basis for Mm gives rise to an embedding
by which we may regard G as a group of matrices. Without loss of generality
we may assume that
Aut(R# Z[ # 2] Mm ). Then the Galois group # := Gal(K/Q [ # 2])
acts on G by acting componentwise on the matrices. Seeking a contradiction, we assume
It is enough to show that there is a nontrivial element # that acts trivially
on G, because then the matrices in G have their entries in the fixed field of #, contradicting
the minimality of K.
Assume first that there is an odd prime p ramified in K/Q , and let # be a prime ideal
of R that lies over p. Then p is also ramified in K/Q [ # 2] and therefore the action of the
ramification group, the stabilizer in # of #, on R/# is not faithful, hence the first inertia
group
is nontrivial (see e.g. [22, Corollary III.4.2]). Since G # := {g # G | g # I 2 m (mod #)} is
a normal p-subgroup of G, G Therefore all the elements in # act
trivially on G, which is what we were seeking to prove.
2 is the only ramified prime in K, which implies that
a ] for some a # 3,
and we are done. So
assume a > 3 and let # be the prime ideal of R over 2 (generated by (1-# 2 a)(1-1
2 a )) and let
# be the Galois automorphism defined by # 2 a+#
2 a +# -2 a-1
2 a ).
# and
Therefore # 2# . Since the subgroup G 2# := {g # G | g # I 2 m (mod 2#)} of G is trivial
(cf. [3, Hilfssatz 1]) one concludes that # acts trivially on G, and thus G is in fact defined
over
The theorem follows by induction.
Corollary 5.7 Let m ≥ 1 and let C be a self-dual code over F_2 that is not generated by
vectors of weight 2. Then Aut_{O(2^m, R)}(cwe(C(m))) = Cm.
Proof. The proof for the case m = 1 will be postponed to Section 6. Assume m ≥ 2.
We first show that the parabolic subgroup H # Cm acts irreducibly on the Lie algebra
R)), the set of real 2 m
. The group AGL(m, 2)
acts 2-transitively on our standard basis b 0 , . ,
. A basis for
is given by the matrices b ij := b
- 1. Since AGL(m, 2)
acts transitively on the b ij , a basis for the endomorphism ring End
is given by the orbits of the stabilizer of b 01 . Representatives for these orbits are b 01 , b 02 ,
b 23 and b 24 . But the generator corresponding to the quadratic form q(v 1 , . , therefore does not commute with the endomorphism
corresponding to b 02 or b 24 . Similarly the endomorphism corresponding to b 23 is ruled out
by q(v 1 , . ,
Let G := Aut_{O(2^m, R)}(cwe(C(m))). Then G is a closed subgroup of O(2^m, R) and hence is a
Lie group (cf. [37, Theorem 3.4]). Since G contains Cm it acts irreducibly on Lie(O(2 m , R)).
Assume that G #= Cm . Then G is infinite by Theorem 5.6 and therefore G contains SO(2 m , R).
However, the ring of invariants of SO(2 m , R) is generated by the quadratic form
The only binary self-dual codes C that produce such complete weight enumerators are direct
sums of copies of the code {00, 11}.
6 The complex Clifford groups and doubly-even codes
There are analogues for the complex Clifford group Xm of most of the above results. (Z_a
will denote a cyclic group of order a.)
Definition 6.1 The complex Clifford group Xm is the normalizer in U(2^m, Q[ζ_8]) of the central
product E(m)YZ_4.
As in the real case, one concludes that
(cf. [33, Cor. 8.4]).
The analogue of Theorem 4.9 is the following, which can be proved in essentially the
same way.
Theorem 6.2 (Runge [42].) Fix integers N and m ≥ 1. The space of homogeneous invariants
of degree N for the complex Clifford group Xm is spanned by cwe(C(m)), where C
ranges over all binary doubly-even self-dual codes of length N. (In particular, when N is not
a multiple of 8, the invariant space is empty.)
The analogues of Theorem 4.10 and Proposition 5.3 are:
Theorem 6.3 For any doubly-even binary code C of length N # 0(8) containing 1 and of
dimension
g#Xm
0#i<r
where the sum is over all doubly-even self-dual codes C # containing C.
Proposition 6.4 Let M_m := Z[ζ_8] ⊗_{Z[√2]} Mm. Then the subgroup of U(2^m, Q[ζ_8]) preserving
M_m is precisely Xm.
We omit the proofs.
For the analogue of Lemma 5.4, observe that the matrices in Xm generate a maximal order.
Even for m = 1, the Z-span of the matrices in X_1 acting on M_1 is the maximal order Z[ζ_8]^{2×2}.
Hence the induction argument used to prove Lemma 5.4 shows that Z[Xm] = Z[ζ_8]^{2^m × 2^m}.
Therefore the analogue of Theorem 5.6 holds even for m = 1:
Theorem 6.5 Let m ≥ 1 and let G be a finite group such that Xm ≤ G ≤ U(2^m, C). Then
there exists a root of unity ζ such that G = ⟨ζ I⟩ · Xm.
Proof. As in the proof of Theorem 5.6, we may assume that G is contained in U(2 m , K)
for some abelian number field K containing # 8 . Let R be the ring of integers in K and T the
group of roots of unity in R. Then TXm is the normalizer in U(2 m , K) of TE(m) (cf. [33,
Cor. 8.4]). As before, the RXm -lattices in the natural module are of the form
where I is a fractional ideal of R. Since G fixes one of these lattices, it also fixes
As in the proof of Theorem 5.6, we write the elements of G as matrices with respect to a
basis for M m and assume that G is the full (unitary) automorphism group of
Then the Galois group # := Gal(K/Q [# 8 ]) acts on G. Assume that G #= TXm . Then TE(m)
is not normal in G. As in Lemma 5.5 one shows that the maximal normal p-subgroup of
G is central for all primes p. Let # be a prime ideal in R that ramifies in K/Q [# 8 ], and
let # be an element of the inertia group # . Then for all G, the image g # satisfies
is a normal p-subgroup,
where p is the rational prime divisible by #, it is central. Therefore the map g # a(g) is a
homomorphism of G into an abelian group, and hence the commutator subgroup G # is fixed
under #. Since any abelian extension K of Q that properly contains Q is ramified at some
finite prime of Q [# 8 ], we conclude that G # Aut(M m ). Since E(m)YZ 8
characteristic in Aut(M m ) and therefore also in G # YZ 8 , the group TE(m) is normal in G,
which is a contradiction.
Corollary 6.6 Assume m # 1 and let C be a binary self-dual doubly-even code of length N .
Then
Remarks
(1) The case m = 1: X_1 is the unitary reflection group (No. 9 on the Shephard-Todd list)
of order 192 with Molien series 1/((1 - λ^8)(1 - λ^24)), as in Gleason's theorem on the weight
enumerators of doubly-even binary self-dual codes [24], [32, p. 602, Theorem 3c], [38].
(2) The case m = 2: X_2 has order 92160 and Molien series
This has a reflection subgroup of index 2, No. 31 on the Shephard-Todd list.
(3) The case m = 3: X_3 has order 743178240, and the Molien series can be written as p(λ)/q(λ), where p(λ)
is the symmetric polynomial of degree 44 beginning
and
Runge [40] gives the Molien series for the commutator subgroup H
3 , of index 2 in
. The Molien series for X 3 consists of the terms in the series for H 3 that have exponents
divisible by 4. Oura [36] has computed the Molien series for H
4 , and that for X 4 can
be obtained from it in the same way. Other related Molien series can be found in [1].
Proof of Corollary 5.7, case m = 1.
Let C be a self-dual binary code of length n with Hamming weight enumerator hwe C (x, y).
We will show that if C is not generated by vectors of weight 2 then Aut_{O(2)}(hwe_C) = C_1.
Certainly G := Aut_{O(2)}(hwe_C) contains C_1 ≅ D_16; we must show it is no larger. The only
closed subgroups of O(2) containing D_16 are the dihedral groups D_16k for k ≥ 1 and O(2)
itself. So if the result is false then G contains a rotation by an angle θ,
where θ is not a multiple of π/4.
Consider the shadow S(C) of C [38]; that is, the set of vectors v ∈ F_2^n
such that v · c ≡ wt(c)/2 (mod 2) for all c ∈ C.
The weight enumerator of S(C) is given by S(x, y) = 2^{-n/2} hwe_C(x + y, i(x - y)). Then
G if and only if S(x, or in other words if and only if for all
is a multiple of 2#.
Now, pick a vector v 0 # S(C), and consider the polynomial W (x, y, z, w) given by
This has the following symmetries:
y, z, w),
y, z, w).
Furthermore, since only if
y, z, w).
To each of these symmetries we associate a 2 × 2 unitary matrix U such that (x, y) is
transformed according to U and (z, w) according to U. The first two symmetries generate
the complex group X_1, which is maximally finite in PU(2) by Theorem 6.5. On the other
hand, we can check directly that
even up to scalar multiplication. Thus the three symmetries topologically generate PU(2);
and hence W is invariant under any unitary matrix of determinant -1. Since hwe_C(x, y) =
W(x, y, x, y), it follows that G = O(2). But then
hwe C (x,
implying that C is generated by vectors of weight 2.
This completes the proof of Corollary 5.7.
7 Clifford groups for p > 2
Given an odd prime p, there again is a natural representation of the extraspecial p-group
of exponent p, this time in U(p m , C ); to be precise, E p (1) is generated by
transforms
is the m-th tensor power of E_p(1). The Clifford group C_m^(p) is then defined to be
the normalizer in U(p m , Q [# ap ]) of E p (m), where a one finds that
(cf. e.g. [51]).
As before, the invariants of these Clifford groups are given by codes:
Theorem 7.1 Fix integers N and m ≥ 1. The space of invariants of degree N for the
Clifford group C_m^(p) is spanned by cwe(C(m)), where C ranges over all self-dual codes over F_p
of length N containing 1.
Theorem 7.2 For any self-orthogonal code C over F p of length N containing 1 and of
dimension N/2 - r,|C (p)
0#i<r
where the sum is over all self-dual codes C # containing C (and in particular is 0 if no such
code exists).
Regarding maximal finiteness, the arguments we used to prove Theorem 5.6 do
not carry over to odd primes, since the groups C_m^(p) do not span a maximal order. Lindsey
[31] showed by group-theoretic arguments that C_1^(p) is a maximal finite subgroup of SL(p, C)
(cf. [6]); for m = 1 the theorem below follows from [21] and [26].
Theorem 7.3 Let p > 2 be a prime and m ≥ 1. If G is a finite group with C_m^(p) ≤ G ≤ U(p^m, C),
there exists a root of unity ζ such that G = ⟨ζ I⟩ · C_m^(p).
Proof. As before we may assume that G is contained in U(p m , K) for some abelian number
field K containing # p . Let L denote the set of rational primes l satisfying the following four
properties: (i) G is l-adically integral, (ii) l is unramified in K, (iii) |G| < |P GL(p m , l)|, (iv)
l splits completely in K. Since all but finitely many primes satisfy conditions (i)-(iii), and
infinitely many primes satisfy (iv) (by the -
Cebotarev Density Theorem), it follows that the
set L is infinite.
Fix a prime l over l # L. Since G is l-adically integral, we can reduce it mod l, obtaining
a representation of G in GL(p m , l). Since p is ramified in K, l #= p, so this representation
is faithful on the extraspecial group. Since the extraspecial group acts irreducibly, the
representation is in fact faithful on the entire Cli#ord group. Thus G mod l contains the
normalizer of an extraspecial group, but modulo scalars is strictly contained in PGL(p m , l)
(by condition (iii)). It follows from the main theorem of [30] that for p m
l and
coincide as subgroups of PGL(p m , l). For already follows from the
references in the paragraph preceding the theorem.
Fix a coset S of C (p)
m in G. For each prime l|l with l # L, the above argument implies
that we can choose an element g # S such that g # 1 (mod l). As there are infinitely many
such primes, at least one such g must get chosen infinitely often. But then we must actually
have in K, and since g has finite order, some root of unity # S .
Since this holds for all cosets S, G is generated by C (p)
together with the roots of unity
proving the theorem.
Remark 7.4 It is worth pointing out that the proof of the main theorem in [30] relies
heavily on the classification of finite simple groups, which is why we preferred to use our
alternative arguments when proving Theorem 5.6.
--R
Type II codes
Some extreme forms defined in terms of Abelian groups
Zur Galoiskohomologie definiter arithmetischer Gruppen
Mixed state entanglement and quantum error correction
Polynomial Invariants of Finite Groups
University of Chicago Press
in Theta functions (Bowdoin
The Cli
Une famille infinie de formes quadratiques entières
A group-theoretic framework for the construction of packings in Grassmannian spaces
Quantum error correction orthogonal geometry
Quantum error correction via codes over GF (4)
packings in Grassmannian space
Lattices and Groups
Induction and structure theorems for orthogonal representations of finite groups
Notices 5
On finite linear groups in dimension at most 10
Algebraic Number Theory
On the faithful representations
Weight polynomials of self-dual codes and the MacWilliams identities
The football
On certain groups defined by Sidelnikov (in Russian)
algorithms and error correction (in Russian)
The Subgroup Structure of the Finite Classical Groups
The Theory of Error-Correcting Codes
Finite quaternionic matrix groups
A simple construction for the Barnes-Wall lattices
Generalized self-dual codes and Clifford-Weil groups
The dimension formula for the ring of code polynomials in genus 4
Algebraic Groups and Number Theory
in Handbook of Coding Theory
On
On
The Schottky ideal
Codes and
Linear Representations of Finite Groups
A family of optimal packings in Grassmannian manifolds
On a finite group of matrices and codes on the Euclidean sphere (in Russian)
On a finite group of matrices generating orbit codes on the Euclidean sphere
Spherical 7-designs in 2 n -dimensional Euclidean space
Orbital spherical 11-designs in which the initial point is a root of an invariant polynomial (in Russian)
"designs"
The automorphism group of an extraspecial p-group
--TR
Codes and Siegel modular forms
A Family of Optimal Packings in Grassmannian Manifolds
A Group-Theoretic Framework for the Construction of Packings in Grassmannian Spaces
Spherical 7-Designs in 2^n-Dimensional Euclidean Space
--CTR
F. L. Chiera, Type II Codes over$$\mathbb{Z}/2k\mathbb{Z}$$, Invariant Rings and Theta Series, Designs, Codes and Cryptography, v.36 n.2, p.147-158, August 2005
K. Betsumiya , Y. Choie, Jacobi forms over totally real fields and type II codes over Galois rings GR(2m, f), European Journal of Combinatorics, v.25 n.4, p.475-486, May 2004 | invariants;spherical designs;self-dual codes;barnes-wall lattices;clifford groups |
501419 | Neighborhood aware source routing. | A novel approach to source routing in ad hoc networks is introduced that takes advantage of maintaining information regarding the two-hop neighborhood of a node. The neighborhood aware source routing (NSR) protocol is presented based on this approach, and its performance is compared by simulation with the performance of the Dynamic Source Routing (DSR) protocol. The simulation analysis indicates that NSR requires much fewer control packets while delivering at least as many data packets as DSR. | INTRODUCTION
On-demand routing protocols have been shown to be very
effective for ad hoc networks. The success of a caching algorithm
for an on-demand routing protocol depends on the strategies used for the deletion of links from the routing
strategies used for the deletion of links from the routing
cache [5]. DSR has been shown to incur less routing overhead
when utilizing a cache data structure based on a graph
representation of individual links (link cache), rather than
based on complete paths. DSR removes failed links from
the link cache when a ROUTE ERROR packet reports the
failure of the link and other links are removed by aging. The
lifetime of a link is estimated based on a node's perceived
stability of both endpoint nodes of the link. In some sce-
narios, the use of link caches has been shown to produce
overhead traffic as little as 50% of that incurred with path
caches [5].
The results reported on DSR indicate that a routing protocol
based on link-state information can make better routing
decisions than one based on path information, because the
freshness of routing information being processed can be determined
by the timestamp or sequence number assigned
by the head node of the links. On the other hand, in an
ad hoc network, it is very easy for a given node to learn
about the neighbors of its neighbors. Based on these obser-
vations, we introduce a new approach to link-state routing
in ad hoc networks based on on-demand source routing and
knowledge of links that exist in the two-hop neighborhood of
nodes. We call this approach the neighborhood aware source
routing (NSR) protocol. In NSR, a node maintains a partial
topology of the network consisting of the links to its immediate
neighbors (1-hop neighbors), the links to its 2-hop
neighbors, and the links in the requested paths to destinations
that are more than two hops away. Links are removed
from this partial topology graph by aging only, and the life-time
of a link is determined by the node from which the link
starts (head node of the link), re
ecting with a good degree
of certainty the degree of mobility of the node.
Section 2 describes NSR in detail. Section 3 presents the
performance comparison of NSR and DSR using a 50-node
network and eleven simulation experiments with different
numbers of sources and destinations. The simulation results
indicate that NSR incurs far less communication overhead
while delivering packets with the same or better delivery
rates as DSR using path caches. Section 4 presents our
conclusions.
2. NSR DESCRIPTION
2.1 Overview
To describe NSR, the topology of a network is modeled as a
directed graph, where each node in the graph has a unique
identifier and represents a router, and where the links connecting
the nodes are described by some parameters. A link
from node u to node v is denoted (u; v), node u is referred
as the head node of the link and node v is referred as the
tail node of the link. In this study it is assumed that the
links have unit cost.
Routers are assumed to operate correctly and information
is assumed to be stored without errors. All events are processed
one at a time within a finite time and in the order in
which they are detected. Broadcast packets are transmitted
unreliably and it is assumed that the link-level protocol can
inform NSR when a packet cannot be sent over a particular
link.
The source route contained in a data packet specifies the
sequence of nodes to be traversed by the packet. A source
route can be changed by the routers along the path to the
destination and its maximum length is bounded.
NSR does not attempt to maintain routes from every node
to every other node in the network. Routes are discovered
on an on-demand basis and are maintained only as long as
they are necessary.
Routers running NSR exchange link-state information and
source routes for all known destinations are computed by
running Dijkstra's shortest-path first on the partial topology
information (topology graph) maintained by the router.
If NSR has a route to the destination of a locally generated
data packet, a source route is added to the header of the
packet and it is forwarded to the next hop. Otherwise, NSR
broadcasts a route request (RREQ) to its neighbors asking
for the link-state information needed to build a source route
to the destination. If the neighbors do not have a path, a
RREQ is broadcast to the entire network and only the destination
of the data packet is allowed to send to the source
of the RREQ a route reply (RREP) containing the complete
path to the destination. A router forwarding a data packet
needs to send a route error (RERR) packet to the source of
the data only when it is not able to find an alternate path
to a broken source route or when the failure of a link in the
source route needs to be reported to the source of the data
packet in order to erase outdated link-state information.
NSR has the property of being loop-free at all times because
any change made to the path to be traversed by a data
packet does not include nodes from the traversed path.
A link to a new neighbor is brought up when NSR receives
any packet from the neighbor. The link to a neighbor is
taken down either when the link-level protocol is unable to
deliver unicast packets to the neighbor or when a timeout
has elapsed from the last time the router received any tra-c
from the neighbor.
A node running NSR periodically broadcasts to its neighboring
nodes a hello (HELLO) packet. HELLO packets have a
dual purpose: they are used to notify the presence of the
node to its neighbors and as a way of obtaining reasonably
up-to-date information about the set of links two hops away
from the node. By having such link-state information refreshed
periodically the nodes forwarding a data packet may
not need to notify the source of the packet when a repair is
done to a broken source route.
2.2 Routing Information Maintained in NSR
A node i running NSR maintains the node's epoch E i , the
node's sequence number SN i , the average lifetime of the
node's links to neighbors L i , the broadcast ID, the neighbor
table, the topology graph, the shortest-path tree, the data
queue, the RREQ history table, and the RERR history table
The node's epoch is incremented when the node boots up before
NSR starts its operation. The node's epoch is the only
data that needs to be kept in non-volatile storage because
it is used to maintain the integrity of routing information
across node's resets (a separate section describes its role in
NSR's operation).
NSR uses sequence numbers to validate link-state informa-
tion. All the outgoing links of a node are identied by the
same monotonically increasing sequence number. A node
increments its sequence number when it needs to send a
control packet and a link was brought up or taken down
since the last time a control packet was transmitted.
The lifetime of the node's links to neighbors is updated periodically
by applying a decay factor to the current lifetime
and averaging the time the links to neighbors are up.
The broadcast ID is used with the node's address to uniquely
identify a RREQ packet. The broadcast ID has its value
incremented when a new RREQ packet is created.
An entry in the neighbor table contains the address of a
neighbor, the time the link to the neighbor was brought
up, the neighbor ID, and a delete flag. When the link to a
neighbor is taken down the delete flag is set to 1 to mark the
entry as deleted. The neighbor table has 255 entries, and the
neighbor ID consists of the number of the entry in the table.
An entry marked as deleted is reused when the link for the
neighbor listed in the entry is brought up. This guarantees
that the neighbor ID is preserved for some time across link
failures, which is useful when building source routes based
on these IDs, as described later.
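A minimal sketch of such a table, assuming a Python representation with illustrative field names (the 255-entry size and the reuse of entries marked deleted follow the description above):

import time

class NeighborTable:
    SIZE = 255

    def __init__(self):
        self.entries = [None] * self.SIZE     # each slot: None or {"addr", "up_time", "deleted"}

    def link_up(self, addr):
        # If the neighbor already has a slot (possibly marked deleted), reuse it,
        # so the neighbor ID survives short link failures.
        for i, e in enumerate(self.entries):
            if e is not None and e["addr"] == addr:
                e["deleted"], e["up_time"] = False, time.time()
                return i                      # neighbor ID = slot index
        for i, e in enumerate(self.entries):  # otherwise take an empty slot first
            if e is None:
                self.entries[i] = {"addr": addr, "up_time": time.time(), "deleted": False}
                return i
        for i, e in enumerate(self.entries):  # finally recycle a deleted slot
            if e["deleted"]:
                self.entries[i] = {"addr": addr, "up_time": time.time(), "deleted": False}
                return i
        raise RuntimeError("neighbor table full")

    def link_down(self, addr):
        for e in self.entries:
            if e is not None and e["addr"] == addr:
                e["deleted"] = True           # keep the slot so the ID is preserved for a while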
The topology graph is updated with the state of the links
reported in both control packets and data packets. The
parameters of each link (u; v) in the topology graph consists
of the tuple (sn; cost; lifetime; ageTime; nbrID), where
sn is the link's sequence number, cost is the cost of the link,
lifetime is the link's lifetime as reported by u, ageTime is
the age-out time of the link, and nbrID is the neighbor ID
assigned by node u to its neighbor v.
Link-state information is only deleted from the topology
graph due to aging. The lifetime of a link is determined by
the head node of the link. All the outgoing links of a node
are considered to have the same lifetime, which is computed
according to a function that estimates the average time a
link to a neighbor is up.
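The per-link record described above could be represented as follows; a sketch with assumed names (sn, cost, lifetime, age_time, nbr_id map directly to the tuple in the text), keyed by the (head, tail) pair:

from dataclasses import dataclass

@dataclass
class LinkState:
    sn: int          # sequence number assigned by the head node
    cost: int        # link cost (unit cost in this study)
    lifetime: float  # lifetime reported by the head node, in seconds
    age_time: float  # absolute time at which the entry ages out
    nbr_id: int      # neighbor ID the head node assigned to the tail

topology = {}        # topology graph: (head, tail) -> LinkState

def update_link(topology, head, tail, reported, now, is_newer):
    """Accept the reported state only if is_newer(reported.sn, stored.sn),
    where is_newer is the wrap-around sequence-number comparison."""
    current = topology.get((head, tail))
    if current is None or is_newer(reported.sn, current.sn):
        reported.age_time = now + reported.lifetime
        topology[(head, tail)] = reported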
The shortest-path tree is obtained by running Dijkstra's
shortest-path on the topology graph when a control packet
is received, when the state of an outgoing link changes, when
a link ages-out, and periodically if the shortest-path tree has
not been updated within a given time interval.
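Source routes then fall out of a shortest-path computation over this graph; a compact Dijkstra sketch under the same assumed data layout (aged-out links skipped):

import heapq

def source_route(topology, src, dst, now):
    """Return the node sequence src..dst over non-expired links, or None."""
    adj = {}
    for (u, v), ls in topology.items():
        if ls.age_time > now:
            adj.setdefault(u, []).append((v, ls.cost))
    dist, prev, heap = {src: 0}, {}, [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue                           # stale heap entry
        for v, c in adj.get(u, []):
            if d + c < dist.get(v, float("inf")):
                dist[v], prev[v] = d + c, u
                heapq.heappush(heap, (d + c, v))
    if dst not in dist:
        return None
    path = [dst]
    while path[-1] != src:
        path.append(prev[path[-1]])
    return path[::-1]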
Data packets locally generated at the node waiting for a
route to the destination are kept in the data queue. The
packet at the tail of the queue is deleted when the queue is
full and a new packet arrives to be enqueued. A packet is
also deleted from the queue if a certain period of time has
elapsed since it was inserted into the queue. A leaky bucket
controls the rate at which data packets are dequeued for
transmission in order to reduce the chances of congestion.
Each node maintains in the RREQ history table, for a spe-
cic length of time, a record with the source address and
broadcast ID of each RREQ received. A node that receives
a RREQ with a source address and broadcast ID already
listed on the table does not forward the packet.
A node can send a RERR packet for a node src, through
neighbor nbr, as a consequence of processing a data packet
sent by node src to destination dst, after detecting the failure
of the link (u; v), only if there is no entry in the RERR
history table for the tuple (src; dst; u; v; nbr). An entry is
deleted from the table after a certain period of time has
elapsed since it was created. The RERR history table is a
mechanism used to avoid the generation of a storm of RERR
packets reporting the failure of the same link.
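Both history tables behave as duplicate filters with per-entry expiry; a sketch of that idea, using the table sizes and entry lifetimes listed later in Section 3.3:

class HistoryTable:
    def __init__(self, lifetime, max_entries):
        self.lifetime, self.max_entries = lifetime, max_entries
        self.seen = {}                               # key -> expiry time

    def check_and_record(self, key, now):
        """Return True if the key is already recorded (and not expired)."""
        self.seen = {k: t for k, t in self.seen.items() if t > now}   # drop expired entries
        if key in self.seen:
            return True
        if len(self.seen) < self.max_entries:
            self.seen[key] = now + self.lifetime
        return False

# RREQ duplicates are keyed by (source address, broadcast ID);
# RERR suppression is keyed by (src, dst, failed-link head, failed-link tail, neighbor).
rreq_history = HistoryTable(lifetime=30, max_entries=200)
rerr_history = HistoryTable(lifetime=5, max_entries=200)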
2.3 Routing Information Exchanged by NSR
NSR can generate four types of control packets: RREQ,
RREP, RERR, and HELLO. Routing information is also
sent in the header of data packets. RREQ and HELLO
packets are broadcast unreliably and RREP and RERR are
transmitted reliably as unicast packets. The link to the next-hop
along the path to be traversed by RREP and RERR
packets is taken down if after several retransmissions the
link-layer fails to deliver the packet to the intended neighbor.
All the packets transmitted by NSR have a field that keeps
track of the number of hops traversed by the packets. A
packet is not forwarded if it has traveled MAX PATHLEN
hops, however, the link-state information it carries is processed
RREP, RERR, and data packets contain a source route. The
source route consists of the sequence of nodes to be traversed
by the packet. The identification of a node in a source route
does not need to be the node's address, it can be the neighbor
ID assigned by the node that precedes it in the source
route. The neighbor ID is encoded in 1 byte, representing
a signicant reduction in the overhead added by the source
route in a data packet when the addresses of the nodes are
encoded in several bytes (e.g., 4 bytes in IPv4 [6], or 16 bytes
in IPv6 [2]).
Every packet but HELLOs is updated with the state of the
link over which it was received (the receiving node is the
head of the link and the neighbor which sent the packet is
the tail of the link). The source of a data packet also adds to
the source route the sequence number of the links along the
path to be traversed. The receiver node processing a source
route updates its topology graph with the state of the links
traversed by the packet.
The link state information (LSI) reported by NSR for a given
link consists of the cost the link (encoded in 1 byte), the
sequence number of the head of the link (encoded in 2 bytes),
and the lifetime of the link (encoded in 4 bits). LSIs reported
in control packets have an extra field (encoded in 1 byte):
the neighbor ID assigned by the head of the link to the tail
node.
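One possible byte layout for such an LSI (the field widths are the ones stated above; the exact packing is an assumption for illustration):

import struct

def pack_lsi(cost, sn, nbr_id, lifetime_code):
    # cost: 1 byte, sn: 2 bytes, nbr_id: 1 byte; the 4-bit lifetime code is
    # carried here in the low nibble of a fifth byte.
    assert 0 <= cost < 256 and 0 <= sn < 65536 and 0 <= nbr_id < 256
    assert 0 <= lifetime_code < 16
    return struct.pack("!BHBB", cost, sn, nbr_id, lifetime_code)

def unpack_lsi(buf):
    cost, sn, nbr_id, lifetime_code = struct.unpack("!BHBB", buf)
    return cost, sn, nbr_id, lifetime_code & 0x0F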
A node relaying RREQ, RREP, and RERR packets adds to
the packet its neighborhood link state (NLS) which consists
of the LSIs for outgoing links to neighbors. The NLS also
contains the partialLSI flag which is set when the node reports
a partial list of its outgoing links due to packet size
constraints. All the links in an NLS have the same sequence
number and lifetime. After processing the LSIs received
in an NLS with the partialLSI flag not set, the node sets
to infinity the cost of all the links in the topology graph
having the same head node of the NLS but with a smaller
sequence number, and the sequence number of these links
is updated with the sequence number reported in the NLS.
Consequently, the node that advertises its NLS does not
have to report the set of links that were removed from its
NLS due to failures.
HELLO packets carry the node's NLS and are not relayed
by the receiving node. The receiver of a HELLO processes
the NLS reported in the packet in the same way NLSs are
processed when received in RREQ packets.
A RREQ packet contains the source node's address, the des-
tination's address, the maximum number of hops it can tra-
verse, and a broadcast ID which is incremented each time
the source node initiates a RREQ (the broadcast ID and the
address of the source node form a unique identier for the
RREQ). Two kinds of RREQs are sent: non-propagating
RREQs which can travel at most one hop, and propagating
RREQs which can be relayed by up to MAX PATHLEN
nodes.
A RERR packet is generated due to the failure of a link in
the source route of a data packet. The RERR contains the
source route received in the data packet, the head node of
the failed link, the LSIs having as head node the head of the
failed link, and the LSIs for the links in the alternate path
to the destination (if any).
2.4 Operation of NSR
The NSR protocol is composed of four mechanisms that
work together to allow the reliable computation of source
routes on an on-demand basis:
Connectivity Management: by which a node can
learn the state of those links on the path to nodes two
hops away. The cost of repairing a source route due
to link failures can be significantly reduced by having
available up-to-date state of such links.
Sequence Number Management: by which the
sequence number used in the validation of link-state
information is updated such that its integrity is preserved
across node-resets and network partitions. This
mechanism also ensures that RREQs are uniquely identified across node-resets.
Route Discovery: by which the source of a data
packet obtains a source route to the destination when
the node does not already know a route to it.
Route Maintenance: by which any node relaying a
data packet is able to detect and repair a source route
that contains a broken link, and by which the source of
a data packet is able to optimize source routes. A broken
source route may be repaired multiple times until
it reaches the destination without needing to notify
the source of the data packet of such repairs.
2.4.1 Connectivity Management
This mechanism is responsible for determining the node's
lifetime L i and the state of the links to neighboring nodes.
The link to a new neighbor is brought up when any packet
is received from the neighbor. The link to a neighbor is
taken down if the node does not receive any packet from the
neighbor for a given period of time.
HELLO packets are broadcast periodically and have their
transmission rescheduled when RREQ packets are transmitted.
The node's lifetime L i is recomputed periodically based on
the average time the links to neighbors are up. The minimum
lifetime Lmin is assumed to be seconds and the maximum
lifetime Lmax is assumed to be 1800 seconds. When
reported in an LSI the node's lifetime is encoded in 4 bits
after being rounded down to the nearest of one of the following
values (in seconds): Lmin , 45, 60, 75, 90, 105, 120,
150, 165, 180, 240, 360, 480, 900, and Lmax .
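Rounding down to the nearest table entry and shipping its index fits the 4-bit lifetime field; a sketch (the Lmin value is elided in the text, so 30 seconds is used here purely as a placeholder):

L_MIN, L_MAX = 30.0, 1800.0     # L_MIN is a placeholder; the text leaves Lmin unspecified
LIFETIME_TABLE = [L_MIN, 45, 60, 75, 90, 105, 120,
                  150, 165, 180, 240, 360, 480, 900, L_MAX]

def encode_lifetime(seconds):
    """Index of the largest table value not exceeding `seconds` (rounded down)."""
    code = 0
    for i, v in enumerate(LIFETIME_TABLE):
        if seconds >= v:
            code = i
    return code

def decode_lifetime(code):
    return LIFETIME_TABLE[code]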
2.4.2 Sequence Number Management
NSR works with the assumption that only the source of data
packets be notified of the failure of a link in the path being
traversed by a data packet. Given that the cost of a link
may change over time without being noticed by some nodes
in the network that have the link in their topology graphs,
link-state information must be aged-out to prevent routers
from keeping stale routes. A node can ascertain whether
the link-state information reported by a neighbor is valid
by comparing the sequence number in the LSI against the
sequence number stored in the topology graph for the same
link. The router considers the received LSI as valid if its
sequence number is greater than the sequence number stored
for the same link, or if there is no entry for the link in the
topology graph.
The sequence number used in the validation of link-state information
consists of two counters maintained by the head
node i of the link: the node's epoch E i and the node's sequence
number SN i . Both E i and SN i are encoded in one
byte each and have a value in the range [1, 254]. It is assumed
that SN i wraps around when its value is either 127
or 254 and it is incremented.
The value of SN i is incremented whenever the router needs
to advertise changes to its NLS. It is assumed that the time
interval between node resets is greater or equal to Lmin =
seconds, and that SN i should wrap around in at least Lmax
seconds, so that any node other than the head
i of a link with lifetime set to Lmax will have deleted the
link from its topology graph by aging before E i and SN i
wrap around. If SN i wraps around before Lmax seconds
have elapsed since the previous wrap around, then E i is
incremented and SN i is set to 1.
The procedure used to determine whether a value X based
on SN i or E i is greater than a value Y is shown in Figure 1.
When SN i gets incremented to a value sn greater than 127
then all the nodes in the network have already aged-out all
the links reported by i with SN i in the range [1, sn - 127].
Likewise, when SN i gets incremented to a value sn smaller
than 128 then all the routers have already aged-out all the
links reported by i with SN i in the range [128, sn
From the perspective of any node x 6= i in the network, the
combination of E i and SN i is seen as an unbounded counter
because the values of E i and SN i have a lifetime.
Figure 1: Comparing values derived from E i or SN i (pseudocode for the wrap-around comparison returning whether X is greater than Y)
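The comparison of Figure 1 can be sketched as follows, as one interpretation consistent with the wrap-around rule described above (values in [1, 254], half-window of 127); this is a reconstruction, not the verbatim figure:

def seq_greater(x, y):
    """Wrap-around comparison for E_i / SN_i values in the range [1, 254].

    x is treated as newer than y when it is at most 127 steps ahead of y
    modulo 254.
    """
    if x == y:
        return False
    if x > y:
        return x - y <= 127
    return y - x > 127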
The sequence number of a received LSI for the link (u; v)
is greater than the sequence number stored in the topology
graph for the same link if the Eu component of the LSI's
sequence number is greater than the respective Eu stored
in the topology graph, or if the epochs are the same but
the SNu component of the LSI's sequence number is greater
than the respective SNu component in the topology graph.
The broadcast ID set by node i in a RREQ packet also
consists of two counters: the node's epoch E i and a 4-byte
sequence number B i . The source of a RREQ increments B i
before creating the RREQ for transmission. By having E i
as part of the broadcast ID, RREQs are uniquely identied
across node resets.
2.4.3 Route Discovery
When NSR receives a data packet from an upper-layer and
the router has a source route to the destination, the source
route is inserted into the packet's header and the packet is
forwarded to the next hop towards the destination. Oth-
erwise, NSR inserts the data packet into the data queue
and initiates the route discovery process, if there is none
already in progress, for the data packet's destination by
broadcasting a non-propagating RREQ. By sending non-propagating
RREQs, NSR prevents unnecessary
flooding
when some neighbor has a source route to the required des-
tination. If none of the neighbors send a RREP within a
timeout period, a propagating RREQ is transmitted. Each
time a propagating RREQ is transmitted the timeout period
is doubled until a pre-defined number of attempts have been
made, after which it is kept constant. After a pre-defined
number of RREQs have been transmitted for a given desti-
nation, the route discovery process is restarted by sending
a non-propagating RREQ if the data queue holds a packet
for the destination.
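The request schedule (one non-propagating attempt, then propagating attempts with a doubling, capped timeout) could be driven by a loop like this sketch; the timeout constants are those of Section 3.3 and the send/have_route callbacks are hypothetical:

import time

def discover_route(dst, have_route, send_nonprop_rreq, send_prop_rreq, max_attempts=8):
    timeout = 0.5                            # initial RREQ timeout (Section 3.3)
    send_nonprop_rreq(dst)                   # ask 1-hop neighbors first, to avoid flooding
    for _ in range(max_attempts):
        time.sleep(timeout)
        if have_route(dst):
            return True
        send_prop_rreq(dst)                  # network-wide request, up to MAX_PATHLEN hops
        timeout = min(timeout * 2, 10.0)     # doubled each attempt, capped at 10 s
    return have_route(dst)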
When a node receives a RREQ, it processes all the LSIs in
the packet and then checks whether it has seen it before by
comparing the source address and the broadcast ID from the
RREQ against the entries in the RREQ history table. The
RREQ is discarded if the node has already seen it before,
otherwise the node is said to have received a valid RREQ,
and an entry is added to the RREQ history table with the
values of the RREQ's source address and broadcast ID. Non-propagating
RREQs are always considered valid RREQs.
The receiver node of a non-propagating RREQ sends a RREP
if it has a source route to the destination of the RREQ.
Figure 2: Link-state information learned from RREQ and RREP
Figure 3: Repairs that can be applied to a source route in a RREP
Since
a RREP to a non-propagating RREQ is not generated by
the destination of the RREQ, the lifetime of the LSIs reported
in the RREP must correspond to the time left for
being aged-out from the node's topology graph.
If the node processing a valid RREQ is the destination of
the RREQ then it sends a RREP back to the source of the
RREQ. The source route contained in the RREP consists of
the reversed path traversed by the RREQ packet. A node
other than the destination of a valid RREQ adds its NLS
into the packet before broadcasting it. Likewise, a node
other than the destination of a RREP adds its NLS into
the packet before forwarding it. The link-state information
learned from RREQs and RREPs increases the chances of
a node finding a source route in the topology graph and,
consequently, increases the likelihood of replying to non-propagating
RREQs. As an example, consider the network
topology shown in Figure 2(a), where solid lines indicate the
path traversed by the first RREQ packet received by destination
node e from the source node a. The dashed lines
in
Figure
2(b) represent the links learned from the RREP
received by node a from destination e. The dashed lines
in
Figure
2(c) represent the links learned from the RREQ
received by node e from node a. The solid lines in Figures
2(b) and 2(c) correspond to those links learned from
HELLO packets.
A node forwarding a RREP packet may change its source
route if the link to the next hop has failed. The broken
source route can be repaired if the node is able to find an
alternate path having at most 2-hops to any of the nodes
in the path to be traversed by the RREP packet. The order
with which the nodes from the broken source route are
visited when seeking an alternate path is from the tail node
towards the node that corresponds to the tail of the failed
link.
Figure
3 shows the types of repairs that can be applied
to the source route (shown in solid lines) in a RREP:
in
Figure
3(a) the failure of the link (b; c) causes node b to
replace links (b; c) and (c; d) by (b; d), in Figure 3(b) the
failure of the link (b; c) causes node b to replace links (b; c)
and (c; d) by (b; f) and (f; d), and in Figure 3(c) the failure
of the link (b; c) causes node b to replace link (b; c) by (b; f)
and (f; c). An extra-hop can be added to a broken source
route only if the length of the new path does not exceed
MAX PATHLEN hops.
2.4.4 Route Maintenance
A node forwarding a data packet attempts to repair the
source route when either the link to the next hop or the
link headed by the next hop in the path to be traversed
has failed. The repair consists in finding an alternate path
to the destination of the data packet, and may involve the
transmission of a RERR packet to the source of the data
packet.
The repair made by a forwarding node to the source route of
a data packet does not trigger the transmission of a RERR
packet if the following rules are satisfied:
Rule-1: the node processing the packet is listed in the
original source route received in the data packet.
Rule-2: one of the nodes not yet visited by the data
packet but listed in the original source route is at most
two hops away from the router itself in the repaired
source route.
The path traversed by a RERR packet consists of the reversed
path traversed by the data packet, having as destination
the source of the data packet that triggered its trans-
mission. Rule-1 and Rule-2 guarantee that the source of the
data packet is notified of all the link failures present in its
source route. When the RERR reaches its destination, the
source of the data packet updates its topology graph and
recomputes its shortest-path tree.
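Put differently, a forwarding node stays silent about a repair only when it is part of the original route and the detour re-joins that route within two hops; a sketch of that check (argument names are illustrative):

def repair_needs_rerr(original_route, visited_prefix, me, repaired_suffix):
    """Apply Rule-1 and Rule-2 to a repaired source route.

    original_route : node list carried in the data packet as received
    visited_prefix : the part of original_route already traversed
    me             : the node doing the repair
    repaired_suffix: the alternate path from `me` to the destination
    Returns True when a RERR must be sent back to the source.
    """
    # Rule-1: the repairing node must itself appear in the original source route.
    if me not in original_route:
        return True
    # Rule-2: some not-yet-visited node of the original route must lie within
    # the next two hops of the repaired path.
    not_visited = set(original_route) - set(visited_prefix)
    within_two_hops = set(repaired_suffix[1:3])
    return not (not_visited & within_two_hops)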
Figure
4 illustrates the cases where repairs can be applied
to the source route in a data packet without triggering the
generation of RERRs. The links shown as solid lines in Figure
4 correspond to the source route carried by data packets
originated at node a with destination f , and the dashed lines
represent the links added to the new source route repaired
by the nodes indicated with filled circles.
Figure
4(a) illustrates the fact that a node considers a source
route as broken if any of the links in the next two hops
following the node processing the packet has failed. In this
particular case, node b receives a HELLO packet from node
c reporting the failure of link (c; d) before b receives a data
packet to be forwarded. The node forwarding a data packet
may not have in its topology graph the link that is one
hop away in the source route. In order to prevent the node
from dropping the data packet, NSR allows the packet to
be forwarded if the sequence number in the source route for
the missed link is greater than the sequence number of any
link reported by the neighbor.
Figure 4: Types of repairs that can be applied to a source route in a data packet
The failure of the link (b; c) shown in Figure 4(c) makes the
links (b; c) and (c; d) in the source route be replaced by the
links (b; g) and (g; d). The failure of the link (b; c) shown in
Figure
4(d) causes the link (b; c) be replaced by the link (b; g)
and the source route be extended in one hop by adding link
(g; c) to it. The failure of the link (b; c) shown in Figure 4(e)
causes the link (b; c) be replaced by link (b; g) and the source
route be shortened in one hop by replacing the links (c; d)
and (d; e) by link (g; e).
If node g shown in Figure 4 receives a data packet with the
source route repaired by node b and it detects the source
route is broken, a RERR packet needs to be transmitted
(even if g has an alternate path) since Rule-1 is not satisfied
when the node attempts repairing the route.
Figure
5 illustrates the cases that trigger the transmission
of a RERR packet when a source route in a data packet
is detected to be broken. The links shown as solid lines
in
Figure
5 correspond to the source route carried by data
packets originated at node a with destination f , and the
dashed lines represent the links added to the new source
route repaired by the nodes indicated with filled circles.
The generation of RERR packets reporting the failure of the
same link is spaced by some time interval if the source route
being processed was generated by the same source node, and
the data packet is for the same destination, and the data
packet was received from the same neighbor that caused the
transmission of the previous RERR packet.
Figure 5: Broken source-routes leading to transmission of RERR packets
This mechanism
prevents the generation of a RERR packet for every
data packet in transit carrying the same source route. After
transmitting a RERR packet the node updates its RERR
history table by adding an entry with information about the
RERR packet.
The failure of the link (c; d) shown in Figure 5(a) causes
node c to transmit a RERR packet to the source of a data
packet received for forwarding. The RERR packet reports
the new source route to destination f , which consists of the
links (c; g), (g; h), (h; i), and (i; f) instead of the links (c; d),
(d; e), and (e; f ). The data packet being processed has its
source route updated accordingly and is forwarded to node
g. When node b receives the RERR packet its topology
graph is updated with the link-state information reported
in the packet, its shortest-path tree is recomputed, its NLS
is added to the packet, and the packet is then forwarded to
a.
Node b shown in Figure 5(b) adds to the RERR packet received
from node c an alternate path to f before forwarding
the packet to a. NSR allows the node forwarding a RERR
to add an alternate path to the RERR packet only if it is a
neighbor of the head node of the failed link that triggered
the generation of the RERR packet.
Node a shown in Figure 5(c) receives a RERR packet not
reporting an alternate path to the destination.
Figure 6: Using neighbor IDs in source routes
Node b shown in Figures 5(d) and 5(e) does not forward
the RERR packet because it has an alternate path to the
destination. The next data packet it receives from a has the
source route repaired with the alternate path.
2.5 Using Neighbor IDs in Source Routes
The source route given in a data packet can be formed by
the sequence of neighbor IDs mapped to each link along the
path to be traversed by the packet, instead of being formed
by node addresses. Such an approach makes the source route
very compact and allows more data to be carried in each packet.
As an example consider the scenarios depicted in Figure 6.
The numbers beside links shown as solid lines in Figure 6
correspond to the source route carried by data packets originated
at node a with destination f , and the dashed lines
represent the links added to the new source route repaired
by the nodes indicated with filled circles. The number beside
a link is the neighbor ID given by the head of the link
to the tail node. The neighbor table contains the mapping
between neighbor ID and the address of a neighbor. The
entry for the node with neighbor ID 5 is not deleted from
the neighbor table of node b when the link (b; c) fails (Fig-
ure 6(a)), allowing b to get the address of c and repair the
source route accordingly. Because links are deleted from the
topology graph only due to aging, node b in Figure 6(b) is
able to identify the tail of the failed link (c; d) by looking
for all the links in the topology graph having node c as the
head of the link and a neighbor ID 5.
The source route received by node f in the data packet
sourced at a contains the state of all the links in the reversed
path traversed by the data packet. With the failure of either
link (b; c) or link (c; d) the source route received by node f
contains LSIs for the links (d; g) and (g; b). Node f cannot
update the topology graph with the state of (d; g) and (g; b)
if the links are not part of f 's topology graph and the source
route lists neighbor IDs instead of node's addresses. The
likelihood of finding alternate paths to destinations increases
with up-to-date link-state information carried in data pack-
ets, especially when the data flows are bidirectional. For
this reason, the sources of data packets are required to periodically
use addresses instead of neighbor IDs in the source
routes.
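Encoded this way an n-hop route costs n bytes rather than n full addresses; a sketch of the encoding and of the per-hop lookup (reusing the neighbor-table sketch from Section 2.2; names are assumptions):

def encode_route_as_ids(route_nodes, nbr_id_of):
    """route_nodes: [src, n1, ..., dst]; nbr_id_of(head, tail) -> 1-byte neighbor ID.

    Each hop is replaced by the ID the head of the hop assigned to the tail,
    so the whole route fits in len(route_nodes) - 1 bytes.
    """
    return bytes(nbr_id_of(h, t) for h, t in zip(route_nodes, route_nodes[1:]))

def next_hop_address(my_neighbor_table, nbr_id):
    """A forwarding node resolves the next hop from its own neighbor table."""
    entry = my_neighbor_table.entries[nbr_id]
    return None if entry is None else entry["addr"]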
3. PERFORMANCE EVALUATION
We run a number of simulation experiments to compare the
average performance of NSR with respect to DSR. Both
NSR and DSR use the services of a medium access (MAC)
protocol based on an RTS-CTS-DATA-ACK packet exchange
for unicast traffic (similar to the IEEE 802.11 standard [1]).
The promiscuous mode of operation is disabled on DSR because
the MAC protocol uses multiple channels to transmit
data. (Both NSR and DSR might benefit from having
the node's network interface running in promiscuous mode.)
The physical layer is modeled as a frequency hopping spread
spectrum radio with a link bandwidth of 1 Mbit/sec, accurately
simulating the physical aspects of a wireless multi-hop
network.
3.1 Mobility Pattern
The simulation experiments use 50 nodes moving over a rect-
angular flat space of 5Km x 7Km and initially randomly
distributed at a density of 1.5 nodes per square kilometer.
Nodes move in the simulation according to the random way-point
model [3]. In this model, each node begins the simulation
by remaining stationary for pause time seconds, it then
selects a random destination and moves to that destination
at a speed of 20 meters per second for a period of time uniformly
distributed between 5 and 11 seconds. Upon reaching
the destination, the node pauses again for pause time sec-
onds, selects another destination, and proceeds there as previously
described, repeating this behavior for the duration
of the simulation.
The simulation experiments are run for the pause times of 0,
15, 30, 45, 60, 90, and 900 seconds, and the total simulated
time in all the experiments is 900 seconds. A pause time of
0 seconds corresponds to the continuous motion of the nodes,
and in a pause time of 900 seconds the nodes are stationary.
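For reference, one pause-and-move cycle of this movement pattern can be reproduced in a few lines; a sketch following the description above (pause, uniform random destination, 20 m/s for a uniform 5-11 s leg):

import random

AREA_X, AREA_Y = 5000.0, 7000.0     # 5 Km x 7 Km, in meters
SPEED = 20.0                        # meters per second

def random_waypoint_step(pos):
    """One move leg for a single node; returns the new position.

    The pause of pause_time seconds precedes this step; only the geometry
    of the movement is computed here.
    """
    x, y = pos
    dst = (random.uniform(0, AREA_X), random.uniform(0, AREA_Y))
    leg = random.uniform(5.0, 11.0)                  # seconds of movement
    dx, dy = dst[0] - x, dst[1] - y
    dist = (dx * dx + dy * dy) ** 0.5
    travel = min(SPEED * leg, dist)                  # stop early if the destination is reached
    if dist > 0:
        x, y = x + dx / dist * travel, y + dy / dist * travel
    return (x, y)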
3.2 Data Traffic Model
The overall goal of the simulation experiments is to measure
the ability of the routing protocols to react to changes in
the network topology while delivering data packets to their
destinations. The aggregate tra-c load generated by all the
ows in a simulated network consists of
and the size of a data packet is 64 bytes. The data tra-c
load was kept small to ensure that congestion of links is due
only to heavy control tra-c.
We applied to the simulated network three dierent communication
patterns: a pattern of N-sources and N-destinations
a pattern of N-sources and 8-destinations (Nsrc-
8dst), and a pattern of N-sources and 1-destination (Nsrc-
1dst). For each communication pattern we run a number of
simulation experiments with dierent number of data
ows.
The data
ows consist of continuous bit rate tra-c, all the
ows in a simulation experiment generate tra-c at the same
data rate, and each node is the source of no more than one
data
ow. We run four simulation experiments with 8, 16,
32, and 50 sources for both the Nsrc-Ndst and the Nsrc-1dst
patterns, and 3 simulation experiments with 16, 32, and 50
sources for the Nsrc-8dst pattern. The data
ows are started
at times uniformly distributed between 10 and 120 seconds
of simulated time.
3.3 Protocol Configuration
The values for the constants controlling DSR operation during
the simulations are those present in the ns-2 implementation
of DSR [4]. The values for the constants controlling
NSR operation are listed below:
Time between successive transmissions of RREQs for
the same destination is 0.5 seconds. This time is doubled
with each transmission and is kept constant at 10
seconds with the transmission of the sixth RREQ.
The minimum lifetime of an LSI is seconds, and the
maximum lifetime is 1800 seconds.
The average time interval between the transmission of
HELLO packets is 59 seconds with a standard deviation
of 1 second.
The maximum number of entries in the data queue,
RREQ history table, and RERR history table is 50,
200, and 200, respectively.
The lifetime of an entry in the data queue, RREQ history
table, and RERR history table is 30, 30, and 5
seconds, respectively.
Data packets with a source route are removed from the
data queue for transmission spaced from each other by
50 milliseconds.
The maximum number of nodes (MAX PATHLEN)
traversed by any packet is 10.
3.4 Simulation Results
Figures
7, 8, and 9 summarize the comparative performance
of NSR and DSR in all the simulation experiments. The
coordinates having a value of 100 in the x-axis of Figures 10
and 11 correspond to the results obtained for networks with
stationary nodes.
The number of control packets generated by NSR falls in
the range [950, 6490], while for DSR it is in the range [260,
83010]. From
Figure
7 we can see that DSR is able to
generate less than 6490 packets in only 24% of the experi-
ments. The benefits brought by the ability of NSR to repair
source routes without needing to inform the source of the
data packets are noticeable when N sources send data packets
to the same destination (Nsrc-1dst traffic pattern): most
of the ROUTE REPLIES generated by nodes running DSR
carry stale routing information, which leads to the transmission
of more ROUTE REQUESTS. And, as shown in Figures
10(d), 10(e), and 10(f), this behavior is independent of
the pause time (except when the nodes are stationary and
the number of data
ows is low). NSR may generate more
control packets than DSR in some scenarios because HELLO
packets are transmitted periodically. These are the cases depicted
in Figures 10(a), 10(b) and 10(d) when the nodes do
not move.
Figure
shows that the number of control packets transmitted
by nodes running NSR deviates very little from the
average among dierent pause times while DSR presents a
large deviation from the average, especially when comparing
the experiments with high node mobility against the experiments
with stationary nodes. We can also see that the type
of workload introduced in the network makes DSR behave
in an unpredictable manner while NSR is affected very little.
In the simulation experiments with the Nsrc-Ndst traffic
pattern DSR generates up to 9.9 times more control packets
than NSR, with the Nsrc-1dst traffic pattern DSR generates
up to 36.6 times more control packets, and with the Nsrc-8dst
traffic pattern DSR generates up to 15.7 times more control
packets than NSR.
We observe from Figure 8(a) that, on average, NSR is able
to deliver to the destinations more data packets than DSR:
in 79% of the simulation experiments NSR delivered more
than 50% of the data packets generated while DSR delivered
more than 50% in 59% of the simulation experiments.
The lack of a source route to the destination in the experiments
with the Nsrc-1dst traffic pattern caused DSR to discard
a high number of packets while they awaited a route (Figure 8(c)).
Figure
11 gives the average performance of NSR and DSR
in terms of the percentage of data packets delivered to the
destinations for a given pause time and traffic pattern. We
see that more packets are delivered as the
nodes become less mobile. This behavior is expected because
all the packets enqueued for transmission at the link-layer
are dropped after link failures, and link failures occur
less frequently when the nodes in the network become more
stationary.
The end-to-end delay experienced by data packets routed
by NSR (Figure 9) is similar to the end-to-end delay experienced
by data packets routed by DSR. The curve shown in
Figure
9(b) shows data packets routed by DSR having a
smaller end-to-end delay than those routed by NSR. This
is because the percentage of data packets delivered by DSR
for the Nsrc-1dst traffic pattern was very low compared to
NSR. We observe that most of the data packets routed by
NSR traversed from 2 to 3 hops, while most of the packets
routed by DSR traversed from 2 to 4 hops.
4. CONCLUSIONS
We have presented the neighborhood aware source routing
protocol (NSR), which we derived from the performance improvements
observed in DSR when link caches were used,
and the ease with which nodes can inform their neighbors of
their own neighbors. The key feature of NSR is that nodes
reduce the effort required to fix source routes due to node
mobility by using alternate links available in their two-hop
neighborhood. Simulations demonstrate the advantages derived
from the availability of such alternate paths. Future
work focuses on comparing by simulation the performance
of NSR with DSR when link caches are used.
5. REFERENCES
--R
ns Notes and Documentation.
Caching Strategies in On-Demand Routing Protocols for Wireless Ad Hoc Networks
Internet Protocol.
--TR
A performance comparison of multi-hop wireless ad hoc network routing protocols
Caching strategies in on-demand routing protocols for wireless ad hoc networks
--CTR
J. J. Garcia-Luna-Aceves , Marc Mosko , Charles E. Perkins, A new approach to on-demand loop-free routing in networks using sequence numbers, Computer Networks: The International Journal of Computer and Telecommunications Networking, v.50 n.10, p.1599-1615, 14 July 2006
J. J. Garcia-Luna-Aceves , Marc Mosko , Charles E. Perkins, A new approach to on-demand loop-free routing in ad hoc networks, Proceedings of the twenty-second annual symposium on Principles of distributed computing, p.53-62, July 13-16, 2003, Boston, Massachusetts
Hari Rangarajan , J. J. Garcia-Luna-Aceves, Using labeled paths for loop-free on-demand routing in ad hoc networks, Proceedings of the 5th ACM international symposium on Mobile ad hoc networking and computing, May 24-26, 2004, Roppongi Hills, Tokyo, Japan
Giovanni Resta , Paolo Santi, An analysis of the node spatial distribution of the random waypoint mobility model for ad hoc networks, Proceedings of the second ACM international workshop on Principles of mobile computing, October 30-31, 2002, Toulouse, France
Chao Gui , Prasant Mohapatra, SHORT: self-healing and optimizing routing techniques for mobile ad hoc networks, Proceedings of the 4th ACM international symposium on Mobile ad hoc networking & computing, June 01-03, 2003, Annapolis, Maryland, USA
Chao Gui , Prasant Mohapatra, A framework for self-healing and optimizing routing techniques for mobile ad hoc networks, Wireless Networks, v.14 n.1, p.29-46, January 2008
Christian Bettstetter , Giovanni Resta , Paolo Santi, The Node Distribution of the Random Waypoint Mobility Model for Wireless Ad Hoc Networks, IEEE Transactions on Mobile Computing, v.2 n.3, p.257-269, March
I. Kadayif , M. Kandemir , N. Vijaykrishnan , M. J. Irwin, An integer linear programming-based tool for wireless sensor networks, Journal of Parallel and Distributed Computing, v.65 n.3, p.247-260, March 2005
Satyabrata Chakrabarti , Amitabh Mishra, Quality of service in mobile ad hoc networks, The handbook of ad hoc wireless networks, CRC Press, Inc., Boca Raton, FL, | ad hoc networks;wireless mobile networks;source routing;on-demand routing;link-state information |
501469 | Bounding Cache-Related Preemption Delay for Real-Time Systems. | AbstractCache memory is used in almost all computer systems today to bridge the ever increasing speed gap between the processor and main memory. However, its use in multitasking computer systems introduces additional preemption delay due to the reloading of memory blocks that are replaced during preemption. This cache-related preemption delay poses a serious problem in real-time computing systems where predictability is of utmost importance. In this paper, we propose an enhanced technique for analyzing and thus bounding the cache-related preemption delay in fixed-priority preemptive scheduling focusing on instruction caching. The proposed technique improves upon previous techniques in two important ways. First, the technique takes into account the relationship between a preempted task and the set of tasks that execute during the preemption when calculating the cache-related preemption delay. Second, the technique considers the phasing of tasks to eliminate many infeasible task interactions. These two features are expressed as constraints of a linear programming problem whose solution gives a guaranteed upper bound on the cache-related preemption delay. This paper also compares the proposed technique with previous techniques using randomly generated task sets. The results show that the improvement on the worst-case response time prediction by the proposed technique over previous techniques ranges between 5 percent and percent depending on the cache refill time when the task set utilization is 0.6. The results also show that as the cache refill time increases, the improvement increases, which indicates that accurate prediction of cache-related preemption delay by the proposed technique becomes increasingly important if the current trend of widening speed gap between the processor and main memory continues. | Introduction
In a real-time computing system, tasks have timing constraints in terms of deadlines that
must be met for correct operation. To guarantee such timing constraints, extensive research
has been performed on schedulability analysis [1, 2, 3, 4, 5, 6]. In these studies, various
assumptions are usually made to simplify the analysis. One such simplifying assumption
is that the cost of task preemption is zero. This assumption, however, does not hold
in general in actual systems invalidating the result of the schedulability analysis. For
example, task preemption incurs costs to process interrupts [7, 8, 9, 10], to manipulate task
queues [7, 8, 10], and to actually perform context switches [8, 10]. Many of such direct
costs are addressed in a number of recent studies on schedulability analysis that focus on
practical aspects of task scheduling [7, 8, 9, 10].
In addition to the direct costs, task preemption introduces a form of indirect cost due to
cache memory, which is used in almost all computer systems today. In computer systems
with cache memory, when a task is preempted a large number of memory blocks 1 belonging
to the task are displaced from the cache memory between the time the task is preempted
and the time the task resumes execution. When the task resumes execution, it spends a
substantial amount of its execution time reloading the cache with the memory blocks that
were displaced during preemption. Such cache reloading greatly increases preemption delay,
which may invalidate the result of schedulability analysis that overlooks this indirect cost.
There are two ways to address the unpredictability resulting from the above cache-related
preemption delay. The first way is to use cache partitioning where cache memory is divided
into disjoint partitions and one or more partitions are dedicated to each real-time task [12,
13, 14, 15]. In the cache partitioning techniques, each task is allowed to access only its own
partition and thus cache-related preemption delay is avoided. However, cache partitioning
1 A block is the minimum unit of information that can be either present or not present in the cache-main
memory hierarchy [11]. We assume without loss of generality that memory references are made in block
units.
has a number of drawbacks. One drawback is that it requires modification of existing
hardware, software, or both. Another drawback is that it limits the amount of cache
memory that can be used by individual tasks.
The second way to address the unpredictability resulting from the cache-related preemption
delay is to take into account its effects in the schedulability analysis. In [16], Basumallick
and Nilsen propose one such technique. The technique uses the following schedulability
condition for a set of n tasks, which extends the well-known Liu and Layland's schedulability
condition [4]:

U + \sum_{i=1}^{n} \gamma_i / T_i \le n(2^{1/n} - 1)

In the condition, U is the total utilization of the task set and C_i and T_i are the worst case
execution time (WCET) and period of τ_i, respectively^2. The additional term γ_i is an upper
bound on the cache-related preemption cost that τ_i imposes on preempted tasks.
One drawback of this technique is that it suffers from a pessimistic utilization bound, which
approaches 0.693 for a large n [4]. Many task sets that have total utilization higher than
this bound can be successfully scheduled [3]. To rectify this problem, Busquets-Mataix et
al. in [17] propose a technique based on the response time approach [2, 6]. In this technique,
the γ_j terms are incorporated into the response time equation as follows:

R_i = C_i + \sum_{j \in hp(i)} \lceil R_i / T_j \rceil \times (C_j + \gamma_j)

where R_i is the worst case response time of τ_i and hp(i) the set of tasks whose priorities are
higher than that of τ_i. This recursive equation can be solved iteratively and the resulting
worst case response time R_i of task τ_i is compared against its deadline D_i to determine the
schedulability.
2 These notations will be used throughout this paper along with D_i, which denotes the deadline of τ_i, where
D_i ≤ T_i. We assume without loss of generality that τ_i has higher priority than τ_j if i < j.
Fig. 1. Overestimation of cache-related preemption delay: (a) cache mapping of the three tasks
from main memory to cache memory; (b) worst case preemption scenario during τ_3's response time.
The γ_i used in both techniques is computed by multiplying the number of cache
blocks used by task τ_i and the time needed to refill a cache block. This estimation is based
on a pessimistic assumption that each cache block used by τ_i replaces from the cache a
memory block that is needed by a preempted task. This pessimistic assumption leads to
overestimation of the cache-related preemption delay since it is possible that the replaced
memory block is one that is no longer needed or one that will be replaced without being
re-referenced even when there were no preemptions.
The above overestimation is addressed by Lee et al. in [18]. They use the concept of useful
cache blocks in computing the cache-related preemption delay where a useful cache block
is defined as a cache block that contains a memory block that may be re-referenced before
being replaced by another memory block. Their technique consists of two steps. The first
step analyzes each task to estimate the maximum number of useful cache blocks in the
task. Based on the results of the first step, the second step computes an upper bound on
the cache-related preemption delay using a linear programming technique. As in Busquets-
Mataix et al.'s technique, this upper bound is incorporated into the response time equation
to compute the worst case response time.
Although Lee et al.'s technique is more accurate than the techniques that do not consider
the usefulness of cache blocks, it is still subject to a number of overestimation sources. We
explain these sources using the example in Fig. 1. In the example, there are three tasks,
τ_1, τ_2, and τ_3, where task τ_1 has the highest priority and task τ_3 the lowest priority. Suppose that the
main memory regions used by the three tasks are mapped to the cache as in Fig. 1-(a).
Also, suppose that the maximum number of useful cache blocks of τ_2 and τ_3 is 5 and 2,
respectively, and that the time needed to refill a cache block is a single cycle. In this setting,
the linear programming method used in Lee et al.'s technique would give a solution where
τ_2 is preempted three times by τ_1, and τ_3 is preempted twice by τ_2 during τ_3's response
time R_3, resulting in a preemption delay of 3 × 5 + 2 × 2 = 19 cycles.
The above solution, however, suffers from two types of overestimation. First, when a task
is preempted, not all of its useful cache blocks are replaced from the cache. For example,
when τ_2 is preempted by τ_1, only a small portion of τ_2's useful cache blocks can be replaced
from the cache corresponding to those that conflict with cache blocks used by τ_1, i.e., the
cache blocks framed by thick borders in Fig. 1-(a). Second, the worst case preemption
scenario given by the solution may not be feasible in the actual execution. For example,
τ_2 cannot be preempted three times by τ_1 since τ_2 has a WCET of only 20 and thus, the
first invocation of τ_2 can certainly be completed before the second invocation of τ_1.
To rectify these problems, this paper proposes a novel technique that incorporates the
following two important features. First, the proposed technique takes into account the relationship
between a preempted task and the set of tasks that execute during the preemption
when calculating the maximum number of useful cache blocks that should be reloaded after
the preemption. Second, the technique considers phasing of tasks to eliminate many
infeasible task interactions. These two features are expressed as constraints of the linear
programming problem whose solution bounds the cache-related preemption delay. In this
paper, we focus on the cache-related preemption delay resulting from instruction caching.
Analysis of data cache-related preemption delay is an equally important research issue and
can be handled by the method explained in [18].
This paper also compares the proposed technique with previous techniques. The results
show that the proposed technique gives up to 60% tighter prediction of the worst case response
time than the previous techniques. The results also show that as the cache refill
time increases, the gap between the worst case response time prediction made by the proposed
technique and those by previous techniques increases. Finally, the results show that
as the cache refill time increases, the cache-related preemption delay takes a proportionally
large percentage in the worst case response time, which indicates that accurate prediction
of cache-related preemption delay becomes increasingly important if the current trend of
widening speed gap between the processor and main memory continues [11].
The rest of the paper is organized as follows: The next section describes in detail Lee
et al.'s technique, which serves as the basis for our proposed technique. In Section III,
we describe the overall approach of the proposed technique along with constraints that
are needed to incorporate scenario-sensitive preemption cost. More advanced constraints
that take into account task phasing are discussed in Section IV. In Section V, we discuss
an optimization that aims to reduce the amount of computation needed in the proposed
technique. Section VI presents the results of our experiments to assess the effectiveness of
the proposed technique. Finally, we conclude this paper in Section VII.
II. Linear Programming-based Analysis of
Cache-related Preemption Delay
In this section, we describe in detail Lee et al.'s linear programming-based technique for
analyzing cache-related preemption delay.
In the technique, the response time equation is given as follows:

R_i = C_i + \sum_{j=1}^{i-1} \lceil R_i / T_j \rceil C_j + PC_i(R_i)

where the PC_i(R_i) term is a guaranteed upper bound on the cache-related preemption
delay of τ_i during a given response time R_i. The term includes not only the delay due to
τ_i's preemptions but also the delay due to the preemptions of higher priority tasks. The
response time equation can be solved iteratively as follows: starting from an initial value R_i^0,

R_i^{k+1} = C_i + \sum_{j=1}^{i-1} \lceil R_i^k / T_j \rceil C_j + PC_i(R_i^k)

This iterative procedure terminates when R_i^{m+1} = R_i^m and this converged R_i
value is compared against τ_i's deadline to determine the schedulability of τ_i.
i ) at each iteration, the technique uses a two step approach. In the first
step, each task is analyzed to estimate the maximum number of useful cache blocks that the
task may have during its execution. The estimation uses a data flow analysis technique [19]
that generates the following two types of information for each execution point p and for
each cache block c:
1. the set of memory blocks that may reside in the cache block c at the execution point
p and
2. the set of memory blocks that may be the first reference to the cache block c after
the execution point p.
The cache block c is defined to be useful at point p if the above two sets have any common
element, which means that there may be an execution where the memory block in cache
block c at p is re-referenced. The total number of useful cache blocks at p determines the
additional cache reloading time that is incurred if the task is preempted at p. Obviously,
the worst case preemption occurs when the task is preempted at the execution point with
the maximum total number of useful cache blocks and this case gives the worst case cache-related
preemption cost for the task. The final result of the first step is a table called the
preemption cost table that gives for each task τ_i the worst case preemption cost f_i^3.
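Assuming the two sets of information above have already been produced by the data flow analysis, the usefulness test and the resulting worst case preemption cost f_i can be sketched as follows (illustrative Python written by us; reaching and first_ref stand for the two sets described in items 1 and 2).

def useful_cache_blocks(point, reaching, first_ref, cache_blocks):
    # A cache block c is useful at execution point p if some memory block that
    # may reside in c at p may also be the first reference to c after p.
    return {c for c in cache_blocks if reaching[point][c] & first_ref[point][c]}

def worst_case_preemption_cost(points, reaching, first_ref, cache_blocks, refill):
    # f_i: the largest total number of useful cache blocks over all execution
    # points of the task, multiplied by the cache refill time.
    return refill * max(
        len(useful_cache_blocks(p, reaching, first_ref, cache_blocks))
        for p in points)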
The second step uses the preemption cost table and a linear programming technique to
derive an upper bound on PC i (R i ) that is required at each iteration of the response time
calculation. This step first defines g_j as the number of preemptions of τ_j during R_i. If the g_j
values that give the worst case preemption scenario among tasks are known, the worst case
cache-related preemption delay of τ_i during interval R_i, i.e., PC_i(R_i), can be calculated as
follows:

PC_i(R_i) = \sum_{j=2}^{i} g_j \times f_j

This total cache-related preemption delay of τ_i includes all the delay due to the preemptions
of τ_i and those of higher priority tasks. Note that the highest priority task τ_1 is not included
in the summation since it can never be preempted.
In general, however, the exact g_j values that give the worst case preemption delay of τ_i
cannot be determined. Thus, for the analysis to be safe, a scenario that is guaranteed to
be worse than any actual preemption scenario should be assumed. Such a conservative
scenario can be derived from the following two constraints that any valid g j combination
should satisfy.
3 The technique defines a more general preemption cost f_{i,j}, which is the cost task τ_i pays in the worst
case for its j-th preemption over the (j−1)-th preemption. However, since in most cases the execution
point with the maximum total number of useful cache blocks is contained within a loop nest, the generalized
preemption cost has little effect because f_{i,j} equals f_i for all j up to the product of the iteration
bounds of all the containing loops.
First, for each j (2 ≤ j ≤ i), the total number of preemptions of τ_2, …, τ_j during R_i cannot be larger than the
total number of invocations of τ_1, …, τ_{j−1} during R_i:

\sum_{k=2}^{j} g_k \le \sum_{k=1}^{j-1} \lceil R_i / T_k \rceil
Second, the total number of preemptions of τ_j during R_i cannot be larger than the number
of invocations of τ_j during R_i multiplied by the maximum number of times that any single
invocation can be preempted by higher priority tasks:

g_j \le \lceil R_i / T_j \rceil \times \sum_{k=1}^{j-1} \lceil R_j / T_k \rceil

Note that since the technique computes the worst case response times from the highest
priority task to the lowest priority task, the worst case response times of τ_1, …, τ_{i−1},
that is, R_1, …, R_{i−1}, are available when R_i is computed.
To summarize, in Lee et al.'s technique, the problem of computing a safe upper bound on
PC_i(R_i) is formulated as a linear programming problem in which the objective function
value \sum_{j=2}^{i} g_j \times f_j is maximized while satisfying the above two constraints.
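The resulting linear program is small enough to be handed to any off-the-shelf solver (the experiments in Section VI use lp_solve). The following sketch, written by us with scipy.optimize.linprog purely for illustration, shows the shape of the formulation; relaxing the integer g_j's to real values only loosens the bound, so the result remains a safe upper bound.

from scipy.optimize import linprog

def pc_bound_lee(f, total_invocations, per_task_bound):
    # Maximize sum_j f_j * g_j subject to
    #   sum_j g_j <= total_invocations            (first constraint, aggregate form)
    #   0 <= g_j <= per_task_bound[j]             (second constraint)
    # f and per_task_bound are indexed by the preemptable tasks tau_2..tau_i.
    n = len(f)
    c = [-fj for fj in f]                  # linprog minimizes, so negate
    A_ub = [[1.0] * n]
    b_ub = [total_invocations]
    bounds = [(0, per_task_bound[j]) for j in range(n)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
    return -res.fun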
As a side note, linear programming is increasingly being used in the real-time research area
because of its strong theoretical ground. For example, it is used to bound the worst case
execution time of a task [20]; to bound the interference of program execution by DMA
operation [21]; and to bound the number of retries in lock-free real-time systems [22].
III. Overall Approach
One problem with Lee et al.'s technique is that the preemption cost of a task is fixed
regardless of which tasks execute during the task's preemption. This may result in severe
overestimation of cache-related preemption delay when only a few cache blocks are shared
among tasks. For example, if the cache blocks used by a preempted task and those used
by the tasks that execute during the preemption are disjoint, the preemption cost of this
particular preemption would be zero. Nevertheless, Lee et al.'s technique assumes that
the preemption cost is still the time needed to reload all the useful cache blocks of the
preempted task.
To address this problem, the technique proposed in this paper takes into account the relationship
between a preempted task and the set of tasks that execute during the preemption
in computing the preemption cost. For this purpose, the proposed technique categorizes
preemptions of a task into a number of disjoint groups according to which tasks execute
during preemption. The number of such disjoint groups is 2^k − 1 when there are k higher
priority tasks. For example, if there are three higher priority tasks τ_1, τ_2, and τ_3 for a
lower priority task τ_4, the number of possible preemption scenarios of τ_4 is 7 (= 2^3 − 1),
corresponding to {τ_1}, {τ_2}, {τ_3}, {τ_1, τ_2}, {τ_1, τ_3}, {τ_2, τ_3}, and {τ_1, τ_2, τ_3}, according to
the set of tasks that execute during τ_4's preemption. For task τ_j, we denote by P_{j−1} the set
of all of its possible preemption scenarios by the higher priority tasks τ_1, …, τ_{j−1}. Note
that the set P_{j−1} is equal to the power set [23] of the set {τ_1, …, τ_{j−1}} excluding the empty
set since for task τ_j to be preempted, at least one higher priority task must be involved. In
addition, we denote by p_j(H) the preemption of task τ_j during which the tasks in set H
execute. For example, p_4({τ_1, τ_3}) denotes the preemption of τ_4 during which tasks τ_1 and τ_3 execute.
The preemption costs of tasks for different preemption scenarios are collected in an
augmented preemption cost table for this example.
Fig. 2. Calculation of scenario-sensitive preemption cost: useful cache blocks (U) of τ_4 at each
execution point, and the cache blocks used by the higher priority tasks τ_1, τ_2, and τ_3.
To compute f j (H), the preemption cost of scenario p j (H), the following three steps are
taken based on the information about the set of useful cache blocks, which can be obtained
through the analysis explained in [18]. First, for each execution point in task τ_j, we compute
the intersection of the set of useful cache blocks of τ_j at the execution point and the set
of cache blocks used by tasks in H. Second, we determine the execution point in τ_j with
the largest number of elements (i.e., useful cache blocks) in the intersection. Finally, we compute
the (worst case) preemption cost of this preemption scenario by multiplying the number of
useful cache blocks in that intersection and the cache refill time.
As an example, consider Fig. 2 that shows the set of useful cache blocks (denoted by U's in
the figure) for all the execution points of a lower priority task τ_4 and the sets of cache blocks
used by higher priority tasks τ_1, τ_2, and τ_3. In this example, the worst case preemption cost
of τ_4 for the case where tasks τ_1 and τ_3 execute during preemption (i.e., f_4({τ_1, τ_3})) is the
number of conflicting useful cache blocks multiplied by the cache refill time. This preemption cost is determined by the execution
point shaded in the figure, which has the largest number of useful cache blocks that conflict
with the cache blocks used by τ_1 and τ_3.
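The three steps just described amount to the following computation (illustrative Python written by us; useful_at maps each execution point of τ_j to its set of useful cache blocks, and used_by maps each task to the set of cache blocks it uses).

def scenario_cost(useful_at, used_by, H, refill):
    # f_j(H): worst case cost of a preemption of tau_j during which exactly the
    # tasks in H execute.  Only useful cache blocks of tau_j that conflict with
    # cache blocks used by some task in H need to be reloaded.
    conflicting = set().union(*(used_by[t] for t in H))
    worst = max(len(useful & conflicting) for useful in useful_at.values())
    return worst * refill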
Since there are 2^k − 1 preemption scenarios for k higher priority tasks, we need
to compute the same number of preemption costs in the worst case. This may require an
enormous amount of computation when k is large. The computational requirement can be
reduced substantially by noting that we do not need to consider the higher priority tasks
whose cache blocks do not conflict with the cache blocks used by the task for which the
preemption cost is computed. For example, in Fig. 2 since none of the cache blocks used by
τ_2 conflict with those used by τ_4, we do not need to consider the preemption scenarios that
include τ_2 when computing the preemption costs of τ_4. Instead, the preemption costs for
scenarios that include τ_2 can be derived from those that do not include τ_2 by noting that
f_4(H ∪ {τ_2}) = f_4(H).
A. Problem formulation
To formulate the problem of computing a safe upper bound of PC i (R i ) as a linear programming
problem based on the augmented preemption costs f_j(H)'s, we define a new variable
g_j(H) that denotes the number of preemptions of τ_j by task set H, that is, the number of
preemptions of scenario p_j(H). The corresponding objective function is

maximize \sum_{j=2}^{i} \sum_{H \in P_{j-1}} f_j(H) \times g_j(H)

This objective function states that the cache-related preemption delay of τ_i during R_i is the
sum of the delay due to preemptions of τ_i and those of higher priority tasks during R_i, where the
delay due to preemptions of a task is defined as the sum of the counts of mutually disjoint
preemption scenarios of that task multiplied by the corresponding preemption costs.
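The scenario set P_{j−1} and the coefficients of this objective function can be enumerated directly, as in the following illustrative Python fragment (ours; scenario_cost stands for the f_j(H) computation sketched in the previous subsection).

from itertools import combinations

def preemption_scenarios(higher_priority_tasks):
    # P_{j-1}: all non-empty subsets H of the tasks with priority higher than tau_j.
    tasks = list(higher_priority_tasks)
    for r in range(1, len(tasks) + 1):
        for H in combinations(tasks, r):
            yield frozenset(H)

def objective_terms(i, scenario_cost):
    # One LP variable g_j(H) with coefficient f_j(H) for every task tau_j
    # (2 <= j <= i) and every preemption scenario H of its higher priority tasks.
    terms = {}
    for j in range(2, i + 1):
        for H in preemption_scenarios(range(1, j)):
            terms[(j, H)] = scenario_cost(j, H)
    return terms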
As in Lee et al.'s technique, we cannot determine the exact g j (H) values that give the worst
case preemption delay and thus we should use various constraints on the g j (H)'s to bound
the objective function value. In the next subsection, we give two such constraints that are
extensions of Lee et al.'s original constraints. Then, in Section C, we discuss an advanced
constraint that relates the invocations of a higher priority task and the preemptions of
lower priority tasks where the higher priority task is involved. In Section IV, we give
more advanced constraints that consider phasing of tasks to eliminate many infeasible task
preemption scenarios. Finally, in Section V, we discuss an optimization that reduces the
computational requirement of the proposed technique.
B. Extensions of Lee et al.'s constraints
This subsection describes extensions (based on scenario-sensitive preemption cost) of the
two constraints used in Lee et al.'s technique. The extended constraints are given in terms
of g j (H)'s and, as we will see later, subsume Lee et al.'s original two constraints.
The first constraint of Lee et al.'s technique, which states that the total number of preemptions
of τ_2, …, τ_j during R_i cannot be larger than the total number of invocations
of τ_1, …, τ_{j−1} during R_i, can be straightforwardly extended using the g_j(H)'s as follows:

\sum_{k=2}^{j} \sum_{H \in P_{k-1}} g_k(H) \le \sum_{k=1}^{j-1} \lceil R_i / T_k \rceil,   2 \le j \le i                    (3)

Note that since \sum_{H \in P_{k-1}} g_k(H) is equal to g_k, the above constraint is equivalent to the first
constraint of Lee et al.'s technique.
Similarly, the second constraint, which originally states that the total number of preemptions
of τ_j during R_i cannot be larger than the number of invocations of τ_j during R_i multiplied
by the maximum number of times that any single τ_j invocation can be preempted
by higher priority tasks, can be extended as follows:

\sum_{H \in P_{j-1}, \tau_k \in H} g_j(H) \le \lceil R_i / T_j \rceil \times \lceil R_j / T_k \rceil                    (4)

This constraint states that the number of preemptions of τ_j during which a higher priority
task τ_k executes is bounded by the number of invocations of τ_j multiplied by the maximum
number of times that any single τ_j invocation can be preempted by τ_k. To show that this
constraint subsumes the second constraint of Lee et al.'s technique, we sum up both sides
of the constraints for k = 1, …, j−1:

\sum_{k=1}^{j-1} \sum_{H \in P_{j-1}, \tau_k \in H} g_j(H) \le \lceil R_i / T_j \rceil \times \sum_{k=1}^{j-1} \lceil R_j / T_k \rceil

Since \sum_{k=1}^{j-1} \sum_{H \in P_{j-1}, \tau_k \in H} g_j(H) \ge \sum_{H \in P_{j-1}} g_j(H) and \sum_{H \in P_{j-1}} g_j(H) = g_j, we have

g_j \le \lceil R_i / T_j \rceil \times \sum_{k=1}^{j-1} \lceil R_j / T_k \rceil

This shows that the new constraint subsumes the second constraint of Lee et al.'s technique.
In addition, since the number of preemptions of τ_j during which a higher priority task τ_k
executes is bounded by the number of τ_k invocations, we have

\sum_{H \in P_{j-1}, \tau_k \in H} g_j(H) \le \lceil R_i / T_k \rceil                    (5)

By combining Constraints (4) and (5), we have

\sum_{H \in P_{j-1}, \tau_k \in H} g_j(H) \le \min(\lceil R_i / T_j \rceil \times \lceil R_j / T_k \rceil, \lceil R_i / T_k \rceil)                    (6)
Since the new constraints described in this subsection are either equivalent to or more stringent
than the original two constraints of Lee et al.'s technique and f_j(H) is always less than
or equal to f_j for all H in P_{j−1}, the resulting objective function value
is always smaller than or equal to the objective function value
of Lee et al.'s technique, yielding a tighter prediction of cache-related preemption delay.

Fig. 3. Example task set: (a) cache mapping, (b) preemption cost table, and (c) task invocations
during τ_4's response time R_4.
As an example, consider the task set in Fig. 3 that consists of four tasks τ_1, τ_2, τ_3, and
τ_4, where τ_1 is the highest priority task and τ_4 the lowest one. Assume that the tasks are
mapped to cache memory as shown in Fig. 3-(a), where the useful cache blocks of tasks τ_1,
τ_2, τ_3, and τ_4 are denoted by numbers 1, 2, 3, and 4, respectively. The cache mapping and
distribution of useful cache blocks of the tasks give the preemption cost table in Fig. 3-(b)
assuming that the cache refill time is a single cycle^4.
Assume that we are interested in computing the cache-related preemption delay during the
response time of task τ_4, denoted by R_4 in Fig. 3-(c). Also assume that during R_4, there
4 In this example, to simplify the explanation, we assume that the set of useful cache blocks of each task
shown in Fig. 3-(a) includes the set of useful cache blocks of any other execution point in the task. This
assumption does not hold in general as the example in Fig. 2 illustrates.
are four invocations of τ_1, three invocations of τ_2, and two invocations of τ_3, whose response
times are denoted in the figure by R 1 , R 2 , and R 3 , respectively. Note that these response
times are available when we compute R 4 since we calculate the response times from the
highest priority task to the lowest priority task.
From the first constraint, i.e., Constraint (3), we obtain three inequalities bounding the sums of
the g_j(H)'s by the numbers of higher priority invocations during R_4, and from the second constraint,
i.e., Constraint (6), we obtain one inequality per pair of a preempted task and a higher priority
task, with bounds of the form min(⌈R_4/T_j⌉ × ⌈R_j/T_k⌉, ⌈R_4/T_k⌉). The maximum objective function
value that satisfies these two sets of constraints, and the values of the g_j(H)'s that achieve it,
are then obtained with an LP solver.
For comparison purposes, if Lee et al.'s technique were used instead, the maximum objective
function value would be 54, which is significantly larger than that given by the proposed
technique. This maximum objective function value is derived from
preemption costs f are determined from
the number of useful cache blocks of the tasks shown in Fig. 3-(a). Note that the solution
corresponds to the case where all the nine invocations of tasks - 1 , - 2 , and - 3 preempt task
which has the largest preemption cost. The constraints used areX
d
R 4
R 4
e \Theta d
R 4
e \Theta (d R 3
e \Theta (d R 4
C. Advanced constraints on the relationship between task invocations and preemption
Although the new constraints are more stringent than those of Lee et al.'s technique, they
cannot eliminate all infeasible preemption scenarios. In fact, even the combination of the
g_j(H) values that gives the maximum objective function value in our previous example
is infeasible since it requires at least eight invocations of τ_1 whereas there are only four
invocations of τ_1 in the example (cf. Fig. 3-(c)). Among the eight required invocations,
four invocations are from g_3({τ_1}) = 4, meaning that there are four preemptions of τ_3 during
which only τ_1 executes. The other four required invocations are from g_4({τ_1}) = 4, meaning
that there are four preemptions of τ_4 during which only τ_1 executes.
The reason why our earlier constraints cannot eliminate the above infeasible preemption
scenario is that they cannot relate the invocations of a higher priority task and the preemptions
of lower priority tasks where that higher priority task is involved. For the example in
Fig. 3, it can be trivially shown that the sum of the number of preemptions of τ_2, τ_3, and
τ_4 during which only τ_1 executes is bounded by the number of τ_1 invocations, giving the
following constraint,

g_2({τ_1}) + g_3({τ_1}) + g_4({τ_1}) ≤ ⌈R_4/T_1⌉,

which eliminates the infeasible preemption scenario.
of preemptions of lower priority tasks - during which a higher priority task
executes by the number of invocations of that higher priority task - j , which can be
expressed by the following constraint:
g k (H) - d R i
This constraint, when cast into our example in Fig. 3, is translated into the following
constraint when - 1 is the higher priority task involved:
This particular constraint, and Constraint (7) in general, however, is not safe meaning
that a valid preemption scenario may not satisfy it because a single invocation of a higher
priority task can be involved in more than one preemption of lower priority tasks, and thus
can be counted multiple times in the summation on the left-hand side of Constraint (7). For
example, when - 3 is preempted by - 2 and - 2 is, in turn, preempted by - 1 , the invocation of
doubly counted, first in g 2 (f- 1 g) and second in g 3 (f- In general, an invocation
of - k can be doubly counted in g j (H) and g.
We capture this observation by a symmetric relation [23] which we call the DC (standing for
doubly counted) relation denoted by dc
$. This relation associates two preemption scenarios
g. For our example
with four tasks - 1 , - 2 , - 3 , and - 4 , all the possible pairs of preemption scenarios related by
dc
are as follows:
Using this relation, safe constraints can be derived as follows: Consider a combination of
preemption scenarios where a higher priority task is involved. If no pair of preemption
scenarios in the combination is related by dc
$, the sum of the number of the preemptions in
the combination can be bounded by the number of invocations of the higher priority task.
For example, the following constraint that eliminated the infeasible preemption scenario,
is safe since no pairs of the preemption scenarios that appear on the left-hand side are
related by dc
$.
On the other hand, the following constraint is not safe because it has both p 3 (f-
are related by dc
$.
e:
The set of all possible safe constraints that can be derived by the above rule is as follows
when the higher priority task involved is
R 4
R 4
e:
(a) Maximum number of
preemptions of by preemptions of by
(b) Minimum number of
R
R
Fig. 4. Example of infeasible task phasing.
The other constraints for the cases where the higher priority task involved is - 2 or - 3 can
be derived similarly.
IV. Advanced Constraints on Task Phasing
Among the two problems with Lee et al.'s technique explained in the introduction, the
first problem was addressed in the previous section by introducing a scenario-sensitive
preemption cost. This section addresses the second problem, namely, the problem that
the technique does not consider phasing among tasks and thus may allow many infeasible
preemption scenarios. For example, the technique assumes that the number of preemptions
of a lower priority task where a higher priority task is involved can potentially range from
zero to the number of invocations of the higher priority task.
However, as Fig. 4-(a) illustrates, some of the invocations of a higher priority task (denoted
by τ_j in the figure) cannot be involved in any preemption of a lower priority task (denoted
by τ_k in the figure) even when we assume the worst case response time (denoted by R_k
in the figure) for the lower priority task. Similarly, as Fig. 4-(b) illustrates, some of the
invocations of the higher priority task will inevitably be involved in preemptions of the lower
priority task even when we assume the best case response time B_k for the lower priority
the framework developed in the previous section.
First, we define the following four numbers between two tasks τ_j and τ_k (j < k) whose
priorities are higher than that of τ_i for which the worst case response time R_i is being
computed: M_jk, N_jk, M′_jk, and N′_jk. Let I be the set of all intervals of length R_i in the
hyperperiod formed by τ_j and τ_k, that is, in LCM(T_j, T_k), the least common multiple of
T_j and T_k. The first number M_jk is the maximum number of preemptions of the lower priority
task τ_k during which the higher priority task τ_j executes over all intervals in I. Similarly,
N_jk is the minimum number of preemptions of the lower priority task τ_k during which the
higher priority task τ_j executes over the same set of intervals. On the other hand, M′_jk is
the maximum number of times that instances of the lower priority task τ_k are overlapped
with an instance of the higher priority task τ_j. More technically, it is the maximum number
of level-k busy periods [24] that have both τ_j and τ_k over all intervals in I. Finally, N′_jk is
the minimum number of times that instances of the lower priority task τ_k are overlapped
with an instance of the higher priority task τ_j, i.e., the minimum number of level-k busy
periods that have both τ_j and τ_k over all intervals in I.
Assume that the worst case response times of τ_j and τ_k are R_j and R_k, respectively, both
of which are available when we compute R_i. Likewise, assume that the best case response
times of τ_j and τ_k are B_j and B_k, respectively, for which the best case execution times of
τ_j and τ_k can be used. The above four numbers can then be computed in closed form from
T_j, T_k, R_j, R_k, B_j, B_k, and the interval length R_i.
The derivation of the above is lengthy and is not presented here. Interested readers are
referred to an extended version of this paper [25].
The first two numbers, M_jk and N_jk, can be used to bound the number of preemptions of
τ_k during which τ_j executes. The other two numbers, M′_jk and N′_jk, can be used to bound
the number of preemptions of certain types. First, M′_jk can be used to bound the number of
preemptions in which both τ_j and τ_k execute (of course, without being multiply counted
using the technique explained in the previous section). On the other hand, N′_jk can be used
to bound the number of preemptions in which either only τ_j or only τ_k executes. For example,
the number of preemptions in which τ_j executes but τ_k does not is bounded by ⌈R_i/T_j⌉ − N′_jk.
Likewise, the number of preemptions in which τ_k executes but τ_j does not is bounded by
⌈R_i/T_k⌉ − N′_jk.
In the following, we give examples of constraints that use M′_jk and N′_jk. Assume that there
are four tasks τ_1, τ_2, τ_3, and τ_4 and that τ_1 and τ_2 correspond to τ_j and τ_k in the above
constraints, respectively.
In our example, there are ten possible preemption scenarios of τ_3 and τ_4.
Among them, there are three preemption scenarios during which both τ_1 and τ_2 execute:
p_3({τ_1, τ_2}), p_4({τ_1, τ_2}), and p_4({τ_1, τ_2, τ_3}). Since p_3({τ_1, τ_2}) and p_4({τ_1, τ_2, τ_3}) are related
by dc~ and thus are subject to being multiply counted, only one of them can participate
in the summation of the number of preemptions. This restriction leads to the following two
inequalities:

g_3({τ_1, τ_2}) + g_4({τ_1, τ_2}) ≤ M′_12
g_4({τ_1, τ_2}) + g_4({τ_1, τ_2, τ_3}) ≤ M′_12

Similarly, there are three preemption scenarios during which τ_1 executes but τ_2 does not:
p_3({τ_1}), p_4({τ_1}), and p_4({τ_1, τ_3}). Since p_3({τ_1}) and p_4({τ_1, τ_3}) are related by dc~, we
have the following two inequalities:

g_3({τ_1}) + g_4({τ_1}) ≤ ⌈R_4/T_1⌉ − N′_12
g_4({τ_1}) + g_4({τ_1, τ_3}) ≤ ⌈R_4/T_1⌉ − N′_12

Finally, there are three preemption scenarios during which τ_2 executes but τ_1 does not:
p_3({τ_2}), p_4({τ_2}), and p_4({τ_2, τ_3}). Since p_3({τ_2}) and p_4({τ_2, τ_3}) are related by dc~, we
have the following two inequalities:

g_3({τ_2}) + g_4({τ_2}) ≤ ⌈R_4/T_2⌉ − N′_12
g_4({τ_2}) + g_4({τ_2, τ_3}) ≤ ⌈R_4/T_2⌉ − N′_12
V. Optimization Based on Task Set Decomposition
One potential problem of the proposed technique is that it requires a large amount of
computation when there are a large number of tasks since the number of variables used
is O(2^n) where n is the number of tasks in the task set. This section discusses a simple
optimization based on task set decomposition that can drastically reduce the amount of
computation required.
Fig. 5. Example of task decomposition.
Consider the example in Fig. 5 that shows the cache blocks used by four tasks τ_1, τ_2, τ_3,
and τ_4. In the figure, we notice that although cache blocks are shared between τ_1 and τ_2
and also between τ_3 and τ_4, there is no overlap between the cache blocks used by τ_1 and τ_2
and those used by τ_3 and τ_4. This means that neither τ_1 nor τ_2 affects the cache-related
preemption delay of either τ_3 or τ_4, and vice versa. Based on this observation, we can
decompose a given task set into a collection of subsets in such a way that no two tasks
from two different subsets share a cache block between them. Then the tasks in each subset
can be analyzed independently of tasks in other subsets using the constraints given in the
previous two sections.
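Computing this decomposition amounts to finding the connected components of the "shares a cache block" relation, as in the following illustrative Python sketch (ours; used_by maps each task to the set of cache blocks it uses).

def decompose(tasks, used_by):
    # Group tasks so that tasks in different groups share no cache block;
    # each group can then be analyzed independently of the others.
    groups = []                                    # list of (task set, block set)
    for t in tasks:
        merged_tasks, merged_blocks = {t}, set(used_by[t])
        remaining = []
        for g_tasks, g_blocks in groups:
            if g_blocks & merged_blocks:           # shares a cache block
                merged_tasks |= g_tasks
                merged_blocks |= g_blocks
            else:
                remaining.append((g_tasks, g_blocks))
        groups = remaining + [(merged_tasks, merged_blocks)]
    return [g_tasks for g_tasks, _ in groups]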
For our example in Fig. 5, the given task set is decomposed into two subsets: {τ_1, τ_2} and
{τ_3, τ_4}. When we calculate the worst case response time of the lowest priority task τ_4 using
the iterative procedure explained in Section II, the tasks in one subset can be analyzed
independently of the tasks in the other subset and the two results can be combined as
follows:

R_4^{k+1} = C_4 + \sum_{j=1}^{3} \lceil R_4^k / T_j \rceil C_j + PC^1_4(R_4^k) + PC^2_4(R_4^k)

where PC^1_4(R_4^k) is the cache-related preemption delay of τ_4 due to the interactions between
τ_1 and τ_2 through their shared cache blocks and is computed by maximizing the part of the
objective function that involves only τ_1 and τ_2, with constraints involving only τ_1 and τ_2.
Similarly, PC^2_4(R_4^k) is the cache-related preemption delay of τ_4 due to the interactions
between τ_3 and τ_4 and is computed by maximizing the part of the objective function that
involves only τ_3 and τ_4, with the constraints that involve only τ_3 and τ_4.
To maximize the benefit of the optimization explained above, the number of subsets that
can be analyzed independently should be large. An interesting topic for future research is
to devise a scheme that allocates main memory to tasks so that the resulting cache mapping
gives a large number of such subsets.
VI. Experimental Results
In this section, we compare the worst case response time prediction by the proposed technique
with those by previous techniques using a sample task set. Our target machine is
an IDT7RS383 board that has a 20 MHz R3000 RISC CPU, R3010 FPA (Floating Point
Accelerator), and an instruction cache and a data cache of 16 Kbytes each. Both caches are
direct mapped and have block sizes of 4 bytes. SRAM (static RAM) is used as the target
machine's main memory and the cache refill time is 4 cycles.
TABLE I. Task set specification (columns: Task, Period, WCET; unit: cycles).
In our experiment, we used a sample task set whose specification is given in TABLE I.
In the table, the first column lists the tasks in the task set. Four tasks were used in our
experiments: FFT, LUD, LMS, and FIR. The FFT task performs the FFT and inverse FFT
operations on an array of 8 floating point numbers using the Cooley-Tukey algorithm [26].
LUD solves simultaneous linear equations by Doolittle's method of LU decomposition
[27], and FIR implements a 35 point Finite Impulse Response (FIR) filter [28] on a
generated signal. Finally, LMS is a 21 point adaptive FIR filter where the filter coefficients
are updated on each input signal [28].
The table also gives the period and WCET of each task in the second and third columns,
respectively. Since our target machine uses SRAM as its main memory, its cache refill time
(4 cycles) is much smaller than those of most current computer systems, which range from
8 cycles to more than 100 cycles when DRAM is used as main memory [11]. To obtain
the WCET of each task for more realistic cache refill times, we divide the WCET into
two components. The first component is the execution time of the task when all memory
references are cache hits, and is independent of the cache refill time. It was measured from
our target machine by executing the task with its code and data pre-loaded in the cache.
The second component is the time needed to service cache misses that occur during the
task's execution and is dependent on the cache refill time. This component is computed
by multiplying the total number of cache misses and the cache refill time t ref ill . In our
experiment, the total number of cache misses was obtained by the following procedure:
1. Two different execution times were measured for each task: one with its code and data
pre-loaded in the cache and the other without such pre-loading, which are denoted by
T_1 and T_2, respectively.
2. By dividing the difference between T 1 and T 2 by the 4 cycle cache refill time of the
target machine, we computed the total number of cache misses during the task's
execution.
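The procedure above amounts to the following small computation (illustrative Python written by us; the constant 4 is the cache refill time of the target board).

def wcet_for_refill_time(T1, T2, t_refill, board_refill_time=4):
    # T1: execution time with code and data pre-loaded in the cache (all hits).
    # T2: execution time without pre-loading.
    misses = (T2 - T1) // board_refill_time      # total number of cache misses
    return T1 + misses * t_refill                # WCET for a hypothetical refill time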
We used three different cache mappings for the code used by the four tasks, as shown in
Fig. 6 (cache mapping 1, cache mapping 2, and cache mapping 3).
Fig. 6. Three different cache mappings of tasks.
In the first mapping, the code used by each task is mapped to the same cache
region. On the other hand, in the second mapping, the cache regions used by the tasks
are overlapped with each other by about 70%. Finally, in the third mapping, the code
used by each task is mapped into a disjoint region in the cache. We speculate that these
three mappings represent reasonably well the spectrum of possible overlap among the cache
regions used by tasks.
TABLE II gives the preemption cost tables for the three mappings. Note that the preemption
costs of the tasks decrease as the overlapping cache regions decrease. This is because
less useful cache blocks are displaced during preemption, and eventually when the cache
regions are disjoint, all the preemption costs are zero.
We used a public-domain linear programming tool called lp solve by Michel Berkelaar
(URL: ftp://ftp.es.ele.tue.nl/pub/lp solve) to solve the linear programming problem
posed by the proposed technique. The total number of constraints for our task set is
and it took less than 3 minutes of user CPU time and 5 minutes of system CPU time to
compute all the data points presented in this section for the proposed technique on an Axil
workstation running SunOS 5.4 with a 50 MHz SuperSparc CPU (TI TMS390Z80) and
128 Mbyte main memory.
TABLE II. Preemption cost tables for the three cache mappings (unit: cycles).
For our experiments, we also implemented a simple fixed-priority scheduler based on the
tick scheduling explained in [7]. In our implementation, the scheduler is invoked every
160,000 cycles. To take into account the overhead associated with the scheduler, we used
the analysis technique explained in [7]. In this technique, the scheduler overhead S i during
response time R_i is given by an expression in terms of the following quantities:
- the number of scheduler invocations during R_i,
- the number of times that the scheduler moves a task from the delay queue
(where tasks wait for their next invocations) to the run queue during R_i,
- C_int, the time needed to service a timer interrupt (it measured 413 cycles in our
target machine),
- C_ql, the time needed to move the first task from the delay queue to the run queue
(it measured 142 cycles in our target machine),
- C_qs, the time needed to move each additional task from the delay queue to the run
queue (it measured 132 cycles in our target machine).
A detailed explanation of this equation is beyond the scope of this paper and interested
readers are referred to [7].
Figs. 7-(a) and (b) show the predicted worst case response time of the lowest priority task
τ_4 and the percentage of cache-related preemption delay in the worst case response time,
respectively, as the cache refill time increases from 4 cycles to 200 cycles. Three different
techniques were used to predict the worst case response time. First, M is the technique
proposed in this paper, and M 1 , M 2 , and M 3 are its predictions for the three different
cache mappings explained earlier. Second, C is the technique explained in [17] that assumes
that each cache block used by a preempting task replaces from the cache a memory block
needed by a preempted task. Finally, P is Lee et al.'s technique presented in [18] where
the preemption cost is assumed to be the time needed to reload all the useful cache blocks.
Note that unlike the proposed technique, the worst case response time predictions by C
and P are insensitive to cache mapping since the preemption costs assumed by them are
independent of cache mapping.
Fig. 7. (a) Worst case response time vs. cache refill time; (b) (cache-related preemption
delay)/(worst case response time) vs. cache refill time.
The results in Fig. 7-(a) show that the proposed technique gives significantly tighter prediction
of the worst case response time than the previous techniques. For example, when the
cache refill time is 100 cycles and the second cache mapping is used, the proposed technique
gives a worst case response time prediction that is 60% tighter than the best of the previous
approaches (5,323,620 cycles in M 2 vs. 13,411,402 cycles in P ). This superior performance
of the proposed technique becomes more evident as the cache regions used by the tasks
become less overlapped, that is, as we move from M 1 to M 3 .
In Fig. 7-(a), there are a few jumps in the worst case response time predictions of all the
three techniques. These jumps occur when increase in the worst case response time due to
increased cache refill time causes additional invocations of higher priority tasks resulting in
a number of bumps in Fig. 7-(b).
The results in Fig. 7-(a) also show that as the cache refill time increases, the gap increases
between the worst case response time prediction by M and those by the other two techniques.

Fig. 8. Impact of the different constraint groups on the accuracy of the worst case response
time prediction ((a): cache refill time of 60 cycles, (b): cache refill time of 80 cycles).

Eventually, the task set is deemed unschedulable by C and P when the cache
refill time is more than 90 and 100 cycles, respectively. On the other hand, the task set is
schedulable by M even when the cache refill time is more than 200 cycles if cache mapping
3 is used.
Finally, the results in Fig. 7-(b) show that as the cache refill time increases, the cache-related
preemption delay takes a proportionally large percentage in the worst case response time.
As a result, even for method M , the cache-related preemption delay takes more than 30%
of the worst case response time when the cache refill time is 100 cycles and cache mapping 2
is used. This indicates that accurate prediction of cache-related preemption delay becomes
increasingly important as the cache refill time increases, that is, if the current trend of
widening speed gap between the processor and main memory continues [11].
To assess the impact of the various constraints used in the proposed technique on the
accuracy of the resultant worst case response time prediction, we classified the constraints
into two groups and calculated the reduction of the worst case response time prediction by
each group. The constraint sets were classified as follows: the three constraints in Section III
that deal with scenario-sensitive preemption cost were classified as Group 1 whereas those
in Section IV that eliminate infeasible task phasing were classified as Group 2.
Figs. 8-(a) and (b) show the reduction of the worst case response time prediction as the two
constraint groups are applied for cache refill times of 60 cycles and 80 cycles, respectively.
For comparison purposes, we also give the worst case response time prediction by technique
. The results show that for both cache refill times when the cache regions used by the
tasks are completely overlapped (i.e., cache mapping 1), most of the reduction comes from
the constraints in Group 2 since in this case scenario-sensitive preemption cost degenerates
to the preemption cost used by technique P . However, as the cache regions used by the
tasks become less overlapped, the impact of the constraints in Group 1 becomes more
significant and eventually when the cache regions are disjoint, all the reduction comes from
the constraints in Group 1 alone since in this case all the scenario-sensitive preemption
costs are zero.
We performed experiments using a number of other task sets and the results were very
similar to those given in this section. Interested readers are referred to [25] where the
results for the other task sets are presented.
VII. Conclusion
In this paper, we have proposed an enhanced schedulability analysis technique for analyzing
the cache-related preemption delay, which is required if cache memory is to be used in multitasking
real-time systems. The proposed technique uses linear programming and has the
following two novel features expressed in terms of constraints in linear programming. First,
the technique takes into account the relationship between a preempted task and the set of
tasks that execute during the preemption when calculating the number of memory blocks
that should be reloaded into the cache after the preempted task resumes execution. Second,
the technique considers phasing of tasks to eliminate many infeasible task interactions.
Our experimental results showed that the incorporation of the two features yields up to
60% more accurate prediction of the worst case response time when compared with the
prediction made by previous techniques. The results also showed that as the cache refill
time increases, the gap increases between the worst case response time prediction by the
proposed technique and those by the previous techniques. Finally, the results showed that
as the cache refill time increases, the cache-related preemption delay takes a proportionally
large percentage in the worst case response time, which indicates that accurate prediction
of cache-related preemption delay becomes increasingly important if the current trend of
widening speed gap between the processor and main memory continues.
Acknowledgments
The authors are grateful to Sam H. Noh for helpful suggestions and comments on an earlier
version of this paper.
--R
"Some Results of the Earliest Deadline Scheduling Al- gorithm,"
"Finding Response Times in a Real-Time System,"
"The Rate Monotonic Scheduling Algorithm: Exact Characterization and Average Case Behavior,"
"Scheduling Algorithms for Multiprogramming in a Hard Real-Time Environment,"
"Dynamic Scheduling of Hard Real-Time Tasks and Real-Time Threads,"
"An Extendible Approach for Analyzing Fixed Priority Hard Real-Time Tasks,"
"Effective Analysis for Engineering Real-Time Fixed Priority Schedulers,"
"The Impact of an Ada Run-time System's Performance Characteristics on Scheduling Models,"
"Accounting for Interrupt Handling Costs in Dynamic Priority Task Systems,"
"Engineering and Analysis of Fixed Priority Schedulers,"
Computer Architecture A Quantitative Approach.
"SMART (Strategic Memory Allocation for Real-Time) Cache Design,"
"OS-Controlled Cache Predictability for Real-Time Systems."
"Compiler Support for Software-based Cache Partitioning,"
"Software-Based Cache Partitioning for Real-time Applications,"
"Cache Issues in Real-Time Systems,"
"Adding Instruction Cache Effect to Schedulability Analysis of Preemptive Real-Time Systems,"
"Analysis of Cache-related Preemption Delay in Fixed-priority Preemptive Scheduling,"
"Efficient Microarchitecture Modeling and Path Analysis for Real-Time Software,"
"A Method for Bounding the Effect of DMA I/O Interference on Program Execution Time,"
"A Framework for Implementing Objects and Scheduling Tasks in Lock-Free Real-Time Systems,"
Science Research Associates
"Fixed Priority Scheduling of Periodic Task Sets with Arbitrary Dead- lines,"
"Bounding Cache-related Preemption Delay for Real-time Systems,"
DFT/FFT and Convolution Algorithm: Theory
Elementary Numerical Analysis.
C Algorithms for Real-Time DSP
| cache memory;fixed-priority scheduling;preemption;schedulability analysis;real-time system |
501975 | The complexity of the exponential output size problem for top-down and bottom-up tree transducers. | The exponential output size problem is to determine whether the size of output trees of a tree transducer grows exponentially in the size of input trees. In this paper the complexity of this problem is studied. It is shown to be NL-complete for total top-down tree transducers, DEXPTIME-complete for general top-down tree transducers, and P-complete for bottom-up tree transducers. Copyright 2001 Academic Press. | Introduction
Top-down and bottom-up tree transducers were introduced in the late sixties
by Rounds and Thatcher [13, 16, 17, 18] as a generalisation of finite-state transducers
on strings. The main motivation was to provide a simple formal model
of syntax-directed transformational grammars in mathematical linguistics and
of syntax-directed translation in compiler construction (for the latter, see the
recent book by Fülöp and Vogler [8]). Since that time it has turned out that
tree transducers are a useful tool for many other areas, too, and their properties
and extensions have been studied by a variety of authors. For references see,
e.g., [9, 8].
For the most part of this paper top-down tree transducers are studied. As
mentioned above, they can be seen as a generalisation of finite-state string
transducers (also called generalised sequential machines) to trees 1 . Like those,
top-down tree transducers are one-way devices which process their input in one
direction, using a finite number of states. However, while string transducers usually
process their input from left to right, top-down tree transducers transform
input trees to output trees from the root towards the leaves (which, of course, is
the reason for calling them top-down tree transducers). Roughly speaking, the
Partially supported by the EC TMR Network GETGRATS (General Theory of Graph
Transformation Systems) and the ESPRIT Working Group APPLIGRAPH through the University
of Bremen.
† A short version of this paper was presented at FCT'99 [3].
1 In this context, a tree is a labelled, ordered tree whose labels are taken from a ranked
alphabet (or signature), i.e., a term.
string case is obtained by considering monadic input and output trees (which
can be viewed as "vertical strings").
Although the generalisation is quite a direct one, the fact that trees instead of
strings are considered makes a rather crucial difference in certain respects. This
concerns, for example, closure properties which hold in the string case but do
not carry over to top-down tree transducers. For instance, an infinite hierarchy
is obtained by considering compositions of top-down tree transducers (see [5]).
Another important difference is that, intuitively, the computations of top-down
tree transducers are usually ramifying: when the topmost node of an input tree
has been processed, the computation proceeds on all subtrees in parallel. In fact,
subtrees can also be deleted or copied. One of the most distinct consequences
of this fact is that, in contrast to the string case, the size of output trees of a
top-down tree transducer is not necessarily linearly bounded in the size of its
input trees. As an example, consider the two rules γ[g[x]] → f[γ[x], γ[x]] and
γ[a] → a (which should be considered as term rewrite rules in the usual way).
Here, γ is a special symbol of rank 1 called a state and f, g, a are symbols of
rank 2, 1, and 0, respectively. Without going into the details it should be clear
that these rules transform the monadic tree g[g[⋯ g[a] ⋯]] of height n into a
complete binary tree of the same height. Thus, the output size is exponential in
the size of input trees. It follows directly from the definition of top-down tree
transducers that an exponential size of output trees is the maximum growth they
can achieve. However, it is as well possible to build a top-down tree transducer
whose maximum output size is given by a polynomial of degree k, for any given
k ∈ N. As a simple example, consider the rules γ[g[x]] → f[γ'[x], γ[x]],
γ'[g[x]] → g[γ'[x]], γ[a] → a, and γ'[a] → a, using states γ and γ'. Taking γ to
be the initial state, an input tree of size n + 1 is turned into an output tree of the
form f[t1, f[t2, ⋯]] in which each ti is a monadic tree. In other words, the size of
output trees is quadratic in the size of input trees.
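To make the two examples above concrete, here is a minimal Python sketch (not part of the original paper) that applies such rules directly to trees represented as pairs (label, list of direct subtrees) and reports the resulting output sizes. All helper names are illustrative only; the rules follow the two examples sketched above.

    # A tree is a pair (label, [direct subtrees]).
    def size(t):
        return 1 + sum(size(s) for s in t[1])

    def g_chain(n):                      # the monadic tree g[... g[a] ...] of height n
        t = ('a', [])
        for _ in range(n - 1):
            t = ('g', [t])
        return t

    # First example: gamma[g[x]] -> f[gamma[x], gamma[x]], gamma[a] -> a.
    def gamma_exp(t):
        if t[0] == 'g':
            s = gamma_exp(t[1][0])
            return ('f', [s, s])
        return ('a', [])

    # Second example: gamma[g[x]] -> f[gamma_p[x], gamma[x]], gamma_p[g[x]] -> g[gamma_p[x]],
    # gamma[a] -> a, gamma_p[a] -> a.
    def gamma_quad(t):
        if t[0] == 'g':
            return ('f', [gamma_prime(t[1][0]), gamma_quad(t[1][0])])
        return ('a', [])

    def gamma_prime(t):
        if t[0] == 'g':
            return ('g', [gamma_prime(t[1][0])])
        return ('a', [])

    for n in (2, 4, 8, 16):
        s = g_chain(n)
        # output sizes: 2^n - 1 (exponential) versus n(n+1)/2 (quadratic)
        print(n, size(gamma_exp(s)), size(gamma_quad(s)))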
In this paper the complexity of the corresponding decision problem is studied:
given a top-down tree transducer td , is its output size os td (n) exponential in
the size n of input trees? It turns out that this problem is efficiently solvable
(namely NL-complete 2 ) for total top-down tree transducers, but is very
hard (namely DEXPTIME-complete) in general. Using known results, the NL-
respectively DEXPTIME-hardness of the two variants is relatively easy to es-
tablish, but some effort is necessary in order to prove that these resources are
indeed sufficient.
Apart from being interesting in their own right, these results can be useful if
tree transducers are considered as a model of syntax-directed translation [8].
For practical reasons, an exponential behaviour of a translation is often not
acceptable. For example, the translation of expressions of some high-level programming
language into primitive instructions is of little practical use if the
2 Throughout this paper, completeness means log-space completeness.
output code is of exponential size. Closely related is the use of tree transducers
in order to generate trees which are interpreted as expressions over some
domain (see, e.g., [4, 2]). In order to estimate the evaluation costs of the generated
expressions (and the costs of computing the tree transduction itself) it
is necessary to estimate the size of output trees, i.e., to solve the exponential
output size problem. In fact, the top-down tree transducers which are used in
these two areas are often total for natural reasons. Therefore, the result that
the exponential output size problem is in NL for this class of tree transducers
may be of particular interest.
For their notion of generalised syntax-directed translations, which are closely
related to total deterministic top-down tree transducers, Aho and Ullman investigated
output size already in [1]. They showed that the output size of a
generalised syntax-directed translation is either polynomial or exponential [1,
Theorem 5.2]. The same result is proved for top-down tree transducers in general
in Section 4 of this paper (Corollary 4.7). Aho and Ullman also proved that
the exponential output size problem for generalised syntax-directed translations
is decidable [1, Theorem 4.3]. However, complexity issues are not addressed in
their paper, and the proposed algorithm is highly inefficient while the results in
this paper yield an NL-algorithm if formulated for generalised syntax-directed
translations (due to their close relationship with total deterministic top-down
tree transducers).
Bottom-up tree transducers are somewhat less interesting because their computations
are structurally simpler. By a polynomial-time reduction one can
exploit the results on total top-down tree transducers in order to show that the
exponential output size problem for bottom-up tree transducers is in P. In fact,
using known results it turns out that the problem is P-complete. Interestingly,
in this case the assumption of totality does not make a difference-the problem
remains P-complete even for total deterministic bottom-up tree transducers.
The paper is structured as follows. In the next section some basic notions
are recalled. Sections 3-5 are concerned with top-down tree transducers. In
Section 3 their definition is recalled, some auxiliary notions are introduced, and
it is shown that the exponential output size problem can be reduced to the case
of total deterministic top-down tree transducers with monadic input signatures.
In Section 4 a combinatorial result on trees is shown which can be used to
obtain a characterisation of exponential output size. The latter is turned into
decision algorithms in Section 5, where the main result of this paper is presented.
Section 6 deals with the complexity of the exponential output size problem for
bottom-up tree transducers. Finally, in Section 7 a short conclusion is given.
Preliminaries
The set of all natural numbers (including 0) is denoted by N, and N+ denotes
N \ {0}. For n ∈ N, [n] denotes the set {1, ..., n}. The set of all finite sequences
over a set A is denoted by A*. The empty sequence is denoted by λ, the length
of a sequence s by |s|, and the concatenation of sequences by juxtaposition.
Like the length of a sequence, the cardinality of a set A is denoted by |A|. The
canonical extensions of a function f : A → B to the powerset of A and to A*
are denoted by f, too. Hence, f(A') = {f(a) | a ∈ A'} for A' ⊆ A and
f(a1 ⋯ an) = f(a1) ⋯ f(an) for a1, ..., an ∈ A. The reflexive and transitive
closure of a binary relation r ⊆ A × B is denoted by r*. The domain of r, i.e.,
the set {a ∈ A | (a, b) ∈ r for some b ∈ B}, is denoted by dom(r).
It is assumed that the reader is familiar with the basic notions of complexity
theory and has at least some basic experience concerning the estimation of
resources needed by an algorithm (especially with respect to polynomial and
exponential time and logarithmic space). A function f : N → N is said to be
polynomially bounded if there is a polynomial p such that f(n) ≤ p(n) for all
n ∈ N. If there are constants c ∈ R, c > 1, and n0 ∈ N such that f(n) ≥ c^n for
all n ≥ n0, then f is said to be exponential. Thus, the latter is a lower bound
whereas the former is an upper one!
A (finite, ordered) unlabelled tree is a finite prefix-closed subset T of N+*. The
elements of T are called its nodes. The rank of a node v in T is the number of
distinct i ∈ N+ such that vi ∈ T. The rank of T is the maximum rank of its
nodes. A leaf is a node of rank 0. A node u is a descendant of a node v if v is a
proper prefix of u. Conversely, u is an ancestor of v if it is a proper prefix of v.
The subtree of T rooted at v is the tree {v' | vv' ∈ T}. A direct subtree of T is a
subtree of T rooted at i for some i ∈ N+ with i ∈ T. The size of T is |T|, its height
is ht(T) = max{|v| + 1 | v ∈ T} (i.e., the number of nodes on a longest path from
the root to a leaf), and its width, denoted by wd(T), is the number of leaves in
T.
A labelled tree is a mapping t : T → L, where T is an unlabelled tree and L
is a set of labels. The underlying unlabelled tree T is also denoted by N(t)
in this case. All notions and notations introduced for unlabelled trees above
carry over to labelled trees in the obvious way. In the following, the attributes
'labelled' and 'unlabelled' will mostly be dropped when speaking about trees.
As a general rule, unlabelled trees will be denoted by capital letters (usually T)
whereas labelled trees will be denoted by lowercase letters (usually s and t).
For trees t1, ..., tk and a label f, f[t1, ..., tk] denotes the tree t such that
t(λ) = f and t(iv) = ti(v) for all i ∈ [k] and v ∈ N(ti). The tree f[] is usually
denoted by f (which actually means that we identify a single-node tree with the
label of that node).
A symbol is a pair (f, n) consisting of a label f and a number n ∈ N, called the
rank of the symbol. Such a symbol is also denoted by f^(n), or simply f if n is
of minor importance. A signature is a finite set Σ of symbols. Σ is monadic if
Σ = Σ' ∪ {ε^(0)} for some signature Σ' all of whose symbols are of rank 1. For an
arbitrary set S, mon(S) denotes the monadic signature {f^(1) | f ∈ S} ∪ {ε^(0)}.
A tree is called monadic if it has the form f1[f2[⋯ fk[ε] ⋯]].
Note that such a monadic tree can be identified with the string f1 f2 ⋯ fk.
For a signature Σ and a set S of trees, Σ(S) denotes the set of all trees
f[t1, ..., tk] with f^(k) ∈ Σ and t1, ..., tk ∈ S. T_Σ(S) denotes
the set of trees over Σ with subtrees in S. It is the smallest set of trees such that
S ⊆ T_Σ(S) and Σ(T_Σ(S)) ⊆ T_Σ(S).
The notation T_Σ is used as an abbreviation for T_Σ(∅).
For the rest of this paper, let us fix an indexed set X = {x1, x2, x3, ...} of pairwise
distinct symbols of rank 0 called variables. For every n ∈ N,
Xn = {x1, ..., xn}. In order to avoid confusion, the set X is assumed to be disjoint
with all signatures under consideration. The variable x1 is also denoted by x.
For trees t, t1, ..., tn, t[[t1, ..., tn]] denotes the
substitution of ti for xi in t (i ∈ [n]). More precisely, if t = xi for some i ∈ [n]
then t[[t1, ..., tn]] = ti, and if t = f[s1, ..., sk] with f ∉ Xn then
t[[t1, ..., tn]] = f[s1[[t1, ..., tn]], ..., sk[[t1, ..., tn]]].
Term rewriting works as usual, except that only left-linear rules are considered.
Thus, in the context of this paper a rewrite rule is a pair ρ = (l, r) of trees, called
the left- and right-hand side, respectively, such that l contains every variable at
most once and every variable in r occurs in l, too. Such a rewrite rule is usually
denoted by l → r. The derivation relation determined by ρ is the binary relation
→_ρ on trees such that s →_ρ t if and only if there are a tree s0 that contains the
variable x exactly once and trees t1, ..., tn such that s = s0[[l[[t1, ..., tn]]]] and
t = s0[[r[[t1, ..., tn]]]]. If R is a set of
rewrite rules, →_R denotes the union of all →_ρ with ρ ∈ R.
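The tree notions just introduced are easy to make executable. The following Python sketch (an illustration, not part of the original paper) represents a labelled tree as a pair (label, list of direct subtrees) and computes its size, height and width, together with the substitution t[[t1, ..., tn]]; all names are illustrative.

    # A tree is a pair (label, [subtrees]); a leaf has an empty subtree list.
    def size(t):
        return 1 + sum(size(s) for s in t[1])

    def height(t):
        return 1 + max((height(s) for s in t[1]), default=0)

    def width(t):
        return 1 if not t[1] else sum(width(s) for s in t[1])

    # Substitution t[[t1, ..., tn]]: replace the variable x_i by t_i.
    def substitute(t, args):               # args = [t1, ..., tn]
        label, subs = t
        if label.startswith('x') and not subs:
            return args[int(label[1:]) - 1]
        return (label, [substitute(s, args) for s in subs])

    example = ('f', [('x1', []), ('g', [('x2', [])])])      # the tree f[x1, g[x2]]
    print(size(example), height(example), width(example))   # 4 3 2
    print(substitute(example, [('a', []), ('b', [])]))       # f[a, g[b]]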
3 Top-down tree transducers
Top-down tree transducers transform input trees into output trees in a top-down
manner, using a restricted type of term rewrite rules. Special symbols of rank 1
are used as states which, in every step, consume the input symbol beneath and
replace it by a tree consisting of output symbols and states processing copies of
the direct subtrees of the consumed symbol.
3.1 Definition (top-down tree transducer) A top-down tree transducer is a
tuple td = (Σ, Σ', Γ, R, γ0), where
• Σ and Σ' are signatures, called the input signature and the output signature,
respectively,
• Γ is a signature of states of rank 1 each, disjoint with Σ ∪ Σ',
• R is a finite set of rewrite rules, and
• γ0 ∈ Γ is the initial state.
If Σ is a monadic signature, then td is a (top-down) string-to-tree transducer.
The top-down tree transduction computed by td, which is denoted by td as well,
is the set of all pairs (s, t) ∈ T_Σ × T_Σ' such that γ0[s] →*_td t, where →_td denotes
the rewrite relation →_R. □
In the following, for every top-down tree transducer td = (Σ, Σ', Γ, R, γ0) and
every state γ ∈ Γ, the top-down tree transducer (Σ, Σ', Γ, R, γ) is denoted by
td_γ. As a convention, it is assumed that the variables in the left-hand side of a
rule, read from left to right, are always x1, ..., xk for some k ∈ N. Thus, every
rule of a top-down tree transducer has the form
γ[f[x1, ..., xk]] → t[[γ1[x_{i1}], ..., γl[x_{il}]]],
where γ, γ1, ..., γl ∈ Γ, f^(k) ∈ Σ, i1, ..., il ∈ [k], and t ∈ T_Σ'(Xl). From now
on, denoting a rule in this way is always meant to imply that t is chosen in such
a way as to contain every variable in Xl exactly once. This carries over to the
denotation of derivation steps: in γ[f[s1, ..., sk]] →_td t[[γ1[s_{i1}], ..., γl[s_{il}]]],
every γj[s_{ij}]
is assumed to correspond to one particular occurrence of this subtree in
t[[γ1[s_{i1}], ..., γl[s_{il}]]] (but notice that we may have γj[s_{ij}] = γj'[s_{ij'}]
for distinct j, j' ∈ [l],
of course).
A rule of a top-down tree transducer is called a γf-rule if it has the form
γ[f[x1, ..., xk]] → t. Thus, a γf-rule is a rule that processes the input symbol
f in state γ. A top-down tree transducer td is called total if R
contains at least one γf-rule for every γ ∈ Γ and f ∈ Σ, and deterministic if
it contains at most one such rule for every such pair. If td is
total then dom(td) = T_Σ, and if it is deterministic then it computes a partial
function. In the latter case one may therefore use functional notation, writing
td(s) = t instead of (s, t) ∈ td.
3.2 Definition (output size) The output size of a top-down tree transducer
td is given by the function os_td : N → N such that, for all n ∈ N,
os_td(n) = max{|t| : (s, t) ∈ td, |s| ≤ n}
(where, as usual, max ∅ = 0).
The exponential output size problem is the problem to determine, for an arbitrary
top-down tree transducer td, whether os_td is exponential. □
Note the technically convenient fact that os_td is a monotonic function. Clearly,
one can always find some c such that os_td(n) ≤ c^n for all n ∈ N. This follows
from the fact that the rank of output trees is bounded and (s, t) ∈ td implies
ht(t) ≤ h · ht(s), where h is the maximum height of right-hand sides of rules of
td.
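Definition 3.2 can be read off directly in the special case of a deterministic total transducer with the monadic input alphabet {g}: one simply enumerates the input trees of size at most n and takes the maximum output size. The Python sketch below (illustrative only, reusing the tree encoding of the earlier sketches) does this for the first example transducer from the introduction.

    def size(t):
        return 1 + sum(size(s) for s in t[1])

    def g_chain(m):                       # the (unique) monadic input tree of size m
        t = ('a', [])
        for _ in range(m - 1):
            t = ('g', [t])
        return t

    def gamma_exp(t):                     # gamma[g[x]] -> f[gamma[x], gamma[x]], gamma[a] -> a
        if t[0] == 'g':
            s = gamma_exp(t[1][0])
            return ('f', [s, s])
        return ('a', [])

    def os(transduce, n):
        # maximum output size over all input trees of size at most n (0 if there is none)
        return max((size(transduce(g_chain(m))) for m in range(1, n + 1)), default=0)

    print([os(gamma_exp, n) for n in range(1, 6)])   # [1, 3, 7, 15, 31], i.e. 2^n - 1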
The remainder of this section consists of three lemmas and their proofs. The
purpose of these lemmas is to show that a top-down tree transducer can be simplified
in a certain way without affecting the output size too much. In particular,
the modifications preserve polynomial boundedness as well as exponentiality.
The first lemma shows that it suffices to consider top-down tree transducers
whose right-hand sides contain exactly one output symbol each. For this, let Θ
denote the set of all top-down tree transducers (Σ, H, Γ, R, γ0) such that every
right-hand side of a rule in R is an element of H(Γ(Xm))
for some m ∈ N. Thus, standard output symbols h^(n) are used, where n may
vary but the label is always h. This "overloading" of h is not essential, but it
helps reduce the notational complexity of proofs. Clearly, this standardisation
is harmless since the size of output trees is independent of their node labels.
3.3 Lemma For every top-down tree transducer td one can construct a top-down
tree transducer td' ∈ Θ such that, for some constant a ∈ N+,
os_td(n)/a ≤ os_td'(n) ≤ n · os_td(n)
for all n ∈ N. The construction preserves determinism as well as totality, and
can be carried out on logarithmic space.
Proof. Let td = (Σ, Σ', Γ, R, γ0) and construct td' = (Σ, H, Γ, R', γ0) as follows.
For every rule γ[f[x1, ..., xk]] → t[[γ1[x_{i1}], ..., γl[x_{il}]]] in R, let R' contain
the rule γ[f[x1, ..., xk]] → h[γ1[x_{i1}], ..., γl[x_{il}]], and let H consist of all symbols
h^(l) which appear in the so-defined right-hand sides. Clearly, this construction
can be carried out on logarithmic space and it preserves determinism and
totality.
It remains to estimate the difference between os_td and os_td'. Let a be the
maximum number of nodes in right-hand sides of rules in R which are labelled
with symbols in Σ'. By the obvious one-to-one correspondence between derivations
in td and td', and the fact that every application of a rule in R' adds
exactly one output symbol while the corresponding rule in R adds at most a,
every (s, t) ∈ td corresponds to some (s, t') ∈ td' with |t| ≤ a · |t'|.
Consequently, os_td(n)/a ≤ os_td'(n) for all n ∈ N.
Conversely, for every (s, t') ∈ td' there is some (s, t) ∈ td such that wd(t') ≤
wd(t). (Notice that the same does not necessarily hold with respect to size because
some rules in R may have right-hand sides in Γ(X).) Using this inequality
and the fact that ht(t') ≤ |s|, we obtain |t'| ≤ ht(t') · wd(t') ≤ |s| · wd(t).
In other words, os_td'(n) ≤ n · os_td(n) for all n ∈ N.
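The normalisation used in this proof is a purely syntactic transformation of right-hand sides. The following Python sketch (illustrative only; it assumes the tree encoding of the earlier sketches and that states can be recognised by their labels) collects the state calls of a right-hand side from left to right and replaces everything else by a single output symbol h, as in the construction above.

    # Lemma 3.3 normalisation (sketch): replace a right-hand side by one output
    # symbol h applied to the state calls occurring in it, read left to right.
    def state_calls(t, states):
        if t[0] in states:                 # a state call gamma[x_i]
            return [t]
        calls = []
        for s in t[1]:
            calls.extend(state_calls(s, states))
        return calls

    def normalise_rhs(t, states):
        return ('h', state_calls(t, states))

    rhs = ('f', [('gamma', [('x1', [])]), ('g', [('gamma', [('x2', [])])])])
    print(normalise_rhs(rhs, {'gamma'}))
    # ('h', [('gamma', [('x1', [])]), ('gamma', [('x2', [])])])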
The next two lemmas are quite useful in order to check for an exponential
output size as they allow to restrict one's attention to the case of deterministic
total string-to-tree transducers, which are considerably easier to deal with than
general top-down tree transducers. In particular, some of the constructions used
in the following (see, e.g., the proof of Lemma 3.5) rely on the assumption that
every state of the given top-down tree transducer determines a partial function
from trees to trees. Therefore, we shall first show that logarithmic space is
sufficient in order to transform a top-down tree transducer into a deterministic
one, thereby affecting its output size only modestly. To simplify the proof, only
top-down tree transducers in \Theta are considered. By Lemma 3.3 (and the fact
that logarithmic space reductions are closed under composition) this restriction
does not make a difference.
Let us first informally discuss the idea underlying the construction. To get
rid of the nondeterminism (while preserving the output size), a modified input
signature is used. Intuitively, the nodes of an input tree are augmented with
additional information which determines the rules to be applied. Proceeding in
Figure 1: A sample derivation of a nondeterministic top-down tree transducer.
this way, some potential output trees may get lost since the new transducer is
forced to use the same rules whenever it processes two copies of a subtree in the
same state. Fortunately, this does not matter with respect to output size, as it
does not affect the maximal size of output trees.
As an example, let td be a top-down tree transducer whose rule set consists of the
rules ρ1 and ρ2, where ρi denotes the rule γ[f[x1, x2]] → f[γ[xi], γ[xi]] (together
with a rule γ[e] → e for the rank-0 symbol e). Intuitively, ρi
duplicates the ith subtree of the tree it is applied to and deletes the other one.
A sample derivation (applying rules in parallel) is shown in Figure 1. Note that,
in the second step, different rules have been chosen to process the copies of the
tree f[f[e, e], e]: the left one is processed by ρ2 and the right one by ρ1. To
obtain an output tree of maximal size, ρ1 would have to be chosen for both
copies.
In order to convert td into a deterministic top-down tree transducer td', one
may use two versions of f, say f1 and f2. This accounts for the fact that we
have to choose between two possible γf-rules. The rules are thus turned into
γ[fi[x1, x2]] → f[γ[xi], γ[xi]] for i ∈ [2]; since there is
only one γe-rule, it can be kept as is. Note that we cannot provide the input
tree in Figure 1 with appropriate indices in order to make td' simulate the shown
derivation, because of the contradictory choice of rules in the second step. However,
it is possible to find indices which lead to an output tree of maximal size.
Generalising the construction, every symbol of the original input signature \Sigma
would have to be provided with as many indices as there are states in td , because
the choice of rules for copies being processed in different states must of course be
independent of each other. Unfortunately, this would result in an exponential
number of states, which could not be handled on logarithmic space. Therefore,
instead of indices we shall use additional input symbols hfl; ii of rank 1, where
fl is a state of td and i is the index of a rule. Intuitively, an occurrence of hfl; ii
in the input tree may be read: "If fl is the current state, use the i-th flf -rule of
td in order to process the next input symbol f 2 \Sigma that will be encountered."
In order to remember i, states of the form fl i will be used.
3.4 Lemma For every top-down tree transducer td ∈ Θ one can construct a
deterministic top-down tree transducer td' ∈ Θ such that, for some constant
a ∈ N+,
os_td(n/a) ≤ os_td'(n) ≤ os_td(n)
for all n ∈ N. The construction preserves totality and can be carried out on
logarithmic space.
Proof. Let define m to be the largest natural number
such that, for some are m distinct flf -rules in R.
(with
For all contains the rule fl i [hfl;
as, for all fflg, the rule In order to define R 1 ,
consider some such that R contains at least one flf -rule.
Let
be an arbitrary order on the set of all flf -rules in R. Then R 1 contains the rules
The rules for which have the same right-hand side as the one for
are needed in order to preserve totality. The choice of the superscript 1
for all states in the right-hand sides is arbitrary; it could be replaced with any
other
It should be clear that td 0 can be constructed on logarithmic space, since this
requires mainly the manipulation of a fixed number of counters ranging over
[m]. Furthermore, td 0 is deterministic and the construction preserves totality.
Moreover, for every (s; t) 2 td 0 , removing the symbols of the form hfl; ii from
s yields a tree s 0 such that (s This proves that os td 0 (n) - os td (n)
for all n 2 N. In order to see that the other inequality holds as well, one has
to cope with the difficulty discussed above, namely that some of the possible
derivations of td have no counterpart in td 0 since the latter will always apply
the same rules when copies of a subtree are processed in the same state. The
proof proceeds by explicitly turning an arbitrary input tree for td into one for
td 0 which yields an output tree of maximal size.
For every tree s 2 T \Sigma and every choose an arbitrary but fixed derivation
td s 0 with s 0 2 TH such that js 0 j is maximal, and let i(s; if the
first rule applied in this derivation is the j-th flf -rule in R (with respect to the
order used to define R 1 is the root symbol of s. If there is
no derivation
can be chosen arbitrarily.
(Recall the informal discussion preceding the lemma: i(s; fl) is the index of
the rule which must be applied to fl[s] in order to obtain an output tree of
maximum size.) Now, suppose and define for every tree
Clearly, Therefore, defining a
have established the required inequality os td (n=a) - os td 0 (n) once the following
claim is proved.
Claim. Let fl[s] !
td t for some . Then there is a
derivation
To prove the claim, proceed by induction on the structure of s. Suppose
Due to the definition of i(s; fl) we can assume without loss of
generality that the rule applied in the first step of the derivation fl[s] !
td t is
the i(s; fl)-th flf -rule of td . Thus, if the given derivation has the form
applying the induction hypothesis to the subderivations
trees t 0
l 2 TH such that
es k
l [es i l
td
l
for all j 2 [l], which proves the claim (by taking t
l ]) and thus
finishes the proof of the lemma.
Intuitively, if a derivation of a top-down tree transducer produces a large output
tree, there must be a path in its input tree whose nodes are copied an exponential
number of times. Thus, if we turn a top-down tree transducer td into a string-
to-tree transducer st which interprets its input as a path in an input tree of
td and simulates the corresponding part of the derivation, the output size of st
should not differ from the output size of td very much. The following lemma
proves that this is indeed the case.
3.5 Lemma For every deterministic top-down tree transducer td ∈ Θ one can
construct a total deterministic string-to-tree transducer st ∈ Θ such that, for
some constant a ∈ N,
os_td(n)/n² ≤ os_st(n) ≤ max(1, os_td(a · n))
for all n ∈ N. The construction can be carried out on logarithmic space if td is
total, and in exponential time otherwise.
Proof. Let As indicated above, the main idea is to
construct st in such a way that an input tree of st corresponds to a path in
an input tree of td . A computation of st on such a path produces the output
tree which consists of all nodes td produces by processing symbols on this path.
technical reasons, the leaf f (0) at the end of a path will not be taken into
account; it is treated as ffl.) In order to cope with the possible non-totality of
td , the states are enriched by a second component which is a set of states and is
used in order to keep track of all states in which copies of the remaining input
are being processed. To make this precise, some auxiliary definitions turn out
to be useful.
For every f (k) 2 \Sigma, denote by lstates(f) the set of all states
the flf -rule in R exists. Furthermore, for
rstates denote the set of all states
of the right-hand side of some flf -rule for which fl 2 \Delta. Intuitively, if \Delta is
the set of states processing copies of a tree f [s after a simultaneous
derivation step rstates \Delta (f; i) will be the set of states processing copies of s i .
Finally, for every set of states \Delta ' \Gamma, let
dom
st
constructed as follows.
Consider some hfl; \Deltai suppose
dom(rstates
In this case, if fl[f [x
is the flf -rule in R, then
R 0 contains the rule
where
(Note that the h in the original flf -rule has rank l, whereas the new one has
rank p.) Otherwise (i.e., if it is not the case that (1) and (2) hold), R 0 contains
the rule hfl; \Deltai[f i Furthermore, the rule hfl; \Deltai[ffl] !h is in R 0 for every
state hfl; \Deltai
By construction, st is total and deterministic. In order to show that the stated
inequalities hold, two claims are proved. The first claim concerns the inequality
os st (n) - max(1; os td (a \Delta n)).
Claim 1. There is a constant a 2 N such that the following holds for every
state hfl; \Deltai ;. For every derivation hfl; \Deltai[s] !
st t (where
there is a tree s depending only on s and \Delta,
such that js 0 j - a \Delta jsj and fl[s 0
To prove is the maximum rank of symbols
in \Sigma and a 0 is the smallest positive natural number such that every nonempty
3 Recall that all symbols f (0) 2 \Sigma are treated as ffl in st. Therefore, only f i
for f of rank
need to appear in S.
set dom contains a tree of size at most a 0 . (Notice that a 0
exists because the powerset of \Gamma is finite.) Let us proceed by induction on the
length of derivations. The claim certainly holds if the considered derivation has
the form hfl; \Deltai[s] ! st h, choosing as s 0 a smallest tree in dom (\Delta). Now, assume
that the derivation hfl; \Deltai[s] !
st t reads
st h[hfl
st
where the rule applied in the first step is the one constructed from a rule
in R as described in the definition of R 0 .
k ] be the tree whose subtrees s 0
are defined as fol-
lows. For
j is a smallest tree in dom(rstates \Delta (f; j)) (which, by the
construction of R 0 , is a nonempty set). Furthermore, s 0
i is the tree (provided
by the induction hypothesis) of size - a \Delta js i j for which there are derivations
p such that t 0
(Notice that the induction hypothesis yields the same input tree s 0
all p derivations because s 0
depends only on s i and \Delta i .) It follows that
and
td
td
is a tree containing in particular the subtrees t 0
p , which
means
This finishes the proof of Claim 1. The claim yields os st (n) - os td (a \Delta n) for all
every tree s 2 T mon(S) , which means that os st
To formulate the second claim, it is convenient to formalise the notion of paths
through a tree s 2 T \Sigma . For this, define paths(s) 2 T mon(S) as follows. For
2. Let hfl; \Deltai dom (\Delta). For every tree t 2 TH with
td t it holds that
Again, the proof is by induction on the length of derivations. For derivations
of length 1 we have so the assertion trivially holds. Now,
assume that the given derivation has the form
td
td
where l - 1. First of all, notice that s i 2 dom This is
because, by assumption, s 2 dom (\Delta) and, by definition, \Delta i is the set of all
states occurs in s 0 for some fl 0 2 \Delta, where s 0 is the
unique tree such that
for one such state
would mean that there did not exist a derivation
0 with
thus violating the assumption s 2 dom(td fl ).
The fact that s i 2 dom implies (2) in the definition of R 0 ,
and (1) is obviously satisfied as well. Consequently, if I
then the derivation step
st
exists and, by the induction hypothesis,
for all j 2 I i . Summing up, we get
(st
as claimed.
By choosing
there is a path s 0 2 paths(s) such that wd(st(s 0 wd(t)=wd(s). This is due to
the fact that
This proves the inequality os td (n)=n 2 - os st (n) for all n 2 N since
How much time does it take to construct st? Clearly, the most time consuming
part is to determine the set (which is necessary
in order to construct the rules). This can be done by a standard algorithm,
as follows. Define be the set of all
such that there exists an input symbol f (k) 2 \Sigma for which \Delta ' lstates(f) and
rstates \Delta (f; time is sufficient in order
to determine D i+1 since we can just enumerate the 2 j\Gammaj sets test for
each of them whether it satisfies the requirement. It follows by straightforward
inductions that
i2N D i . Furthermore, by definition D i ' D i+1 for all
, where
is the smallest index such that D i 0
this shows that D is computable in exponential time.
This completes the proof for the general case. It remains to consider the special
case where td is total. Totality of td means that dom
Therefore, in the construction of rules conditions (1) and (2) are always satisfied,
regardless of \Delta. As a consequence, the second component of a state in \Gamma 0 is
useless and one can simplify the construction:
is the
set of all rules fl[f i
Figure 2: A tree of branching depth 2 and branching index 3.
I i is as in the proof of Claim 2 and these rules can be
computed on logarithmic space, which completes the proof of the lemma.
The reader should notice that the inequalities in the three lemmas above guarantee
that polynomial boundedness and exponentiality are preserved. With
respect to polynomial boundedness this is clear because the upper bounds are
obviously polynomials if os td is one. Exponentiality is preserved as well. For
os td (n=a), where a is a constant, this is clear because c
For os td (n)=p(n), where p is a polynomial, choose
order to get c n which is exponential because the second
factor is larger than 1 for sufficiently large n (since d ? 1).
4 The branching index of output trees
In this section it will be shown that, intuitively, a tree whose size is exponential
in its height must necessarily contain a subtree with many ramifications on every
path. In order to formalise this, the branching depth and the branching index
of trees are introduced.
4.1 Definition (branching depth and branching index) Let T be a tree. The
branching depth of T is the smallest natural number b such that there is a leaf
of T which has exactly b distinct ancestors of rank ≥ 2. The branching index
of T is the maximum branching depth of all trees T' ⊆ T. □
An example is shown in Figure 2. The branching depth of the tree is 2 while
the branching depth of the subtree indicated by hollow edges is 3. The latter
turns out to be the branching index of the tree as a whole because there is no
subtree with a larger branching depth. The reader should notice that every tree
T contains a tree T 0 ' T of rank - 2 which has the same branching depth b
as T . Therefore, the tree T 0 in Definition 4.1 can be assumed to have at most
the rank 2, without loss of generality. It may furthermore be instructive to
note that one could remove all nodes which have more than b pairwise distinct
ancestors of rank 2, yielding a tree in which all leaves have exactly b ancestors
of rank 2. Intuitively, this turns the tree into a full binary tree of height b
if nodes of rank 1 are disregarded. Hence, the branching index of T is one less
than the height of the largest full binary tree which can be "embedded" in T .
The following lemma yields an equivalent recursive description of the branching
index. The straightforward inductive proof is omitted.
4.2 Lemma Let T be a tree with direct subtrees T1, ..., Tk. The branching
index of T is 0 if k = 0. Otherwise, let b be the maximum of
the branching indices of the Ti. Then the branching index of T is b + 1 if there are
distinct i, j ∈ [k] such that Ti and Tj both have branching index b,
and it is b if such indices do not exist. □
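The recursion in Lemma 4.2 translates directly into code. The following Python sketch (illustrative, not from the paper) computes the branching index of a tree given as a pair (label, list of direct subtrees); the labels are irrelevant here.

    def branching_index(t):
        subs = t[1]
        if not subs:
            return 0
        indices = [branching_index(s) for s in subs]
        b = max(indices)
        # b + 1 if at least two direct subtrees attain the maximum b, and b otherwise
        return b + 1 if indices.count(b) >= 2 else b

    def full_binary(h):                   # a full binary tree of height h
        return ('f', []) if h == 1 else ('f', [full_binary(h - 1), full_binary(h - 1)])

    print([branching_index(full_binary(h)) for h in range(1, 5)])   # [0, 1, 2, 3]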
The lemma below states that the size of trees is polynomially bounded in their
height if we place an upper bound on their branching index (provided that the
rank is bounded).
4.3 Lemma Let S be the set of trees of rank ≤ r and branching index ≤ b, for
some r, b ∈ N. There is a polynomial p_b of degree b + 1 such that |T| ≤ p_b(ht(T))
for all trees T ∈ S.
Proof. Proceed by induction on b. A tree T of branching index 0 can at
most be of rank 1, which implies jT
let T be a tree of branching index - b having k - r direct subtrees T
If the branching index of one of T was greater than b or there were
distinct such that the branching index of both T i and T j is b, then
the branching index of T would be at least b Therefore,
at most one of the direct subtrees (T 1 , say) is of branching index b, all the
remaining ones having a strictly smaller branching index. According to the
induction hypothesis, T
polynomial p b\Gamma1 of degree b (since the coefficients of p b\Gamma1 can be assumed to
be positive). Therefore, jT Repeating the
argument for T 1 until a tree of size 1 is reached, yields
which is a polynomial in ht(T ) of degree b is one of degree b.
As a corollary, the branching index cannot be bounded if the size of the trees
in a set grows exponentially in their height.
4.4 Corollary Let S be a set of trees of bounded rank, and let size_S(n) =
max{|T| : T ∈ S, ht(T) ≤ n} for all n ∈ N. If size_S is not polynomially bounded,
then there is no upper bound on the branching index of trees in S. □
Let us say that a tree t contains a bifurcation if there is a node v0 ∈ N(t)
with two distinct descendants v0v, v0v', where |v| = |v'|, such that
t(v0) = t(v0v) = t(v0v').
The next lemma states that in every set of labelled trees (with
finitely many labels) of unbounded branching index there is a tree containing
a bifurcation. This will be used in the proof of Theorem 4.6 in order to create
a kind of pumping situation which characterises string-to-tree transducers of
exponential output size.
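Read this way, the property can be checked by a straightforward traversal. The Python sketch below (illustrative only, and assuming the reading of the definition given above: a node whose label reappears at two distinct, equally deep descendants) tests whether a labelled tree, again represented as a pair (label, list of direct subtrees), contains a bifurcation.

    from collections import Counter

    def labels_per_depth(t):
        # levels[d] = multiset of labels occurring at depth d of t (the root has depth 0)
        levels = [Counter([t[0]])]
        for s in t[1]:
            for d, c in enumerate(labels_per_depth(s), start=1):
                if d == len(levels):
                    levels.append(Counter())
                levels[d].update(c)
        return levels

    def has_bifurcation(t):
        # Either the root's label occurs at least twice at some common deeper level
        # of its own subtree, or a bifurcation occurs further down.
        if any(level[t[0]] >= 2 for level in labels_per_depth(t)[1:]):
            return True
        return any(has_bifurcation(s) for s in t[1])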
4.5 Lemma Let S be a set of trees with labels in a finite set L. If the
branching index of trees in S is unbounded, then there exists a tree t ∈ S
containing a bifurcation.
Proof. For denote the set of labels on the path to
g. We mainly have to prove
the following claim, which is done by induction on n.
Claim. Let t 2 S and n 2 N, and let T ' N(t) be a tree of branching depth
does not contain distinct nodes v 0 v; v 0 v 0 which satisfy
there is a node w 2 T such that
For this is trivial, taking as w any node in T . Therefore, let n - 1 and
assume that the claim holds for trees T 0 ' N(t) of branching depth B(n \Gamma 1).
As pointed out after Definition 4.1, it may be assumed without loss of generality
that the rank of T is 2. Consider the tree T 0 ' T which consists of all nodes
having at most B(n \Gamma 1) ancestors of rank 2. Thus, the branching depth
of T 0 is 1). By the induction hypothesis this implies
some node v 0 2 T 0 . There is nothing to show if
that
choose a leaf v 0 u 2 T such that juj is minimal. Since v 0 has at
most ancestors of rank 2 in T whereas v 0 u has at least B(n) (since
this is the branching depth of T ), there are at least n ancestors v 0 u 1 of v 0 u
whose rank is 2. By the minimality assumption on juj this means that the set
jujg has at least n+1 elements. By assumption, no label in
twice among the labels of nodes in N . This implies the existence
of a node v 0 v 2 N such that t(v 0 v) 62 t(v 0 ), and thus completes the proof of the
claim since it means that
Now, in order to prove the lemma, choose a tree t 2 S of branching index at least
B(jLj). Then there is a tree T ' N(t) of branching depth B(jLj). However,
since there cannot be a node w 2 T such that t(w) ? jLj, it follows from the
claim that T contains a bifurcation.
The decision algorithm to be developed in the next section is based on a theorem,
to be proved next, which characterises the class of total deterministic string-to-
tree transducers st 2 \Theta of exponential output size. In fact, the theorem could be
generalised to arbitrary top-down tree transducers, but this would be technically
more difficult and is not needed to prove the results of this paper. In order to formulate
the theorem (and for further use as well), the notion of computation trees
is needed. Intuitively, the computation tree of a derivation is the tree of states in
which copies of subtrees of the input tree are processed. We only need this definition
for string-to-tree transducers in \Theta. Therefore, let st = (\Sigma; H; \Gamma; R;
The computation tree of a derivation fl[s] !
st t with
is the tree with labels in \Gamma which is defined as follows. If the derivation has the
st h then its computation tree is the tree fl. Otherwise, the derivation
must have the form fl[f [s 0 st
st In this
case, its computation tree is fl[t 0
i is the computation tree of
the i-th subderivation
st [k]. The set of all computation trees
of derivations
st t with s 2 T \Sigma and t 2 TH is denoted by ct(st).
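For deterministic total string-to-tree transducers in Θ the computation tree can be built directly from the rules. The following Python sketch is an illustration under an assumed encoding (not the paper's notation): rules maps a pair (state, input symbol) to the list of states occurring in the corresponding right-hand side h[γ1[x], ..., γk[x]], and the end of the input string plays the role of the leaf symbol of the monadic input tree.

    def size(t):
        return 1 + sum(size(s) for s in t[1])

    def computation_tree(rules, state, symbols):
        if not symbols:                        # rule state[<leaf>] -> h
            return (state, [])
        nxt = rules[(state, symbols[0])]
        return (state, [computation_tree(rules, q, symbols[1:]) for q in nxt])

    # The exponential example of the introduction, viewed as a string-to-tree
    # transducer over the input alphabet {g}.
    rules = {('gamma', 'g'): ['gamma', 'gamma']}
    ct = computation_tree(rules, 'gamma', ['g'] * 4)
    print(size(ct))    # 31: the computation tree has the same node set as the output tree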
4.6 Theorem The output size of a total deterministic string-to-tree transducer
st ∈ Θ is exponential if and only if there is a tree in ct(st) containing a
bifurcation.
Proof. Let st = (\Sigma; H; \Gamma; R; fl 0 ). For the proof, notice that the computation
tree ct of a derivation fl[s] !
st t has the same structure as t, i.e.,
Thus, it makes no difference whether we consider the size of output trees or the
size of computation trees.
Due to the remark above, and by the fact that for the
computation tree ct of a derivation fl[s] !
st t, we have size ct(st) (n) - os st (n)
for all n 2 N (where size ct(st) is defined as in Corollary 4.4). By Corollary 4.4
this means that the branching index of trees in ct(st) is unbounded. Thus, the
implication follows from Lemma 4.5.
Consider some derivation
st t whose computation tree ct contains
nodes of the required type. Then one can decompose s into
ct i be the computation tree of the (unique) derivation
st
, where By induction on i it follows
that jct i this is trivial. For i ? 0, since ct(v 0
there is a derivation fl[s i
st s,
where s is a tree containing at least two subtrees of the form fl[s
As
a consequence, the computation tree ct i of the derivation fl[s i
st
st t i
contains ct twice as a subtree, which proves jct
The fact that ct(v 0 implies the existence of a derivation
st s, where s is a tree containing the subtree fl[s i
quently, the computation tree of the derivation
st
st t 0
i with
contains ct i as a subtree, which means that its size is at least 2 i . For
st
which is exponential because 2 \Gammam 0 is a constant factor and 2 1=m1 ? 1. (Note
that 1=m 1 is defined as m
As a by-product of the results in this section a result similar to Theorem 5.2 of [1]
is obtained: If the output size of a top-down tree transducer is not exponential,
then it is polynomially bounded. (In fact, the result in [1] is slightly stronger
as it states that, in this case, the output size satisfies c 1
for some c 1 In other words, it cannot be an element of
4.7 Corollary The output size of a top-down tree transducer is either polynomially
bounded or exponential.
Proof. Let td be a top-down tree transducer. By Lemmas 3.3, 3.4, and 3.5
there is a total deterministic string-to-tree transducer st 2 \Theta, such that os st is
polynomially bounded (exponential) if and only if td is polynomially bounded
(respectively exponential). If os st is not polynomially bounded, the only-if
direction of the proof of Theorem 4.6 remains valid. This shows that there is
a computation tree in ct(st) containing a bifurcation. Using the if direction
of the theorem it follows that os st is exponential, which means that os td is
exponential.
5 The main result
In this section the main result of the paper is proved: The exponential output
size problem is NL-complete for total top-down tree transducers and DEXP-
TIME-complete in the general case. First, it is shown that there are decision
algorithms which obey these resource bounds, starting with the total case.
5.1 Lemma For total deterministic string-to-tree transducers in \Theta, the exponential
output size problem is in NL.
Proof. By Theorem 4.6 it suffices to prove the following.
For a total deterministic string-to-tree transducer st 2 \Theta it can be
decided by a nondeterministic Turing machine on logarithmic space
whether there is a computation tree in ct(st) containing a bifurcation.
To sketch how such a Turing machine M could work, let st = (Σ, H, Γ, R, γ0).
For every state γ ∈ Γ and every input symbol f^(1) ∈ Σ, let next(γ, f) denote
the set of all states which occur in the right-hand side of the unique γf-rule in
R. A computation of M consists of two phases. In the first phase, starting with
γ0, M repeatedly makes nondeterministic choices to "guess" the next symbol
f^(1)_{i+1} of an arbitrary input string, and a state γ_{i+1} ∈ next(γ_i, f_{i+1})
(where γ0 is the initial state of st). Thus, γ0 γ1 γ2 ⋯ is the sequence of labels on a path in
the unique computation tree which is determined by the guessed input string
f1 f2 ⋯. During this phase of its computation, M nondeterministically selects
one of the encountered states (say γ_{i0}) and
stores it on the tape.
At some step j0 ≥ i0, the next phase is initiated by guessing two states
γ_{j0+1} and γ'_{j0+1} in next(γ_{j0}, f_{j0+1}), which need not be distinct, but must correspond to two
distinct nodes in the right-hand side of the respective rule. (Formally, if t is
the right-hand side of the γ_{j0}f_{j0+1}-rule, then γ_{j0+1} = t(v) and γ'_{j0+1} = t(v') for
distinct nodes v, v' ∈ N(t).) Intuitively, this is the place
where the two paths of the bifurcation separate. From now on, two sequences
γ_{j0+1}, γ_{j0+2}, ⋯ and γ'_{j0+1}, γ'_{j0+2}, ⋯ are constructed in parallel, always choosing
some f^(1)_{i+1} ∈ Σ and
states γ_{i+1} ∈ next(γ_i, f_{i+1}) and γ'_{i+1} ∈ next(γ'_i, f_{i+1}). M
accepts st if it encounters some i > j0 such that γ_i = γ'_i = γ_{i0}.
Since st is total, every derivation finally yields a tree in T_H. Therefore, if M
accepts its input in step i, the computation tree ct of the derivation on the input
string guessed so far exists. Obviously, the acceptance condition means that ct
contains a bifurcation. Conversely, if there exists an input tree leading to a
computation tree that contains a bifurcation, it is clear that one of the possible
runs of M will make suitable nondeterministic choices in order to detect this
fact. Moreover, M requires only logarithmic space since the only things it must
keep track of are the stored state γ_{i0} and the current symbols f_i, γ_i, and
γ'_i. This completes the
proof.
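The nondeterministic procedure above can also be phrased as a deterministic search over pairs of states, which is convenient for experimenting with small transducers. The Python sketch below is such a reformulation, not the construction used in the proof, and it assumes the same lockstep reading of bifurcations as above; rules maps (state, input symbol) to the list of states in the right-hand side of the corresponding rule of a total deterministic string-to-tree transducer.

    def exponential_output_size(rules, initial):
        # True iff some computation tree contains a bifurcation: a reachable state q
        # below which two distinct branches split and later are simultaneously in q again.
        symbols = {f for (_, f) in rules}

        def closure(starts, step):
            seen, todo = set(starts), list(starts)
            while todo:
                x = todo.pop()
                for y in step(x):
                    if y not in seen:
                        seen.add(y)
                        todo.append(y)
            return seen

        succ = lambda q: [q2 for f in symbols for q2 in rules.get((q, f), [])]
        reachable = closure({initial}, succ)

        def pair_succ(pq):
            # both branches read the same next input symbol
            p, q = pq
            return [(p2, q2) for f in symbols
                    for p2 in rules.get((p, f), [])
                    for q2 in rules.get((q, f), [])]

        for target in reachable:
            for d in closure({target}, succ):          # possible split states below target
                for f in symbols:
                    nxt = rules.get((d, f), [])
                    starts = {(nxt[i], nxt[j]) for i in range(len(nxt))
                              for j in range(len(nxt)) if i != j}
                    if (target, target) in closure(starts, pair_succ):
                        return True
        return False

    print(exponential_output_size({('gamma', 'g'): ['gamma', 'gamma']}, 'gamma'))  # True
    print(exponential_output_size({('gamma', 'g'): ['gamma']}, 'gamma'))           # False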
5.2 Theorem The exponential output size problem is NL-complete for total
top-down tree transducers and DEXPTIME-complete for general ones. Both
parts of the statement remain true if only deterministic top-down tree transducers
are considered.
Proof. Let td be a top-down tree transducer. By Lemmas 3.3-3.5 one can
construct a total deterministic string-to-tree transducer st 2 \Theta such that os st
is exponential if and only if os td is exponential. Furthermore, the construction
can be performed on logarithmic space if td is total and in exponential time
otherwise. Using Lemma 5.1 it can be tested on logarithmic space (in the size
of st) whether os st is exponential. Thus, the exponential output size problem
is in NL for total top-down tree transducers and in DEXPTIME in general. It
remains to prove NL-hardness and DEXPTIME-hardness, respectively.
In order to establish this for the total case, it is shown that the NL-complete
problem Reachability (see, e.g., [12]), also known as the graph accessibility
problem, can be reduced to the exponential output size problem for total top-down
tree transducers. Given a directed graph G and two nodes v; v 0 , Reachability
is the problem to determine whether there exists a vv 0 -path in G, i.e.,
a directed path leading from v to v 0 .
If V is the set of nodes of the input graph G to Reachability, let st =
g, and R
is defined as follows.
(1) For all u; u contains the rule fl u [u 0
[x] if there is an edge
from u to u 0 in G. If there is no such edge then R contains the rule
(2) For all nodes contains the rule fl u [ffl] ! ffl.
(3) In addition, R contains the rule
Clearly, a work tape of logarithmic size is sufficient for a Turing machine to
construct st . Furthermore, if G does not contain any vv 0 -path then the rule
in (3) will never be applied, so that Conversely,
if there is a vv 0 -path given by a sequence e 1
is the target node of e j for
By the rules in (1) and (3) there is a derivation
st
st
which means that st contains all pairs (s is a full binary tree of
height i. Consequently, os st ((k +1) \Delta proving that os st is exponential
(since k is a constant).
Finally, consider the general case. We are going to make use of a DEXPTIME-
completeness result by Seidl. A (deterministic top-down) finite tree automaton
is a deterministic top-down tree transducer ta = (Σ, Σ, Γ, R, γ0) in which
every γf-rule in R has the form γ[f[x1, ..., xk]] → f[γ1[x1], ..., γk[xk]]. Obviously,
the computed relation ta is a partial identity. Seidl [14] showed that it
is DEXPTIME-hard to decide whether dom(ta1) ∩ ⋯ ∩ dom(tan) ≠ ∅ for finite
tree automata ta1, ..., tan given as input. Let ta1, ..., tan be such finite tree automata and
assume without loss of generality that their sets of states are pairwise
disjoint. Now, let td be given by the following components
(where the symbols f; and the states are supposed to be new ones):
and R 0 contains the rules
Clearly, td is deterministic, and dom(ta 1 implies that
the computed tree transduction is empty. Otherwise, choose an arbitrary tree
be the tree
1. Then there is a derivation
where t i is a complete binary tree of height i +1 over f and ffl. Thus, the output
size of td is exponential, which completes the proof of the theorem.
By Corollary 4.7, the set of all top-down tree transducers whose output size is
polynomially bounded is the complement of those whose output size is expo-
nential. Because of the famous result by Immerman and Szelepsc'enyi [10, 15]
stating that NL is closed under complement (and the fact that the same holds
for deterministic classes like DEXPTIME anyway), the polynomial output size
problem (i.e., to determine whether os td is polynomially bounded) turns thus
out to be in NL respectively DEXPTIME.
5.3 Corollary The polynomial output size problem is in NL for total top-down
tree transducers and in DEXPTIME for general ones. \Pi
6 Bottom-up tree transducers
In this section the output size of bottom-up tree transducers is considered. By
mistake, it was claimed in the conclusion of [3] that the results of Section 5 were
true also for bottom-up tree transducers. In fact, this holds neither in the total
nor in the general case (assuming that NL ≠ P ≠ DEXPTIME). Bottom-up
tree transducers cannot copy subtrees and then process them individually in
different states because copying takes place after the copied subtree has been
processed. This results in a considerably easier emptiness problem, which makes
it possible to apply a construction similar to the one in the proof of Lemma 3.5,
but using only polynomial instead of exponential time. On the other hand, the
result of a bottom-up tree transduction may depend on deleted subtrees because,
like copying, deletion takes place after processing a subtree. This means that,
intuitively, the deletion of large subtrees can simulate the effect of non-totality
on the output size. As a consequence, the exponential output size problem for
bottom-up tree transducers does not become easier if restricted to total bottom-up
tree transducers. The main result of this section states that the problem is
P-complete in both cases.
Let us first recall the definition of bottom-up tree transducers.
6.1 Definition (bottom-up tree transducer) A bottom-up tree transducer is a
tuple bu = (Σ, Σ', Γ, R, F), where
• Σ, Σ', and Γ are as in the definition of top-down tree transducers,
• R is a finite set of rewrite rules, and
• F ⊆ Γ is the set of final states.
The bottom-up tree transduction computed by bu, which is denoted by bu as well,
is the set of all pairs (s, t) ∈ T_Σ × T_Σ' such that s →*_bu γ[t] for some γ ∈ F,
where →_bu denotes the rewrite relation →_R. □
As in the top-down case it is assumed without loss of generality that the variables
in the left-hand side of a rule, read from left to right, are always x1, ..., xk for
some appropriate k ∈ N. Thus, every rule of a bottom-up tree transducer has
the form
f[γ1[x1], ..., γk[xk]] → γ[t[[x_{i1}, ..., x_{il}]]],
where γ, γ1, ..., γk are states, t is a tree in T_Σ'(Xl) for some l ∈ N (containing every variable in Xl
exactly once), and x_{i1}, ..., x_{il} ∈ Xk.
A bottom-up tree transducer as in the definition is deterministic if R contains
at most one rule whose left-hand side is f[γ1[x1], ..., γk[xk]] for every f^(k) ∈ Σ
and γ1, ..., γk ∈ Γ. Similarly, bu is total if there is at least one rule
whose left-hand side is f[γ1[x1], ..., γk[xk]] for every such combination.
The output size of bu is given by the function os_bu which is defined
exactly as in the case of top-down tree transducers. Similarly, the definition of
the exponential output size problem carries over to the bottom-up case in the
obvious way.
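As an illustration of this rule format (not the paper's notation), the following Python sketch evaluates a deterministic bottom-up tree transducer that is defined on the given input: rules maps a pair (input symbol, tuple of child states) to a pair (state, output builder), where the builder plays the role of the right-hand side tree t and combines the children's output trees.

    def run_bottom_up(rules, t):
        label, subs = t
        results = [run_bottom_up(rules, s) for s in subs]
        states = tuple(q for q, _ in results)
        outputs = [out for _, out in results]
        state, build = rules[(label, states)]
        return state, build(outputs)

    # Example: double every 'g' in a monadic input tree.
    rules = {
        ('a', ()): ('q', lambda outs: ('a', [])),
        ('g', ('q',)): ('q', lambda outs: ('g', [('g', [outs[0]])])),
    }
    print(run_bottom_up(rules, ('g', [('g', [('a', [])])])))
    # ('q', ('g', [('g', [('g', [('g', [('a', [])])])])]))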
Call a state γ of a bottom-up tree transducer bu = (Σ, Σ', Γ, R, F) useful if
there are trees s ∈ T_Σ and t ∈ T_Σ'
such that s →*_bu γ[t]. The following problem,
called State Usefulness, will be used below.
Instance A bottom-up tree transducer bu = (Σ, Σ', Γ, R, F) and a state γ ∈ Γ.
Question Is γ useful?
As stated in the following lemma, State Usefulness is P-complete. This
is a rather obvious reformulation of the fact that the emptiness problem for
context-free string languages is P-complete [11, Corollary 11], using the well-known
relationship between context-free grammars and finite tree automata on
the one hand, and finite tree automata and bottom-up tree transducers on the
other (cf. the definition of finite tree automata in the proof of Theorem 5.2).
For the sake of completeness, an explicit proof is nevertheless added below.
6.2 Lemma State Usefulness is P-complete. The same holds if the problem
is restricted to total deterministic bottom-up tree transducers whose output
signature is {ε^(0)}.
Proof. The proof is similar to the proof of [11, Corollary 11]. Let bu =
(Σ, Σ', Γ, R, F) and γ ∈ Γ. In order to decide whether there are trees s ∈ T_Σ
and t ∈ T_Σ'
such that s →*_bu γ[t], just apply the following standard technique: set S0 = ∅
and compute Si+1 = Si ∪ {γ' ∈ Γ | R contains a rule f[γ1[x1], ..., γk[xk]] → γ'[t]
with γ1, ..., γk ∈ Si} for i = 0, 1, 2, ... until the sequence becomes stationary
(which can obviously be done in polynomial
time). Then it follows by a straightforward induction that γ is useful if
and only if it belongs to the resulting set.
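A minimal Python sketch of this fixed-point computation (illustrative encoding, not from the paper): since the tree parts of the rules are irrelevant for usefulness, each rule is given only as a pair (list of states on the left-hand side, state on the right-hand side).

    def useful_states(rules):
        # rules: list of (lhs_states, rhs_state); a rule with lhs_states == []
        # corresponds to an input symbol of rank 0.
        useful = set()
        changed = True
        while changed:
            changed = False
            for lhs_states, rhs_state in rules:
                if rhs_state not in useful and all(q in useful for q in lhs_states):
                    useful.add(rhs_state)
                    changed = True
        return useful

    # Example: a constant reaches q0, f combines two q0's into q1; q2 is never reached.
    rules = [([], 'q0'), (['q0', 'q0'], 'q1'), (['q2'], 'q2')]
    print(useful_states(rules))   # {'q0', 'q1'}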
It remains to prove P-hardness of the restricted variant. By [11, Corollary 9]
the following problem is P-complete: For a finite set S, a binary operation \Delta on
S (given by a multiplication table), a subset S 0 of S, and an element a 2 S,
decide whether a is in the closure of S 0 under \Delta (i.e., whether a is an element
of the smallest superset S 0 of S 0 such that b \Delta c
In the obvious way, every tree s 2 T \Sigma can be seen as an expression over \Delta and
constants in S 0 . By construction, there is a derivation s !
bu fl b [ffl] if and only if
b is the value of this expression. In particular, fl a is useful if and only if a 2 S 0 ,
as required. Furthermore, bu can be computed on logarithmic space as it is easy
to convert a multiplication table for \Delta into R. This completes the proof.
Similar to the top-down case it is convenient to simplify a given bottom-up
tree transducer in such a way that each right-hand side contains exactly one
output symbol. For this, let Θ' be the set of all bottom-up tree transducers
(Σ, H, Γ, R, F) such that H consists of standard output symbols h^(n) (as in the
definition of Θ) and every right-hand side of a rule in R is an element of Γ(H(X)).
6.3 Lemma For every bottom-up tree transducer bu one can construct a
bottom-up tree transducer bu' ∈ Θ' such that, for some constant a ∈ N,
os_bu(n)/a ≤ os_bu'(n) ≤ n · os_bu(n)
for all n ∈ N. The construction can be carried out on logarithmic space.
Proof. Simply replace every rule f[γ1[x1], ..., γk[xk]] → γ[t[[x_{i1}, ..., x_{il}]]] in R
with the rule f[γ1[x1], ..., γk[xk]] → γ[h[x_{i1}, ..., x_{il}]]. By the same arguments as in the
proof of Lemma 3.3 this satisfies the claimed inequalities. Furthermore, it can
obviously be done on logarithmic space.
We can now show how to turn a bottom-up tree transducer into a total (top-
string-to-tree transducer, using a construction similar to the one in the
proof of Lemma 3.5.
6.4 Lemma For every bottom-up tree transducer bu ∈ Θ' one can construct
a total string-to-tree transducer st ∈ Θ such that, for some constant a ∈ N,
os_bu(n)/n² ≤ os_st(n) ≤ max(1, os_bu(a · n))
for all n ∈ N. The construction can be carried out in polynomial time.
Proof. Let 6.2 the set of useful states can be
determined in polynomial time. Obviously, the remaining states and the rules in
which they appear can be deleted without affecting the computed transduction.
Therefore, in the following we can assume without loss of generality that \Gamma
contains only useful states. Based on this assumption, let st = (mon (\Sigma);
is a new state and R defined as follows.
(1) For every rule f [fl 1 [x 1
in R and every i 2 [k],
contains the rule fl[f is the number of
times x i occurs among x
. If, for some does
not contain any rule of the form f [fl 1 [x 1
then R 1 contains the rule fl[f [x]] !h (in order to account for totality).
(2) For every rule fl[f contains the rule fl 0 [f
and the rule fl 0 [f
gg.
The remainder of the proof is a simplified variant of the reasoning in the proof
of Lemma 3.5.
Claim 1. There is a constant a 2 N such that the following holds for every state
. For every derivation fl[s] !
st t (where t 2 TH )
there is a tree s
bu
For the proof, let a 0 be the smallest natural number such that, for every
there is a tree s of size at most a 0 such that s
bu fl[t] for a tree t 2 TH .
Now define a is the maximum rank of symbols in \Sigma.
We proceed by induction on the length of derivations. If fl[s] ! st t then h, so
the claim holds with s . For the inductive step, let
a derivation
st
st
be such that jt jg. By the induction
hypothesis, there exists a tree s 0
bu
i is a tree in TH which satisfies jt 0
jg. By
construction, R contains a rule f [fl 1 [x 1
such that
times among x
. Define s
for all j 2 [k] n fig. A computation similar to the corresponding one in the
proof of Lemma 3.5 verifies the inequality js 0 j - a \Delta jsj. Furthermore, there is a
derivation
bu
bu
occurs p times among t 0
. For
we thus get
as claimed.
For the second claim let, for every tree s 2 T \Sigma , paths(s) ' T mon (\Sigma) be defined
as follows. For
)g. Furthermore, for every state
every tree s an arbitrary but fixed tree t 0 2 TH of
maximum width such that fl[s 0
st t 0 . (Notice that such a tree exists because
st is total.)
2. For every derivation s !
bu fl[t], where
holds that
Again, the proof is by induction on the length of derivations. For derivations
of length 1 the assertion is trivial as in this case
and consider a derivation
s
bu
bu
assuming that
tion, if i occurs p i times among contains the rule
fl[f Consequently, wd(- fl (f [s 0
for all s 0
in particular), which yields
as claimed.
Claims 1 and 2 prove the inequalities in the lemma, which can be seen as follows.
By the rules in R 2 , (s; st yields
st t for some state
Using Claim 1, the latter implies the existence of a pair (s bu such that
jtj. Thus, we have os st st (n) - os bu (a \Delta n),
which proves that os st (n) - max(1; os bu (a \Delta n)). For the other inequality, as
in the proof of Lemma 3.5, for all (s; implies the existence
of a pair (s st such that s As
shows that os bu (n)=n 2 - os st (n) for all n 2 N.
It is now easy to prove the main result of this section.
6.5 Theorem The exponential output size problem for bottom-up tree transducers
is P-complete. The same holds for total deterministic bottom-up tree
transducers.
Proof. By Lemma 6.3, Lemma 6.4, and Theorem 5.2 the exponential output
size problem for bottom-up tree transducers is in P. It remains to prove P-
hardness for the total deterministic case. This is done by reducing the problem
State Usefulness to the exponential output size problem for this class of tree
transducers.
By Lemma 6.2 it suffices to consider a total deterministic bottom-up tree transducer bu with state set Γ and a state γ ∈ Γ. We have to show how to construct on logarithmic space a total deterministic bottom-up tree transducer bu' such that os_bu' is exponential if and only if γ is a useful state of bu. For
this, let bu and
Clearly, bu' is total and deterministic, and the construction can be performed on logarithmic space. In order to verify the required equivalence, let s →*_bu γ[ε] for some s ∈ T_Σ and define f
there is a derivation f
bu 0
bu 0
γ[t] for every i ∈ N, where t is a full binary tree over g and ε of height i + 1. Thus, the output size of bu' is exponential.
Conversely, if γ is not useful in bu, it is clear that the rule introduced above can never be applied in a derivation s →*_bu' t. Therefore, for all such derivations the output remains small (ε is the only output symbol which occurs in the remaining rules). Thus, os_bu' is not exponential, which finishes the proof.
Using Lemmas 6.3 and 6.4, Corollary 4.7 extends to bottom-up tree transducers.
Thus, since P is closed under complement, a corollary similar to Corollary 5.3
is obtained.
6.6 Corollary The polynomial output size problem for bottom-up tree transducers
is in P. \Pi
7 Conclusion
It was shown in this paper that the exponential output size problem is NL-
complete for total top-down tree transducers, DEXPTIME-complete for general
ones, and P-complete for bottom-up tree transducers. Intuitively, the reason
for the huge complexity gap between the two top-down variants is that, in the
general case, solving the problem requires solving the emptiness problem for
top-down tree transductions.
There are several directions for future research which could be interesting. The
complexity of the exponential output size problem for compositions of top-down
or bottom-up tree transductions, or for more general classes of tree transducers
(like, e.g., macro tree transducers [7]) seems to be an interesting open problem.
Another point is that, as mentioned in the introduction, for every k ∈ N one can construct top-down tree transductions whose output size is bounded from above by a polynomial of degree k (but not by a polynomial of degree k - 1).
In fact, by Corollary 4.7 the output size of a top-down tree transducer is either
bounded by a polynomial or exponential. For macro tree transducers, Engelfriet
and Maneth [6] showed that it is decidable whether the output size is linearly
bounded. Thus, it may be interesting to search for efficient algorithms which
determine, for a given top-down or even macro tree transducer τ, the smallest natural number k such that os_τ ∈ O(n^k) (provided that such a k exists). Finally,
are there natural classes of non-total top-down tree transducers for which the
exponential output size problem is at least solvable on polynomial space?
Acknowledgement
I thank Joost Engelfriet, who told me where to find the
completeness results in [11] and pointed out the related work in [1], as well
as Helmut Seidl and an anonymous referee for their careful reading of the
manuscript and helpful lists of suggested improvements.
--R
Translations on a context free
The complexity of the exponential output size problem for top-down tree transducers
Some open questions and recent results on tree transducers and tree languages.
Three hierarchies of transducers.
Characterizing and deciding MSO- definability of macro tree transductions
Macro tree transducers.
Zoltán Fülöp and
Tree languages.
Nondeterministic space is closed under complement.
Complete problems for deterministic polynomial time.
Computational Complexity.
Mathematical Systems Theory
Haskell overloading is DEXPTIME-complete
The method of forced enumeration for nondeterministic automata.
Generalized 2 sequential machine maps.
There's a lot more to finite automata theory than you would have thought.
Tree automata: an informal survey.
--TR
The method of forced enumeration for nondeterministic automata
Nondeterministic space is closed under complementation
Haskell overloading is DEXPTIME-complete
Tree languages
Syntax-Directed Semantics
Characterizing and Deciding MSO-Definability of Macro Tree Transductions
Exponential Output Size of Top-Down Tree Transducers
--CTR
Frank Drewes , Joost Engelfriet, Branching synchronization grammars with nested tables, Journal of Computer and System Sciences, v.68 n.3, p.611-656, May 2004 | complexity;tree transducer;output size;completeness |
501982 | The architecture and performance of security protocols in the ensemble group communication system. | Ensemble is a Group Communication System built at Cornell and the Hebrew universities. It allows processes to create process groups within which scalable reliable fifo-ordered multicast and point-to-point communication are supported. The system also supports other communication properties, such as causal and total multicast ordering, flow control, and the like. This article describes the security protocols and infrastructure of Ensemble. Applications using Ensemble with the extensions described here benefit from strong security properties. Under the assumption that trusted processes will not be corrupted, all communication is secured from tampering by outsiders. Our work extends previous work performed in the Horus system (Ensemble's predecessor) by adding support for multiple partitions, efficient rekeying, and application-defined security policies. Unlike Horus, which used its own security infrastructure with nonstandard key distribution and timing services, Ensemble's security mechanism is based on off-the shelf authentication systems, such as PGP and Kerberos. We extend previous results on group rekeying, with a novel protocol that makes use of diamondlike data structures. Our Diamond protocol allows the removal of untrusted members within milliseconds. In this work we are considering configurations of hundreds of members, and further assume that member trust policies are symmetric and transitive. These assumptions dictate some of our design decisions. | Introduction
Group Communication Systems (GCSs) are used today in industry where reliability and high-availability
are required. Group Communication is a subject of ongoing research and many GCSs
have been built [1, 2, 3, 4, 5, 6, 7], some of them commercial products [8]. Example GCS applications
include: group-conferencing, distributed simulation, server replication, and more. As the
Internet emerged into mainstream use, the role of GCSs in Internet settings, and their security,
has emerged as an important topic. A secure GCS must be efficiently protected against malicious
behavior or outright attack. This paper describes the security architecture of Ensemble [6], our
group communication system, which achieves the desired properties.
Fundamentally, a GCS introduces a process group abstraction. A process group coherently
binds together many processes into one entity. Within the context of a group, reliable per-source
ordered messaging is supported. Processes may dynamically join and leave a group. Groups may
dynamically partition into multiple components due to network failures/partitions. The GCS is
responsible for simplifying these complex scenarios, overcoming the asynchronous nature of the
network and keeping the group abstraction consistent. Processes are provided with membership
"views" specifying the list of currently alive and connected group members. A notification is
provided whenever network connectivity changes or when processes join/leave the group. The
Virtual Synchrony Model (VS) [9, 8] specifies the relationship between message delivery and
membership notification.
Ensemble was developed at Cornell and the Hebrew Universities. It is written in a dialect of the
ML programming language [10] in order to facilitate system verification. The design methodology
behind Ensemble stresses modularity and flexibility [11]. Thus, Ensemble is divided into many
layers, each implementing a simple protocol. Stacking together these layers, much like one uses
lego blocks, the user may customize the system to suit its needs.
Underlying our work are two fundamental presumptions: First, we have access to a standard
off-the-shelf authentication mechanism; second, the application itself can perform authorization.
To secure group messages from tampering and eavesdropping, they are all signed and encrypted.
While it is possible to use public key cryptography for this task, we find this approach unacceptably
expensive. Since all group members are mutually trusted, we share a symmetric encryption and
signature key among them. This key is used to protect all group messages, making the encryption
and signature operations very fast (roughly 1000 times faster). Using such a key raises two
challenges:
A rekeying mechanism: This is the problem of secure replacement of the current group key
once it is deemed insecure, or if there is danger that it was leaked to the adversary. This
is challenging since switching to a new key should be done without using the old, possibly
compromised key for dissemination. Naturally, one could use public keys for this task yet
doing so leads to high latency.
If one assumes the simple "primary partition" model, where only a single component of the
group may function, then a simple solution is available. One may designate a centralized server whose responsibility is to disseminate, revoke, and refresh group keys. Only
group members in contact with the server have the key and hence can function. Supporting
multiple partitions is more difficult since one cannot rely on any centralized service.
Secure key agreement in a group: This is the problem of providing a protocol whereby secure
agreement can be reached among group members which need to select a mutual key.
Such a protocol should not restrict the Ensemble stack, i.e., all legal layer combinations
should still be possible, it should be unobtrusive, and support multiple partitions. That is,
the protocol should "compose" cleanly with Ensemble stacks, regardless of their functionality.
Our protocol must efficiently handle the case where two group components merge after
a network partitioning, where the network partitions into two or more components, and
the resulting group components use different keys. A simple approach (taken for example
in [12]) is to add members one by one, in effect transferring them from the smaller group
to the larger one. However, this is potentially slow since members are added one at a time:
it incurs cost proportional to the number of added members. Our solution is much more
efficient.
Our contributions are:
• We demonstrate how security properties can be decomposed and introduced to a layered protocol architecture.
• We support security properties for multiple partitions. Earlier work either does not address the issue of group partition or only supports security semantics for the primary partition [7].
• We provide support for dynamic application-defined authorization policies.
We focus on benign failures and assume that authenticated members will not be corrupted.
Byzantine fault tolerant systems have been built by other researchers [13, 14], but suffer from
limited performance since they use costly protocols and make extensive use of public key cryptog-
raphy. We believe that our failure model is sufficient for the needs of most practical applications.
As demonstrated in the performance section, our system has good performance and scalability.
Our security architecture is composable with most other Ensemble layers. The user thus has
the freedom to combine layers and properties including security.
Virtually Synchronous (VS) group communication has inherently limited scalability. For ex-
ample, Transis [2] scales to members and Ensemble (with a VS stack) scales to 100 members.
Since a group should be resilient to all scenarios of network partitions, each group component
should be completely autonomous. Therefore, our architecture does not rely on any centralized
servers or services; when some form of leader is required it is elected dynamically.
Throughout this paper we use the terms authentication and signature to refer both to public-key signatures and to keyed-MD5 signatures.
The remainder of the paper is structured as follows: Section 2 describes the model we use to
describe the system and the model of attack. Section 3 describes some Ensemble specifics. The
subsequent two sections describe the architecture components situated on the message critical
path. These parts are tailored to run efficiently. Section 4 describes Ensemble routers and
the secure router we have added, and Section 5 describes the Encrypt layer. The next three
sections describe the more subtle part of the architecture that is off the message critical path.
1 MD5 where the IV (Initial Vector) is fed by a secret key.
Section 6 describes our key agreement protocol - Exchange, Section 7 sketches a proof of its
correctness. Section 8 describes the Rekey protocol and its optimization. Section 9 describes
system performance, and Section 10 lists related work. Section 11 gives our conclusions and
Section 12 contains acknowledgments. The appendix contains some protocol details that were
removed from the main body of the protocols for clarity of exposition.
2 Model
Consider a universe that consists of a finite group U of n processes. Processes communicate with
each other by passing messages through a network of channels. The system is asynchronous: clock
drifts are unbounded and messages may be arbitrarily delayed or lost in the network. We do not
consider Byzantine failures.
Processes may get partitioned from each other. A partition occurs when U is split into a set of disjoint subgroups P_1, ..., P_k. Each process in P_i can communicate only with processes in P_i.
The subsets P i are sometimes called network-components. We shall consider dynamic partitions,
where in network-components dynamically merge and split. The partition model is more general
than the more common "crash failure" model since crash failures may be modeled as partitions,
but not the converse.
As described earlier a GCS creates process groups in which reliable ordered multicast and
point-to-point messaging is supported. Processes may dynamically join and leave a group. Groups
may dynamically partition into many components due to network failures/partitions; when net-work
partitions are healed group components remerge through the GCS protocols. Information
about groups is provided to group members in the form of view notifications. For a particular
process p a view contains the list of processes currently alive and connected to p. When a membership
change occurs due to a partition or a group merge, the GCS goes through a (short) phase
of reconfiguration. It then delivers a new view to the applications reflecting the (new) set of
connected members.
For this paper, we focus on messages delivered in the order they were sent: the "fifo" or "sender-ordered" property.
Ensemble follows the Virtual Synchrony (VS) model. This model describes the relative ordering
of message deliveries and view notifications. It is useful in simplifying complex failure
and message loss scenarios that may occur in distributed environments. For example, a system
adhering to VS ensures "atomic failure": if process q in view V fails then all the members in V observe this event at the "same time".
We assume the existence of an authentication service available to all group members. An
authentication service allows members to authenticate each other as well as create private and
authentic messages. When member p uses the authentication service to create a secure message
m for member q, we shall say that p sealed message m. The reverse operation, performed by q to open the message, will be called unseal.
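As a minimal sketch (our own illustration, not Ensemble's actual interface), the seal and unseal operations used throughout this article can be thought of as an ML (OCaml) signature of the following shape; a PGP or Kerberos back end would implement it.

  (* Illustrative abstraction of the authentication service: seal produces a
     message only the receiver can open, unseal verifies and opens it. *)
  module type AUTH = sig
    type principal = string
    val seal   : sender:principal -> receiver:principal -> string -> string
    val unseal : sender:principal -> receiver:principal -> string -> string option
  end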
In what follows p, q, and s denote Ensemble processes, and V, V' denote views.
Each member decides on its own trust policy (more on this in the Ensemble section below).
If p trusts q then we mark this by p →t q. Ensemble forms group components according to the symmetric transitive closure of this relationship, marked by p ↔st q, and named st-trust. St-trust is created as follows:
Symmetry: If p →t q then p ↔st q and q ↔st p.
Transitivity: If p ↔st q and q ↔st s then p ↔st s.
St-trust is a distributed relation. When trust policies are stable for sufficiently long at all
processes in U then st-trust becomes an equivalence relation. At such a point, U is separated
into disjoint equivalence classes of processes called st-domains. Partitions may prevent members
of an st-domain from merging together. For example, assume that U = {p, q, r, s, t, v}, that the st-domains are S_1 and S_2, and that there are two network-components C_1 and C_2. Then there are four components: {p, q}, {s}, {t} and {r, v}.
Figure 1: U = {p, q, r, s, t, v}; the st-domains and the network components together yield four components: {s}, {p, q}, {t} and {r, v}.
A member may dynamically
change its trust policy and request Ensemble to reform its component accordingly. The
system will exclude untrusted members and allow trusted members to join.
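Because st-trust is just a symmetric transitive closure, the st-domains are the equivalence classes obtained from the pairwise trust statements; a minimal OCaml sketch using union-find makes the definition concrete (the helper and its names are ours, and it assumes every trust pair mentions listed members; Ensemble computes the grouping through its membership protocols rather than with such a helper).

  (* Group members into st-domains: equivalence classes of the symmetric
     transitive closure of pairwise trust.  Illustrative only. *)
  let st_domains (members : string list) (trusts : (string * string) list) =
    let parent = Hashtbl.create 16 in
    List.iter (fun m -> Hashtbl.replace parent m m) members;
    let rec find m =
      let p = Hashtbl.find parent m in
      if p = m then m else (let r = find p in Hashtbl.replace parent m r; r)
    in
    (* "p trusts q" puts p and q into the same class; symmetry and
       transitivity follow from the union-find structure itself. *)
    List.iter (fun (p, q) -> Hashtbl.replace parent (find p) (find q)) trusts;
    List.fold_left
      (fun acc m ->
         let r = find m in
         let so_far = try List.assoc r acc with Not_found -> [] in
         (r, m :: so_far) :: List.remove_assoc r acc)
      [] members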
The adversary has access to all untrusted (potentially dishonest) machines and may corrupt
or eavesdrop on any packet traveling through the network. Our goal is to protect messages sent
between trusted members of U . We do not provide protection against denial of service or traffic
analysis attacks. Rather, we restrict ourselves to the authenticity and secrecy of message content.
We work with an existing operating system and assume its security and correctness. An OS
vulnerability would cause a breach in Ensemble security.
Throughout this paper we assume all members of U belong to a single Ensemble group. While
describing the Exchange and Rekey protocols we assume that all machines able to authenticate
themselves are trusted. Later we refine this model with an application trust policy. Thus, the
trust relationship may be dynamic and may include multiple st-domains.
3 Ensemble
Ensemble is a GCS supporting process groups as described above. In addition to reliable fifo-
ordered multicast and point-to-point communication, it also supports many other protocols and
communication properties such as: multicast total order, multicast flow control, protocol switching
on the fly, several forms of failure detection, and more (see [6] for more details).
Ensemble is typically configured as a user-level library linked to the application. It is divided
into many layers, each implementing a simple protocol. Applications may customize the Ensemble
library to use the set of layers they require: the set of layers desired is composed into an Ensemble
stack. All members in a group must have the same stack to communicate.
Ensemble keeps view-state information. This information is replicated at all group members
and includes such data as: current protocol stack in use, group member names and addresses, the
number of members, the group key, etc. In order to change any of this information, a new view
has to be installed.
In Ensemble, each view has a unique leader known to all view members. The leader is selected
automatically by ranking group members, and the VS model ensures that, in a given view, all
members have a consistent belief concerning which member is the leader.
If the group key needs to be changed the group will be prompted for a view change. During
the process the leader will broadcast the new view-state, that includes the new group key, and all
members will use the new group key in the upcoming view.
Ensemble divides messages into two classes. There are intra-group or regular messages sent
between members of a view. They are usually application-generated messages, though some
messages may be generated as part of the Ensemble protocols on behalf of the application. In
addition, there are inter-component messages or so-called gossip messages. These are messages
generated by Ensemble for communication between separate components of an Ensemble group.
A gossip message is multicast to U , to anyone who can hear. Normally, communication is not
possible between group components due to network partitions. Gossip messages are used to merge
components together and they arrive at their destination when network partitions and link failures
are repaired. Receipt of a gossip message when partitioned triggers the merge sequence by which
separate components are fused together. Protocols that use gossip messages typically make very
few assumptions about them: they may be lost, reordered, or be received multiple times.
The regular and secure Ensemble stacks are depicted in Table 1. The Top and Bottom layers
cap the stack from both sides. The Group Membership Protocol (GMP) layer 2 computes the
current set of live and connected machines. Appl intf interfaces with the application and provides
reliable send and receive capabilities for point-to-point and multicast messages. It is situated in
the middle of the stack to allow lower latency to user send/receive operations. The RFifo layer
provides reliable per-source fifo messaging.
The Exchange layer guarantees secure key agreement through the group. Through it, all
members obtain the same symmetric key for encryption and signature. The Rekey layer performs
group rekeying upon demand. Both layers manage the group-key that is part of the view-state
and hence are regarded as GMP extensions. Furthermore, these layers are not on the message
critical path. Normally, they are dormant, they become active either when the user asks for a
rekey or when components merge. The Encrypt layer encrypts all user messages. It is on the
message critical path, situated below the Appl intf layer.
Some "layers", as discussed here, are actually sets of layers in the implementation. Also, some layer names
have been changed for clarity of exposition.
Table
1: The Ensemble stack. On the left is the default stack that includes an application
interface, the membership algorithm and a reliable-fifo module. The secure stack, to the right,
includes all the regular layers and also the Exchange, Rekey, and Encrypt layers.
Regular        Secure
Top            Top
               Exchange
               Rekey
Gmp            Gmp
Appl intf      Appl intf
               Encrypt
RFifo          RFifo
Bottom         Bottom
Routers        Routers
3.1 Policies
The user may specify a security policy for an application. The policy specifies for each address 3
whether that address is trusted or not 4 . Each application maintains its own policy, and it is up to
Ensemble to enforce it and to allow only mutually trusted members into the same component. A
policy allows an application to specify the members that it trusts and exclude untrusted members
from its component.
Members should use trust policies that are symmetric and transitive. Otherwise, member p,
trusting member q, will be in a component containing members that q trusts but p does not. When
a member changes its security policy, it requests Ensemble to rekey. During the rekey, members
that are no longer trusted will be excluded and a new key will be chosen for the component. Thus,
old untrusted members will not be able to eavesdrop on the group conversations.
3.2 Cryptographic infrastructure
Our design supports the use of a variety of authentication, signature and encryption mechanisms.
By default the system uses PGP for authentication, MD5 [15] for signature, and RC4 [16] for
encryption. Because these three functionalities are carried out independently any combination of
supported authentication, signature, and encryption systems can be used. Other systems, such
as Kerberos [17], IDEA [18], and DES [16], have been interfaced with Ensemble at various stages.
3.3 Random number generation
Cryptographically secure random numbers are a vital resource for any secure system. It is not
possible to generate truly random numbers and therefore one uses pseudo-random number gen-
erators. We have plugged in an off-the-shelf, cryptographically strong, random number generator
3 An Ensemble address is comprised of a set of identifiers, for example an IP address and a PGP principal
name. Generally, an address includes an identifier for each communication medium the endpoint is using
{UDP, TCP, MPI, ATM, ...}.
4 We shall see later, in sections 6,7, how the authenticity of members' addresses is ensured.
to our system.
4 The authentication router
We first describe the simplest part of the security architecture: the authentication router module.
Ensemble routers reside at the bottom of each protocol stack, as seen in Table 1.
In Ensemble, the router is the module responsible for getting messages from member p to some
set of members g. Routers use transport-level protocols such as MPI, UDP, TCP, and
IP-multicast to send and receive messages. An Ensemble application may use several stacks, all
sharing a single router. Hence, routers need to decide through which transport to send a message,
and when one is received - which protocol stack to deliver it to.
We have modified the normal router to create a signing router which is used when the application
requests a secure protocol stack. A signed router adds a keyed-MD5 signature to each sent
message and verifies the signature of each incoming message before handing it to the protocol
stack.
Ensemble signs all outgoing messages using the group key. Regular messages may be verified
by other group members since they all share the group key. Gossip messages are problematic
since, initially, different components do not share the same group key. Hence, they are protected
using the authentication service.
When message m arrives at a signing router, belonging to group component A, the router
attempts to verify m using the group key. There are several cases:
m is a regular message:
1. Correctly signed: Pass up the stack. Message m was sent by a group member in A.
2. Incorrectly signed: Drop. Message m may come from a different group component that
shares no key with A. It may also be a message sent by an attacker (that does not
know the key).
m is a gossip message:
1. Correctly signed: Pass up the stack. Message m is of gossip type, it was sent by a
member of a different component that shares the same group key.
2. Incorrectly signed: Mark as insecure and pass up the stack. This is a message from
a different component B that is signed with B's group key. We ignore the keyed-
MD5 signature, since we cannot verify it. Possibly, the inner message is sealed by the
authentication service. The Exchange layer will attempt to unseal it, if successful, it
will process m's contents. Exchange is the only layer that examines such messages,
while other protocol layers that use gossip messages ignore insecure gossip messages.
The signing router uses the HMAC [19] standard to compute message signatures. A cryptographically
secure one-way hash function, MD5, is used to hash the message content. MD5 is
keyed with the current group key such that the adversary will not be able to forge messages. The
router at the sender calculates the keyed hash of M, H(M). Then it sends H(M) concatenated to the clear-text message M. On receipt, H(M) is recalculated from M with the receiver's key and compared with the received signature. If there is a match, the message has been verified.
To summarize, the authentication router attempts to authenticate all messages. Regular
unauthenticated messages are dropped, gossip unauthenticated messages are still delivered but
marked insecure.
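As a rough sketch of the sign/verify step of such a router, the OCaml fragment below builds an HMAC-style keyed-MD5 signature with the standard library's Digest module. It is only an illustration under our own names and padding conventions; the real router additionally carries the regular/gossip distinction and the insecure marking described above.

  let block_size = 64   (* MD5 block size in bytes, as used by HMAC *)

  let pad_key key =
    let k = if String.length key > block_size then Digest.string key else key in
    k ^ String.make (block_size - String.length k) '\000'

  let xor_with byte s =
    String.map (fun c -> Char.chr (Char.code c lxor byte)) s

  (* hmac_md5 key msg = MD5((key xor opad) || MD5((key xor ipad) || msg)) *)
  let hmac_md5 key msg =
    let k = pad_key key in
    Digest.string (xor_with 0x5c k ^ Digest.string (xor_with 0x36 k ^ msg))

  (* Sender side: append the 16-byte signature to the clear-text message. *)
  let sign key msg = msg ^ hmac_md5 key msg

  (* Receiver side: recompute the signature over the body and compare. *)
  let verify key packet =
    let n = String.length packet - 16 in
    n >= 0
    && String.equal (hmac_md5 key (String.sub packet 0 n)) (String.sub packet n 16)

Note that the per-message cost is a single hash over the message plus a fixed-size comparison, which is why signing stays on the critical path without hurting throughput much.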
5 The Encrypt Layer
Ensemble optionally supports user message privacy. The Encrypt layer encrypts/decrypts all user
messages with the group key. User messages are reliably delivered in fifo (sender) order allowing
use of chained encryption 5 . Ensemble messages are signed, but not encrypted. Such messages do
not contain any secret user information and their encryption would only degrade performance.
Currently we use the group key for both authentication and encryption. Since MD5 keys are 16 bytes long, we use only the first 5 bytes for the RC4 key. To improve performance, upon a
view change we create all security-related data structures and henceforth use them while the view
remains current.
Using the group key for both signature and encryption makes the Encrypt layer as strong as
the weaker cryptographic system. In the default configuration, the group-key would thus be as
strong as the RC4 key. To prevent this from emerging as a weakness of our architecture, we shall
switch the group key as frequently as needed to prevent the weaker encryption key from being
cracked.
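A minimal sketch of the per-view setup is shown below: it derives the 40-bit RC4 key from the first 5 bytes of the 16-byte group key and builds one streaming cipher context per view. The RC4 routine is spelled out only to keep the example self-contained; it is our own illustration, not the Encrypt layer's implementation.

  (* Key-scheduling step of RC4, keyed with the first 5 bytes of the group key. *)
  let rc4_init key =
    let s = Array.init 256 (fun i -> i) in
    let j = ref 0 in
    for i = 0 to 255 do
      j := (!j + s.(i) + Char.code key.[i mod String.length key]) land 255;
      let t = s.(i) in s.(i) <- s.(!j); s.(!j) <- t
    done;
    s

  (* Encryption and decryption are the same XOR with the keystream; keeping the
     state across calls matches the chained, per-view usage described above. *)
  let rc4_stream s =
    let i = ref 0 and j = ref 0 in
    fun msg ->
      String.map
        (fun c ->
           i := (!i + 1) land 255;
           j := (!j + s.(!i)) land 255;
           let t = s.(!i) in s.(!i) <- s.(!j); s.(!j) <- t;
           Char.chr (Char.code c lxor s.((s.(!i) + s.(!j)) land 255)))
        msg

  (* One cipher context per view, created when the view is installed. *)
  let make_view_cipher group_key = rc4_stream (rc4_init (String.sub group_key 0 5))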
6 Exchange layer
In the event of a network failure, a process group may become partitioned into several disjoint
components, communication among which is impossible. Ensemble automatically elects a leader
for each group component. Later, such a partitioned group may need to merge if communication
is restored. Ensemble treats the former situation as the failures of one or more group members
(the system does not distinguish communication failures to operational processes from process
crashes). The system uses gossip messages to discover opportunities to merge a group.
More specifically, it is the responsibility of the Heal protocol to discover partitioned group
components. It is active at each group component leader. Each leader gossips an IamAlive
message periodically that includes its name and address. When a leader hears a remote leader
from the same group, it initiates the merge sequence.
Group components cannot communicate with each other unless they possess the same key:
only insecure gossip messages are allowed to pass through by the router. The Exchange layer
uses these messages to achieve secure agreement on a mutual group key. The idea is that one of
the components securely switches its key to that used by the other component. The Heal layer
will activate the merge sequence after both components have the same key. The Exchange layer
is active at each component leader acting as a filter of gossip messages. All outbound/inbound
gossip messages pass through it. The layer functions via the creation and recognition of two types
of gossip message headers. These are, for process p whose principal name 6 is R p , and whose view key is key p :
Id: Contains R p and a nonce 7 . This header is cheap to create.
Ticket: Contains data to be sent securely to some process q. This header is created by sealing the data for q. The header is expensive to generate, since its creation involves the authentication service, and it is usually long (currently about 1/2KBytes).
5 Modern encryption ciphers separate a message into fixed sized blocks. One can encrypt each block separately, or, using chained encryption, use early blocks to help encrypt the current block.
6 This is the name by which the user is known to the authentication service.
The following event handlers are applied to gossip messages by process q:
• Onto each gossip message, add an Id(R q , nonce q ) header.
• Upon receiving an Id(R p , nonce p ), if it is insecure, p is trusted, and R q < R p 8 , then create a Ticket for p and gossip it. The ticket data contains key q and nonce p to prove message freshness.
• Upon receiving a Ticket from p intended for q, where p is trusted, authenticate it and check the freshness of the nonce. Decrypt it to get key p . If key p == key q , then ignore it (we have the same key), otherwise new key := key p . Prompt the component to go through a view change, with new key as the group key. The group key is part of the view-state; when the view change is complete, new key will be installed at all the group's routers.
When a group component leader q receives a gossip message from a remote component leader p, q checks whether it has a lower id 9 . If so, q securely sends to p its key key q . The remote leader
authenticates q, decrypts key q and switches its component key to key q . From now on, all gossip
messages (correctly signed by key q ) will be accepted and the components will merge using the
membership mechanism.
When a process fails within a component, the (new) leader initiates a view change. The group
key is not switched: since we assume honesty of failed members, they will not use knowledge of
the current key in a malicious manner.
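A compact way to read the two Exchange handlers above is as a function from an incoming gossip header to an optional reply. The OCaml sketch below is our own paraphrase under simplifying assumptions (string-typed keys and names, seal/unseal standing for the authentication service, no modelling of the router's insecure flag); it is not the Exchange layer's real code.

  type gossip =
    | Id     of string * string            (* principal name R_p, nonce *)
    | Ticket of string * string * string   (* from, to, sealed (key, nonce) *)

  (* Handlers run at component leader [me] holding group key [my_key]. *)
  let on_gossip ~me ~my_key ~trusted ~seal ~unseal ~fresh ~switch_key = function
    | Id (rp, nonce_p) when trusted rp && me < rp ->
        (* Trusted remote leader with a higher name: send it our key,
           sealed so that only rp can open it, echoing its nonce. *)
        Some (Ticket (me, rp, seal ~receiver:rp (my_key, nonce_p)))
    | Ticket (rp, rq, sealed) when rq = me && trusted rp ->
        (match unseal ~sender:rp sealed with
         | Some (key_p, nonce) when fresh nonce && key_p <> my_key ->
             switch_key key_p;   (* prompt a view change installing key_p *)
             None
         | _ -> None)
    | _ -> None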
6.1 Example execution
Figures
2,3,4, and 5 show an example execution of the Exchange protocol. Initially, two compo-
nents, A and B, are executing. They begin by using different keys, key A and key B , so initially
only gossip messages marked insecure are delivered to the coordinators. No other communication
occurs between the components. The authentication sequence only involves the coordinators, so
in the following steps we refer to the coordinators by the name of the component they are in.
1. Both coordinators, A and B, regularly broadcast gossip messages announcing their presence
in the system. The gossip messages contain both Id headers from the Exchange protocol
and Heal headers from the Heal protocol (the Heal protocol heals group partitions). Only
the Exchange layer will examine gossip messages marked as insecure.
2. Coordinator A receives an Id(B; nonceB ) gossip message from B marked as insecure. A
sends a Ticket(key A ; nonceB ) gossip message to B.
7 A one time random string used to prove message freshness.
8 Any type of comparison function may be used here.
9 Any comparison function can be used here.
Figure
2: Leaders A and B send out gossip messages.
Figure
3: Leader A hears B's gossip message and sends its key to B.
3. B receives a Ticket(key A ; nonceB ) gossip message from A, authenticates and verifies the
freshness of the nonce. If the check is passed then B also gets key A . B then prompts its
group to proceed through an empty view change 10 . When installing the new view, B installs
key A in the group. Now both components are using the same key.
4. After B has installed key A , when A and B broadcast gossip messages, they are accepted
by the receiving coordinator and not marked as insecure. Now the membership layers will
examine the gossip messages, and this results in the two components merging into a single
component using key A .
10 We say that the view change is empty because there are no group membership changes. Only the key is switched.
Figure
4: B's component switches to Key(A).
Figure
5: The components merge since they both use the same key.
We use nonces in this protocol so as not to rely on local clocks being in synchrony. In order to
use local time as a nonce, one needs to use a secure time service which is not currently a standard
Internet service.
7 Correctness of the Exchange protocol
In this section we discuss the security of the Exchange protocol. Layers in the stack, not
belonging to the security protocols, do not handle the group key. This is done in order to separate
the security functionality from the rest of the stack. Hence, in order to verify that Ensemble is
secure we only need to examine its security protocols. Here, we confine ourselves to discussing
the Exchange layer, and the manner in which Exchange achieves secure key agreement.
The protocol has two properties:
Safety: Only st-trusted members learn the group key.
Progress: Assuming the network remains stable for a sufficiently long period then all members
will eventually agree on the same group key.
We begin with a short discussion of the properties of the st-trust relation.
7.1 St-trust
St-trust is a distributed relation. When trust policies are stable for sufficiently long st-trust
becomes an equivalence relation consistent throughout U . It separates U into disjoint st-domains.
For the purpose of this section we assume no partitions. In such conditions all disjoint components
merge and form group components according to the trust relationship.
Components are set-wise contained in st-domains. For example, if a set of members S all trust
each other then S is an st-domain. Clearly, any component containing a single member of S will
include the rest. Hence, in this case we have a component equal to an st-domain.
It is possible for a component to be a proper subset of its st-domain. Assume p →t q and q →t s, but p and s do not trust each other directly, so {p, q, s} forms a single st-domain. The components are C_1 = {p, q} and C_2 = {s}, where the leaders are p and s. C_1 and C_2 will not merge since p and s do not trust each other.
7.2 Safety
Define a safe key to be one known only to members of a single st-domain. A key may serve as a
group key through several views.
The Exchange protocol may be invoked at any time; it has several stages, all of which may be
interrupted by failure. More specifically, the protocol is designed so that if a failure occurs, the
protocol can be restarted without risk of a possible security compromise.
We classify failure into two types: malicious and benign.
Benign failures: These are simple message loss scenarios of which there are three cases:
• An IamAlive(R p , nonce p ) message may not reach its destination.
• A Ticket(key q , nonce p ) may not reach its destination, or reach an untrusted member.
• The view installation phase at p may fail.
We must show that none of these occurrences breaches safety.
• If an IamAlive(R p , nonce p ) message does not reach other component leaders then p's component will not merge. While this may (temporarily) prevent components from merging together, it does not breach safety.
• A Ticket(key q , nonce p ) may not reach its destination, or reach an untrusted member. If the ticket gets lost then no information is revealed. The ticket can be opened by p alone, thus its capture by a dishonest member does not breach safety.
• If the view installation phase at p fails then only some of p's component members learn key q . All of p's component is st-trusted by q's component and only st-trusted members learn key q .
Malicious case: The adversary may try to send a corrupt message:
• IamAlive: the adversary will pretend to be some trusted process q, and send IamAlive(q, nonce q ).
Process p receives this and sends a Ticket(key p ; nonce q ) to q that only q can decrypt.
The adversary cannot make use of this ticket since it cannot unseal it.
• Ticket: the adversary may send p a ticket from some trusted process q. However, the
adversary cannot forge such Tickets, hence the ticket will be rejected.
Another form of attack is a replay attack. Resending old IamAlive messages will only
create new Ticket messages which the attacker cannot decrypt (as above). Replayed Ticket
messages are discarded since they contain a stale nonce.
The view-change protocol involves another step. Assume that two group components, A with
leader a and key key A , and B with leader b and key key B , are merging together. Assuming that key A
and key B are safe, then:
• Exchange: a passes key A to b, key A remains safe.
• Second step: b encrypts key A with key B and multicasts it to B. Since key B is safe, only
trusted members learn key A , hence key A remains safe.
If a partition occurs then new members learn nothing, hence the component key remains safe.
7.3 Progress
To show that the protocol makes progress, assume that no network faults occur for a sufficiently
long period, and that sent messages are received in timely fashion without loss or corruption.
Under such conditions, assume that in st-domain S all members are mutually trusted. If S is
split into two components A and B then eventually A's IamAlive messages will reach B, causing
B to send A its key and the components to merge. Hence, eventually, all of S's components will
merge.
8 The Rekey protocol
Occasionally we need to switch the group key. There may be several reasons for doing this:
Lifetime expiration: Symmetric encryption keys have a bounded lifetime in which they are
secure. After such time a dedicated adversary will be able to crack them. Currently,
Ensemble uses RC4 [20] as the default encryption mechanism, this is a relatively weak
encryption scheme with a 40bit key. Clearly, there are much stronger algorithms such as
DES [16], triple DES [16], and IDEA [18] that employ longer keys (56bits, 112bits, and
128bits respectively). A 128bit key would be safe for years, even using top of the line
machines to try cracking it. However, using a weak encryption key allows us to export the
Ensemble system from the USA while maintaining a reasonable level of security. It takes 64
MIPS-years to break an RC4 key, which we believe is more than what a casual eavesdropper
is willing to pay. Furthermore, RC4 is faster than stronger encryption algorithms. This
promotes efficient communication.
Dynamic Trust Policy: Members may dynamically change their trust policy. Thus, old members
may not be authorized to listen to current group conversations. This requires the
ability to switch the group key, preventing old members from eavesdropping.
OS vulnerability: We work with an existing OS that is not perfect and may be penetrated by
a persistent and knowledgeable intruder. The intruder may penetrate old group members
and discover the group key. In order not to rely on the assumption that old members will
erase their key, we need to switch the key after old members leave the group.
The Rekey protocol provides a way to securely and synchronously switch the communication
used by a group. Unlike the Exchange protocol where we use the old key to disseminate the
new key, here the new key is unrelated to the old key and we do not rely on the old key for its
dissemination. Thus, possession of the old group key does not allow discovery of the new key or
eavesdropping on current group conversations. We authenticate members using their public keys,
which we assume are never broken.
The Rekey protocol works as follows:
1. The leader chooses a new random key, unrelated to the old group key.
2. The leader seals the new key for each group member separately, and sends the sealed messages
point-to-point to the members.
3. A group member, upon receipt of the new key, sends an acknowledgment to the leader.
4. When the leader receives acknowledgments from all group members it starts a view change.
The new key will be installed in the router in the upcoming view.
If some members do not acknowledge the receipt of the new key, they may have crashed or
became partitioned by a network failure. A new view excluding the faulty members takes place,
and the old key stays unchanged. The user is notified that rekeying has failed and may ask
to rekey again. The second invocation is likely to succeed since the faulty members have been
removed.
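The leader's side of these four steps can be summarized in a few lines. The OCaml sketch below is only a schematic restatement under our own placeholder names (gen_key, seal, send_pt2pt, collect_acks and prompt_view_change stand for infrastructure provided by the surrounding layers); it is not Ensemble's Rekey layer.

  (* Basic Rekey, leader side: new random key, sealed point-to-point to every
     member, acknowledged, then installed via a view change. *)
  let rekey ~members ~gen_key ~seal ~send_pt2pt ~collect_acks ~prompt_view_change =
    let new_key = gen_key () in                        (* unrelated to old key *)
    List.iter (fun m -> send_pt2pt m (seal ~receiver:m new_key)) members;
    if collect_acks members
    then (prompt_view_change new_key; true)  (* key takes effect in next view *)
    else false    (* a member crashed or partitioned: the old key stays *)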
8.1 Authorization policies
When a Rekey is invoked, the leader checks that all current members are trusted. If not, it
removes the untrusted members from the view, installing a new (smaller) view. The Exchange
protocol will prevent untrusted members from rejoining the group. Rekey requires the correct
participation of untrusted members. It cannot exclude a Byzantine member.
The application may dynamically change its security policy. This entails the revocation of the
old group key to prevent old untrusted members from eavesdropping and altering current group
conversation. Thus, the Rekey protocol needs to be invoked by the application whenever a policy
change is performed.
8.2 Optimizing the Protocol
The protocol as described above is fairly slow because seal/unseal operations are very costly in
terms of CPU time and memory. For example, a single operation using PGP on a Pentium2
300Mhz takes about 0.25 seconds. Consider the latency of a rekey operation, taking into account only seal/unseal operations, in a group of 64 members. The leader performs 63 seal operations while each member performs another unseal operation prior to acknowledging the key. Thus the latency is (63 + 1) * 0.25s = 16 seconds.
Our protocol aims to be efficient and scalable. Hence we added the following optimizations:
• We spawn a process to perform the seal/unseal operation in the background. This removes
the expensive authentication service calls from the critical path so that the protocol stack
can keep running as usual. A similar optimization was used in Horus [21].
• To increase the scalability of the protocol we use a tree structure to disseminate the new key. The leader sends the new key to its children, who in turn pass it down the tree (Figures 6, 7). Each member sends an acknowledgment to its ancestor after it has collected acknowledgments from all its children (Figure 8). When the leader receives acknowledgments from all children it multicasts a RekeyDone message to the group and starts a new view (Figure 9); a small sketch of the tree bookkeeping follows this list.
• Using the tree structure, the latency of a rekey operation can be improved substantially. Suppose a binary tree is used. The cost for each level of the tree, except the first, is: the leader performs two seal operations, children perform an unseal operation each. This amounts to 3 operations, and the latency now becomes (log2 64) * 3 * 0.25s = 4.5 seconds.
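The tree bookkeeping promised above is small. The helper below is our own illustration, indexing members by their rank in the view with the leader at rank 0; it computes each member's parent, children, and the tree depth that enters the latency estimate.

  (* Implicit binary tree over member ranks 0 .. n-1, leader at rank 0. *)
  let parent rank = if rank = 0 then None else Some ((rank - 1) / 2)

  let children rank n =
    List.filter (fun c -> c < n) [ 2 * rank + 1; 2 * rank + 2 ]

  (* Depth of the tree, i.e. the number of seal/unseal rounds on the
     critical path of the optimized Rekey protocol. *)
  let rec depth n = if n <= 1 then 0 else 1 + depth (n / 2)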
The latency using a tree-structure as analyzed above is still high. To improve it, we introduced
secure channels. A secure channel between members p and q is created as follows:
1. p generates a random symmetric key k pq and seals it for q.
2. p sends the sealed key to q.
3. q acknowledges the receipt of k pq .
Henceforth any message sent on the secure channel is encrypted and signed using k pq . A secure
channel allows sending private information between two peers. In contrast, a group key allows
multicasting private information to the whole group.
We added a Secchan layer to Ensemble that handles a cache of secure channels. Whenever
private information needs to be passed from p to q, a secure channel between them is created if
one does not already exist. The operation to create a secure channel is expensive: it takes two
seal/unseal operations which cost approximately 0.5 seconds on our test platform. On the other
hand, the next private message between p and q will be encrypted with symmetric key k pq , a
much quicker operation lasting a couple of microseconds.
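A Secchan-style cache is essentially a map from peer name to an established symmetric key, populated lazily and evicted on policy changes or periodic refresh. The sketch below is our own minimal illustration of this idea (Peers, chan, gen_key and seal are assumed names, and the creation timestamp uses the standard Unix module); it is not the actual Secchan layer.

  module Peers = Map.Make (String)

  type chan = { key : string; created : float }

  (* Return the cached channel to [peer], or create one: generate a fresh
     symmetric key and hand back the sealed copy that must travel to the peer. *)
  let get_channel ~seal ~gen_key cache peer =
    match Peers.find_opt peer !cache with
    | Some ch -> (ch, None)
    | None ->
        let ch = { key = gen_key (); created = Unix.gettimeofday () } in
        cache := Peers.add peer ch !cache;
        (ch, Some (seal ~receiver:peer ch.key))

  (* Policy changes and periodic refresh simply evict entries. *)
  let evict cache peer = cache := Peers.remove peer !cache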
Using this cache, a typical group rekey operation will run much quicker. Assuming a static
view V , the first Rekey invocation will run several seconds since new channels must be set up 11 .
The next invocation will run much faster since secure channels have already been set up. If we
examine the 64 member case, and focus on the actual rekeying performed (without the following
view installation), then the latency is:
• Ensemble achieves a latency of around 1.2ms between members of a LAN. The average latency of a reliable multicast is the same.
• Sending the information down and back up the tree costs 1.2ms * 2 * (tree depth) = 2.4ms * log2(64) = 14.4ms.
• To this we must add the latency of a reliable multicast, for a total of 14.4ms + 1.2ms = 15.6ms.
All in all, the latency is less than 20ms, orders of magnitude less than the latency of the initial implementation.
Channels are refreshed periodically to avoid exposure to cracking. Whenever a rekey operation
is invoked we discard any channels that violate the authorization policy.
11 Note that we do not need set up secure channels between each pair of members in the group. Only those pairs
corresponding to the edges in the dissemination tree need be set up.
Figure 6: Leader sends the new group key, k, down the tree. E_p and E_q are sealed electronic envelopes containing k for members p and q, respectively.
Figure
7: Upon receipt, p and q in turn pass k down their subtrees.
9 Performance
The division of work between the Exchange/Rekey layers and the Encrypt/Routers is computationally
efficient. During normal run time we use the symmetric key which is fast and uses little
memory. On the relatively rare occasion of a merge or a requested rekey we use the authentication
service. These tickets require more computation and memory: one typically uses 1024-bit RSA keys or waits for RPC-style calls to an authentication server.
With one exception, measurements were taken on PentiumPro 200Mhz machines running the
MOSIX operating system [22] connected with 2.5Gbit/sec Myrinet. Current OS and communication
stack do not achieve maximal hardware performance.
In
Figure 10
we depict the latency of an Ensemble stack. The numbers are given for a send/recv
operation: a message arrives at the stack and then is handed to the application which sends an
immediate response. The latency is measured from message arrival to message departure. The
X-axis measures message size in bytes and the Y-axis time in seconds. As we can see, latency is
constant for all message sizes with a regular stack. This is because the stack does not process
message content at all. Basic latency increases for Authentication and Privacy stacks since they
have not been as aggressively optimized as the regular stack. They also initialize encryption and
Figure
8: Acknowledgments climb up the tree
Figure
9: Once the leader receives acknowledgments from all its children, it multicasts a Rekey-
Done message to the group.
authentication contexts and allocate and add 16bytes of signature space to each message. For such
stacks latency also increases as a function of message size since MD5 hashing and RC4 encryption
pass over message content. The theoretic processing time for an x-byte message is roughly x / r_MD5 for the authenticated stack and x / r_MD5 + x / r_RC4 for the private stack, where r_MD5 and r_RC4 are the per-byte rates listed in Table 2. Disregarding the initial costs of encryption and hashing, these linear lines asymptotically approach the measured latencies.
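Stated as code, the linear model is just the following pair of helpers; the rate parameters (bytes per microsecond, as in Table 2) and the names are our own.

  (* Predicted per-message processing time, in microseconds, for x bytes. *)
  let auth_cost ~md5_rate x = float_of_int x /. md5_rate
  let priv_cost ~md5_rate ~rc4_rate x =
    auth_cost ~md5_rate x +. float_of_int x /. rc4_rate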
Table
2: MD5 and RC4 performance on different CPUs. Performance is measured by the number
of bytes processed in a microsecond.
RC4 2.03 7.52 12.45
Figure
10: Latency of a send/recv operation using a regular stack, authenticated stack and a
private authenticated stack. The X-axis shows message size in bytes, and the Y-axis shows latency in seconds (the multiplier is 10^-4).
We also tested achievable throughput with Ensemble, as shown in Table 3. We ran an Ensemble
application on two machines; one is chosen as leader and it sends as many 1000-byte messages to the other member as possible. The maximal achievable throughput using a regular stack is 3330 Kbyte/sec. As we add authentication, throughput drops to 2850 Kbyte/sec. When we add encryption, throughput drops to 1600 Kbyte/sec. The bottleneck is the CPU. The third column
in the table shows the amount of CPU time per second used for encryption and verification.
Table
3: Ensemble throughput with different stacks. As authentication and encryption are added
to the stack, performance drops due to the increasingly heavy CPU load.
Stack          Kbyte/sec   CPU time/sec
Regular        3330
Auth           2850        0.132
Auth+Privacy   1600        0.286
Next, we measured the latency of the rekey operation. We used 8 PentiumPro 200Mhz machines and several Pentium machines; the Pentiums are inter-connected using a 10Mbit/sec shared hub and reach the PentiumPros through a proxy. For each group size we performed 300 rekey operations in succession. Time was measured from the initiation of a rekey to the installation of the new view. In the same manner, we measured the time required for a view
change. Since a rekey includes a view change, the added cost of rekeying is the difference between
the two measurements. Figure 11 depicts times for groups of size 2 to 12 where each process is
on a different machine. The difference between the two lines grows logarithmically as group size
increases. Rekeying using the binary tree we employ for dissemination costs 2 * depth * (point-to-point latency) + (multicast latency). In our setting, the point-to-point latency is 1.2ms, and for a group of 12 members this should be 2 * 3 * 1.2ms + 1.2ms = 8.4ms.
Figure 11: Latency of a rekey operation compared with the latency of a view change. The X-axis shows the number of members, and the Y-axis shows latency measured in seconds.
10 Related Work
Ensemble is descended from an earlier system named Horus, itself descended from the Isis system
[23]. Early work on group communication security was performed in Horus [24, 21]. Our
work extends the Horus security architecture but differs in many ways. We added support for
multiple partitions (Horus permitted progress only in the primary partition), group rekey upon
demand, application defined security policies, and plugged in off-the-shelf authentication systems.
A group communication system designed for Computer Supported Collaborative Work (CSCW)
applications has been built in the university of London [25]. In the context of CSCW, objects
and files are typically shared between applications. As such, different applications are allowed
to perform different operations on shared objects. To enforce these restrictions the most trusted
member of the group is chosen as leader. Any message a member wishes to multicast is forwarded
to the leader. The leader filters all such messages: discards the malicious ones, enforces the shared
objects security policy, and multicasts all legal messages. This work however is still in a preliminary
stage, and at the time of this writing it does not provide for leader failure. Furthermore,
the project is oriented only towards CSCW applications.
Rampart [13] is a group communication system built in AT&T which is resistant to Byzantine
attacks. Up to a third of the members in a Rampart group may behave in Byzantine manner yet
the group would still provide reliable multicast facilities. A system providing similar guarantees
has been built in the university of Santa-Barbara in California [14]. Byzantine security is rather
costly however, and it is difficult to develop applications resistant to such faults. We chose not to
support such a fault model in Ensemble.
Other works in the IP multicast security area includes [26, 27, 28]. These papers describe
the management of session keys for (very) large groups, such that the infrastructure required
is scalable and efficient. Recent work [29, 12, 30] has dealt with the efficient rekeying of large
multicast groups. IP multicast is concerned mainly with one-to-many multicast, where a single
application multicasts to many clients whose membership is dynamic and not necessarily known.
Ensemble is concerned mainly with many-to-many multicasts where any member may multicast
to the group and where membership is known. In secure IP multicast, trusted centralized servers
may be used to disseminate group keys; in Ensemble, which possesses a completely distributed
architecture, no such single point of failure is required.
11 Conclusions
We have developed a security architecture for Ensemble, which supports multiple partitions (not
just primary partition), group rekeying upon demand, application-specific security policies and
off-the-shelf authentication. Our software is freely available as part of the Ensemble project. We
believe that ours is the first freely available secure group communication system, and the highest
performance secure system available at the time of this writing.
12 Acknowledgments
We would like to thank Tal Anker for improvements to the optimized Rekey protocol, Yaron
Minsky for helping develop the Exchange protocol and for insightful comments, and Idit Keidar
for helpful reviews.
--R
"View Synchronous Communication in Large Scale Networks,"
"Transis: A Communication Sub-System for High Availability,"
"Fast Message Ordering and Membership using a Logical Token-Passing Ring,"
"Partitionalbe Group Membership: Specification and Algorithms,"
"A high perfomance totally ordered multicast protocol,"
"Building adaptive systems using ensemble,"
"Horus, a flexible group communication system,"
Reliable Distributed Computing with the Isis Toolkit
"Exploiting virtual synchrony in distributed systems,"
"The Objective Caml system release 1.07,"
The Ensemble System
"Secure group communication using key graphs,"
"Secure agreement protocols: Reliable and atomic group multicast in rampart,"
"The securering protocols for securing group communication,"
"The md5 message digest algorithm,"
"Data encryption standard,"
"Kerberos: An authentication service for computer networks,"
"Markov ciphers and differential cryptanalysis,"
"Hmac: Keyed-hashing for message authentica- tion,"
"A stream cipher encryption algorithm,"
"A security arcihtecture for fault-tolerant systems,"
"The mosix multicomputer operating system for high performance cluster computing,"
The ISIS System Manual
"Integrating security in a group oriented distributed system,"
"Secure group communication for groupware applications,"
"Scalable multicast key distribution,"
"Group key management protocol architecture,"
"Group key management protocol specification,"
"Iolus: A framework for scalable secure multicasting,"
"Multicast security: A taxonomy and efficient constructions,"
--TR
Fault tolerance in networks of bounded degree
Entity authentication and key distribution
Secure agreement protocols
Totem
Horus
Iolus
Fault-Tolerant Meshes with Small Degree
Secure group communications using key graphs
Bimodal multicast
A review of experiences with reliable multicast
Simple and fault-tolerant key agreement for dynamic collaborative groups
Reliable Distributed Computing with the ISIS Toolkit
An Information Theoretic Analysis of Rooted-Tree Based Secure Multicast Key Distribution Schemes
Authorization and Attribute Certificates for Widely Distributed Access Control
A High Performance Totally Ordered Multicast Protocol
ISAAC
The Design and Architecture of the Microsoft Cluster Service - A Practical Approach to High-Availability and Scalability
The SecureRing Protocols for Securing Group Communication
Fast Replicated State Machines Over Partitionable Networks
CLIQUES
Secure Group Communication in Asynchronous Networks with Failures
The State Machine Approach: A Tutorial
Building Adaptive Systems Using Ensemble
A Study of Group Rekeying
A Scalable Framework for Secure Multicast
Partitionable Group Membership: Specification and Algorithms
The ensemble system
--CTR
Eunjin Jung , Alex X. Liu , Mohamed G. Gouda, Key bundles and parcels: secure communication in many groups, Computer Networks: The International Journal of Computer and Telecommunications Networking, v.50 August 2006
Yair Amir , Cristina Nita-Rotaru , Jonathan Stanton , Gene Tsudik, Secure Spread: An Integrated Architecture for Secure Group Communication, IEEE Transactions on Dependable and Secure Computing, v.2 n.3, p.248-261, July 2005
Randal Burns, Fastpath Optimizations for Cluster Recovery in Shared-Disk Systems, Proceedings of the 2004 ACM/IEEE conference on Supercomputing, p.5, November 06-12, 2004
Yair Amir , Yongdae Kim , Cristina Nita-Rotaru , John L. Schultz , Jonathan Stanton , Gene Tsudik, Secure Group Communication Using Robust Contributory Key Agreement, IEEE Transactions on Parallel and Distributed Systems, v.15 n.5, p.468-480, May 2004
Miguel Correia , Nuno Ferreira Neves , Lau Cheuk Lung , Paulo Verssimo, Worm-IT - A wormhole-based intrusion-tolerant group communication system, Journal of Systems and Software, v.80 n.2, p.178-197, February, 2007
Yair Amir , Yongdae Kim , Cristina Nita-Rotaru , Gene Tsudik, On the performance of group key agreement protocols, ACM Transactions on Information and System Security (TISSEC), v.7 n.3, p.457-488, August 2004
Emmanuel Bresson , Olivier Chevassut , David Pointcheval, Provably secure authenticated group Diffie-Hellman key exchange, ACM Transactions on Information and System Security (TISSEC), v.10 n.3, p.10-es, July 2007 | security;group communication |
502094 | Efficient generation of shared RSA keys. | We describe efficient techniques for a number of parties to jointly generate an RSA key. At the end of the protocol an RSA modulus N = pq is publicly known. None of the parties knows the factorization of N. In addition, a public encryption exponent is publicly known and each party holds a share of the private exponent that enables threshold decryption. Our protocols are efficient in computation and communication. All results are presented in the honest but curious scenario (passive adversary). | Introduction
We present efficient protocols for a number of parties to jointly generate an RSA modulus N = pq
where p, q are prime. At the end of the computation the parties are convinced that N is indeed a
product of two large primes. However, none of the parties knows the factorization of N. We then
show how the parties can proceed to compute a public exponent e and shares of the corresponding
private exponent. Our techniques require a number of steps, including a new distributed primality
test. The test enables two (or more) parties to test that a random integer N is a product of two
large primes without revealing the primes themselves.
Several cryptographic protocols require an RSA modulus for which none of the participants
knows the factorization. A good example is the original Fiat-Shamir authentication protocol
[19], where all parties use the same modulus N but none of them knows its factorization. For
other examples see [18, 23, 28, 30, 31]. Usually a modulus N with an unknown factorization is
obtained by asking a dealer to generate it. The dealer must be trusted not to reveal the factorization
of N. Our results eliminate the need for a trusted dealer, since the parties can generate the
modulus N themselves.
Threshold cryptography is a concrete example where shared generation of RSA keys is very
useful. We give a brief motivating discussion and refer to [24] for a survey. Let N = pq be an
RSA modulus and d, e be signing and verification exponents respectively, i.e., de ≡ 1 (mod φ(N)). A
threshold RSA signature scheme involves k parties and enables any subset of t of them to generate
an RSA signature of a given message. No subset of t − 1 parties can generate a signature. Unlike
standard secret sharing [35], the signature is generated without having to reconstruct the private
decryption exponent d. A simple approach for obtaining a k-out-of-k threshold signature scheme is
as follows [20]: pick random d_i's satisfying d = d_1 + ... + d_k and give d_i to party i. Abstractly,
to sign a message m each party computes s_i = m^(d_i) mod N and sends s_i to a combiner (that holds
no secrets). The combiner multiplies the s_i's and obtains the signature. This way the parties are
able to generate a standard RSA signature without having to reconstruct the private key d at a
single location. This is clearly advantageous for securing a sensitive private RSA key such as the
one used by a Certificate Authority. Constructions providing t-out-of-k RSA threshold signature
schemes can be found in [16, 13, 21, 34].
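The additive k-out-of-k sharing described above is simple enough to demonstrate directly. The following Python sketch uses small, hypothetical toy parameters (a real deployment would use a full-size modulus and a secure source of randomness); it only illustrates the share-and-combine arithmetic, not a hardened implementation.

```python
import random

# Toy RSA parameters (hypothetical; far too small for real use).
p, q = 1019, 1151
N = p * q
phi = (p - 1) * (q - 1)
e = 17
d = pow(e, -1, phi)                  # private exponent (Python 3.8+)

k = 3                                # number of parties
# Dealer-style k-out-of-k additive sharing: d_1 + ... + d_k = d (mod phi).
shares = [random.randrange(phi) for _ in range(k - 1)]
shares.append((d - sum(shares)) % phi)

# Each party signs with its own share; a combiner multiplies the partial results.
m = 42                               # message, coprime to N in this toy example
partials = [pow(m, d_i, N) for d_i in shares]
sig = 1
for s_i in partials:
    sig = (sig * s_i) % N

assert sig == pow(m, d, N)           # matches the ordinary RSA signature m^d mod N
```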
An important issue left out of the above discussion is the initial generation of the RSA modulus
N and the shares d i . Traditionally the modulus N and the shares of the private key are assumed
to be generated by a trusted dealer. Clearly, the dealer, or anyone who compromises the dealer,
can forge signatures. Our results eliminate the need for a trusted dealer since the k parties can
generate N and the private shares themselves. Such results were previously known for the ElGamal
public key system [32], but not for RSA.
The paper is organized as follows: we give a high-level description of our protocol in Section 2.
In Sections 3-5 we explain the various stages for generating a modulus with an unknown
factorization. In Section 6 we describe protocols that, given a public exponent e, generate shares
of the corresponding private key d = e^(-1) mod φ(N). To achieve fault tolerance one often shares d
using a t-out-of-k scheme. That is, any subset of t users can apply the private key d. We explain
how a t-out-of-k sharing of d is obtained in Section 6.3. We discuss various practical improvements
of our protocol in Section 7.
We note that generic secure circuit evaluation techniques, e.g. [38, 26, 3, 9], can also be used
to generate shared RSA keys. After all, a primality test can be represented as a boolean circuit.
However, such general techniques are too inefficient.
1.1 Communication and privacy model
The communication and privacy model assumed by our protocol are as follows:
Full connectivity Any party can communicate with any other party.
Private and authenticated channels Messages sent from party A to party B are private and
cannot be tampered with en route. This can be achieved by having A and B share a secret
which they use for encryption, integrity and authentication.
Honest parties We assume all parties honestly follow the protocol. At the end of the
protocol no threshold of parties has enough information to factor the generated modulus.
This is often called the honest but curious scenario. In Section 8 we discuss some results
showing how this assumption can be relaxed. Recently, Frankel, MacKenzie and Yung [22]
showed how our protocol can be made robust against a minority of malicious parties.
Collusion Our protocol is ⌊(k−1)/2⌋-private. That is, any coalition of size at most ⌊(k−1)/2⌋ learns no
information about the factorization of N = pq. However, a larger coalition may be able to
recover the factorization. The reason for this bound is our reliance on the BGW [3] protocol
in one step of our algorithm. The BGW protocol provides information-theoretic security and
hence is limited to achieving ⌊(k−1)/2⌋ privacy (in the honest parties model). We note that it
is possible to achieve k − 1 privacy by replacing the BGW step with a heuristic protocol
due to Cocks [10]. Unfortunately, Cocks' protocol is by far slower than the BGW method.
Furthermore, its security is based on a heuristic argument.
To prove the security properties (i.e., privacy) satisfied by our protocol we provide a simulation
argument for each of its components.
2 Overview
In this section we give a high-level overview of the protocol. The k parties wish to generate a shared
RSA key. That is, they wish to generate an RSA modulus N = pq along with a public/private pair of
exponents e, d with de ≡ 1 (mod φ(N)). The factors p and q should be at least n bits each. At
the end of the computation N and e are public, and d is shared between the k parties in a way
that enables threshold decryption. All parties should be convinced that N is indeed a product of
two primes, but no coalition of at most ⌊(k−1)/2⌋ parties should have any information about the
factors of N.
At a high level the protocol works as follows:
(1) pick candidates: The following two steps are repeated twice.
(a) secret choice: Each party i picks a secret n-bit integer p_i and keeps it secret.
(b) trial division: Using a private distributed computation the k parties determine that
p = p_1 + ... + p_k is not divisible by any prime less than some bound B_1. Details are
given in Section 5. If this step fails, repeat step (a).
Denote the secret values picked at the first iteration by p_1, ..., p_k, and at the second iteration
by q_1, ..., q_k.
(2) Using a private distributed computation the k parties compute
N = (p_1 + ... + p_k)(q_1 + ... + q_k).
Other than the value of N, this step reveals no further information about the secret values
p_1, ..., p_k, q_1, ..., q_k. Details are given in Section 4.
Now that N is public, the k parties can perform further trial divisions and test that N is not
divisible by small primes in the range [B_1, B_2].
(3) primality test: The k parties engage in a private distributed computation to test that N is
indeed the product of two primes. If the test fails, then the protocol is restarted from step 1.
We note that the primality test protocol is k − 1 private and applies whenever two (or more)
parties are involved. Details are in Section 3.
(4) Given a public encryption exponent e, the parties engage in a private distributed
computation to generate a shared secret decryption exponent d. Details are in
Section 6.
Notation Throughout the paper we adhere to the following notation: the RSA modulus is denoted
by N and is a product of two n-bit primes p, q. When p = p_1 + ... + p_k we denote by p_i the share in
possession of party i. Similarly for q_i. When the p_i's themselves are shared among the parties we
denote by p_{i,j} the share of p_i that is sent to party j.
Performance issues Our protocol generates two random numbers p and q and tests that N = pq is a
product of two primes. By the prime number theorem the probability that both p and q are prime is
asymptotically 1/n^2. Therefore, naively one has to perform n^2 probes on average until a suitable N
is found. This is somewhat worse than the expected 2n probes needed in traditional generation of an
RSA modulus (one first generates one prime using n probes and then a second prime using another
n probes). This n/2 degradation in performance is usually unacceptable (typically n is several hundred bits).
Fortunately, thanks to trial division things aren't so bad. Our trial division (Step 1b) tests each
candidate prime individually. Therefore, to analyze our protocol we must analyze the effectiveness of trial
division. Suppose a random n-bit number p passes the trial division test where all primes less than
the bound B_1 are tested. How likely is p to be prime? Using a classic
result due to Mertens, DeBruijn [12] shows that asymptotically the probability that p is prime,
given that it has no prime factors below B_1, grows proportionally to (ln B_1)/n.
Hence, for the parameters we use, the probability that p is prime is
approximately 1/22. Consequently, traditional RSA modulus generation requires 44 probes while
our protocol requires 484 probes. This eleven-fold degradation in performance is unfortunate, but
manageable. We discuss methods to avoid this slowdown in Section 7.
Generation of shares In step (1) of the protocol each party uniformly picks a random n-bit
integer p_i as its secret share. The prime p is taken to be the sum of these shares. Since the sum of
uniform independent random variables is not uniformly distributed, p is picked from a distribution
with slightly less entropy than uniform. We show that this is not a problem. The sum p = p_1 + ... + p_k is
at most an (n + log k)-bit number. One can easily show that p is chosen from a distribution with at
least n bits of entropy (since the n least significant bits of p are a uniformly chosen n-bit string).
These log k bits of "lost" entropy cannot help an adversary, since they can be easily guessed (the
number of parties k is small, certainly k < n). This is formally stated in the next lemma.
A second issue is the fact that the shares p_i themselves leak some information about the factors
of N. For instance, party i knows that p > p_i. We argue that this information does not help an
adversary either.
The two issues raised above are dealt with in the following lemma. Let Z^(2)_n be the set of RSA
moduli N = pq that can be output by our protocol above when k parties are involved. We assume
k < log N.
Lemma 2.1 Suppose there exists a polynomial time algorithm A that, given a random N in Z^(2)_n
chosen from the distribution above and the shares of k − 1 of the parties, factors N with probability
at least 1/n^d. Then there exists an expected polynomial time algorithm B that factors a 1/(4k^3 n^d) fraction of
the integers in Z^(2)_n.
Assuming the hardness of factoring, the lemma shows that even an adversary who is given N
and the private shares of k − 1 parties cannot factor the modulus N generated by the protocol.
The proof of the lemma is somewhat tedious and is given in Appendix B so as not to distract the
reader from the main thrust of the paper.
3 Distributed primality test
We begin the detailed discussion of the protocol with the distributed primality test (step 3). Party
i has two secret n-bit integers p_i and q_i. All parties know N, where N = (p_1 + ... + p_k)(q_1 + ... + q_k).
The parties wish to determine if N is the product of two primes without revealing any information about the factors
of N. We refer to this test as a distributed primality test. Our primality test is a probabilistic
test [36, 33] carried out in both Z*_N and a quadratic extension of Z_N.
Throughout the section we assume that p ≡ q ≡ 3 (mod 4) (hence the resulting N is a
Blum integer). This can be arranged ahead of time by having party 1 pick shares
p_1 ≡ q_1 ≡ 3 (mod 4), while all other parties pick shares p_i ≡ q_i ≡ 0 (mod 4).
Before describing the test we briefly discuss the structure of the quadratic extension of Z_N we
will be using. We will be working in the "twisted group" T_N. Suppose all
prime factors of N are 3 mod 4. In this case, x^2 + 1 is irreducible in Z_N[x] and Z_N[x]/(x^2 + 1) is
a quadratic extension of Z_N. A linear polynomial ax + b represents an element of T_N when it is
invertible in this ring (equivalently, its norm a^2 + b^2 is relatively prime to N). It follows that elements of T_N can be viewed as linear polynomials,
where two linear polynomials f, g in Z_N[x] represent the
same element of T_N if f = cg for some c in Z*_N. We note that elements of T_N can also be viewed
as points on the projective line over Z_N.
Distributed primality test:
Step 1: The parties agree on a random g in Z*_N. The value g is known to all k parties.
Step 2: Party 1 computes the Jacobi symbol of g over N. If (g/N) is not equal to 1, the protocol is restarted at
step (1) and a new random g is chosen.
Step 3: Otherwise, party 1 computes v_1 = g^((N - p_1 - q_1 + 1)/4) mod N. All other parties compute
v_i = g^((p_i + q_i)/4) mod N. The parties exchange the v_i values with each other and verify that
v_1 ≡ ± (v_2 · v_3 · · · v_k) (mod N).
If the test fails then the parties declare that N is not a product of two primes.
Step 4: The parties perform a Fermat test in the twisted group T_N.
To carry out the Fermat test in T_N the parties pick a random h in T_N. Party 1 computes
u_1 = h^(N + p_1 + q_1 + 1) in T_N, while every other party i computes u_i = h^(p_i + q_i). The parties then exchange the u_i
values with each other and verify that
u_1 · u_2 · · · u_k = 1 in T_N.
If the test fails N is rejected. Otherwise they declare success.
Since p_1 ≡ q_1 ≡ 3 (mod 4) and p_i ≡ q_i ≡ 0 (mod 4) for i > 1, the exponents in the computation of
the v_i's (Step 3) are guaranteed to be integers after division by 4. The correctness and privacy of
the protocol is proved in the next two lemmas.
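To make Steps 1-3 concrete, here is a minimal Python sketch of the Z*_N part of the test for a genuine modulus, using hypothetical toy shares arranged so that p ≡ q ≡ 3 (mod 4) as described above. It simulates all parties locally (so it is not a distributed implementation) and only illustrates the arithmetic behind the check v_1 ≡ ±(v_2 · · · v_k) (mod N).

```python
import random

def jacobi(a, n):
    # Jacobi symbol (a/n) for odd n > 0, via the standard reciprocity algorithm.
    assert n > 0 and n % 2 == 1
    a %= n
    result = 1
    while a != 0:
        while a % 2 == 0:
            a //= 2
            if n % 8 in (3, 5):
                result = -result
        a, n = n, a
        if a % 4 == 3 and n % 4 == 3:
            result = -result
        a %= n
    return result if n == 1 else 0

# Toy shares (hypothetical): p = sum(p_shares), q = sum(q_shares) are prime,
# with p_1, q_1 = 3 (mod 4) and p_i, q_i = 0 (mod 4) for i > 1.
p_shares = [519, 300, 200]      # sum = 1019, a prime congruent to 3 mod 4
q_shares = [451, 400, 300]      # sum = 1151, a prime congruent to 3 mod 4
k = 3
N = sum(p_shares) * sum(q_shares)

# Steps 1-2: agree on a g with Jacobi symbol (g/N) = 1.
while True:
    g = random.randrange(2, N)
    if jacobi(g, N) == 1:
        break

# Step 3: each party exponentiates using only its own shares.
v = [pow(g, (N - p_shares[0] - q_shares[0] + 1) // 4, N)]              # party 1
v += [pow(g, (p_shares[i] + q_shares[i]) // 4, N) for i in range(1, k)]

prod_rest = 1
for vi in v[1:]:
    prod_rest = (prod_rest * vi) % N
assert v[0] in (prod_rest, (-prod_rest) % N)   # the check passes for a genuine N = pq
```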
Lemma 3.1 Let N = pq be an integer with p ≡ q ≡ 3 (mod 4). If N is a product of two distinct
primes then success is declared in all invocations of the protocol. Otherwise, the parties declare that
N is not a product of two primes with probability at least 1/2 (over the random choice of g and h).
Proof Observe that in step (3) of the protocol we test that v_1 ≡ ±(v_2 · · · v_k) (mod N), which amounts
to testing g^((N - p - q + 1)/4) ≡ ±1 mod N.
Suppose p and q are distinct primes. In step (2) we verify that (g/N) = 1. This implies
(g/p) = (g/q). Also, since (p-1)/2 and (q-1)/2 are odd we have
g^((p-1)(q-1)/4) ≡ (g/p)^((q-1)/2) ≡ (g/p) (mod p),
g^((p-1)(q-1)/4) ≡ (g/q)^((p-1)/2) ≡ (g/q) (mod q).
Since (g/p) = (g/q), it follows that g^(φ(N)/4) ≡ ±1 mod N. Since p, q
are prime, it follows that the test in Step 3 always succeeds.
Similarly, we show that when p and q are distinct primes the test in Step 4 succeeds. Since
p ≡ q ≡ 3 (mod 4), the polynomial x^2 + 1 has no root in F_p or F_q. Therefore, F_p[x]/(x^2 + 1)
and F_q[x]/(x^2 + 1) are quadratic extensions of F_p and F_q respectively. It follows that the group
T_p is of order p + 1. Similarly, |T_q| = q + 1. By the Chinese Remainder
Theorem, |T_N| = (p + 1)(q + 1) = N + p + q + 1. We conclude that all h in T_N satisfy
h^(N + p + q + 1) = 1. Consequently, the test of Step 4 always succeeds.
To prove the converse, suppose at least one of p; q is not prime. That is,
ds s is a
non-trivial factorization of N with
to be the exponent used in step (3). Note that e is odd since p q 3 mod 4. Dene the following
two subgroups of Z
To prove the lemma we show that jHj 1jGj. Since H is a subgroup of G it su-ces to prove proper
containment of H in G, i.e. prove the existence of g 2 G n H. There are four cases to consider.
Case 1. Suppose s 3. Let r 2 ; r 3 be distinct prime factors of N such that d (such a
must exist by the pigeonhole principle). Let r 1 be a prime factor of N distinct from both
r 2 and r 3 and let a be a quadratic non-residue modulo r 3 . Dene g 2 ZN to be an element
satisfying
a mod r 3 if
and 3. Observe that g 2 G. Since e is odd
Consequently, g e 6= 1 mod N i.e. g 62 H.
Case 2. Suppose gcd(p; q) > 1. Then there exists an odd prime r such that r divides both p and
q. Then r 2 divides N implying that r divides (N ). It follows that in Z
there exists an
element g of order r. Since r is odd we have g
G. Since
r divides both p and q we know that r does not divide N Consequently
implying that g e 6= 1 mod N . Hence, g 62 H.
Case 3. The only way does not fall into both cases above is if
1 and
are distinct primes and at least one of d 1 ; d 2 is bigger than 1 (Case 2 handles N that
are a prime power By symmetry we may assume d 1 > 1. Since Z
p is a cyclic
group of order r d 1 1
contains an element of order r d 1 1
1 . It follows that Z
N also
contains an element g of order r d 1 1
1 . As before, g
G. If q 1 is not divisible by r d 1 1
1 . Consequently, g 4e 6= 1 mod N , i.e. g 62 H.
Case 4. We are left with the case
above and q
1 . Since we know that r 4. In this case it may
indeed happen that G. We show that in this case, Step (4) of the primality test will fail
with probability at least half (over the choice of h 2 TN ).
Dene the group H 1g. We show that jH 0 j 1jT N j. Since H 0
is a subgroup of TN it su-ces to prove proper containment, i.e. we must exhibit an element
1 the group T p has order r d 1 1
1). Therefore it contains an
element h of order r 1 . It follows that there exists an element w 2 TN of order r 1 . Since by
assumption q 1 mod r 1 we know that r 1 does not divide q + 1. Hence r 1 does not divide
N+p+q+1 and therefore w completing
the proof of the lemma.
We note that Step (4) of the protocol is needed to filter out integers that fall into Case 4 above.
Indeed, such integers will pass steps (1)-(3). For example, consider integers
and prime. For these integers we have are
as dened in the body of the proof above. Consequently, such integers always pass steps (1)-(3)
even though they are not a product of two distinct primes. However, they will fail step (4). In
Section 3.1 we give an alternate approach for ltering out integers that fall into Case 4.
The following lemma shows that when N is the product of two distinct primes, the primality
test protocol reveals no other information about the private shares of the participants.
Lemma 3.2 Suppose p, q are prime. Then any coalition of k − 1 parties can simulate their view
of the primality testing protocol. Consequently, the protocol is k − 1 private.
Proof Since p; q are prime we know that v
where the v i 's are dened as in
step (3) of the protocol. Also,
the u i 's are dened in step (4). Let U be a coalition
of k 1 parties. Say party m is not a member of the coalition. The coalition's view consists of
We construct a simulator of the coalition's view as follows: the simulator is given
for (this is the coalition's input when the protocol is started). To simulate the
coalition's view, the simulator rst picks a random g 2 Z
N with g
It then computes the u using the values it is given as input. Next the
simulator must generate um Generating um is trivial since um = (
i6=m Generating v m
is a bit harder. From the proof of Lemma 3.1 it follows that
That is, the sign of v
is a quadratic residue modulo N . To simulate v m
the simulator computes
It then
ips an unbiased coin and sets v
accordingly. The resulting distribution on v m is computationally indistinguishable from
the true distribution, assuming the hardness of quadratic residuosity modulo a Blum integer. To
conclude, given that N is the product of two distinct primes, the set of values hu produced
by the simulator is computationally indistinguishable from a real transcript. Consequently,
the coalition learns no information other than what it had at the beginning of the protocol.
We note that step (2) of the protocol is crucial. Without it the condition of step (3) might
fail (and reveal the factorization) even when p and q are prime. We also note that in practice the
probability that a non-RSA modulus passes even one iteration of this test is actually far less than
one half.
3.1 An alternative to Step (4)
Step (4) of the distributed primality protocol is necessary to filter out integers that fall into Case 4
in the proof of Lemma 3.1. We describe an alternative, simpler approach to filter out such integers.
It requires less computation, although there is more communication between the parties.
Observe that if N falls into Case 4 then gcd(N, p + q − 1) > 1. The alternative to Step (4) is to
directly test this condition. To test this condition with no information leakage the parties do the
following: each party picks a random r_i in Z_N and keeps it secret. Using the protocol of the next
section they compute
z = (p + q − 1) · (r_1 + ... + r_k) (mod N)
without leaking any information about the private shares. Finally, the parties check if
gcd(z, N) > 1. If so, N is rejected. Using the BGW method (Section 4.1) this approach is ⌊(k−1)/2⌋
private. All N that fall into Case 4 are rejected. All N that are a product of two distinct primes pass
the test with overwhelming probability. We note that this alternate test eliminates a few valid RSA
moduli, i.e., moduli N = pq with p, q prime and gcd(N, p + q − 1) > 1.
4 Distributed computation of N
Next we describe the computation of N. Each party has a secret pair p_i, q_i. They wish to make the
product N = (p_1 + ... + p_k)(q_1 + ... + q_k) publicly available without revealing any information about their private shares
beyond what is revealed by the knowledge of N.
4.1 The BGW method
Ben-Or, Goldwasser and Wigderson [3] describe an elegant protocol for private evaluation of general
functions for three or more parties. Their full technique is an overkill for the simple function we
have in mind. We adapt their protocol to the computation at hand so as to minimize the amount
of computation and communication between the parties. From here on, let P > N be some prime.
Unless otherwise stated, all arithmetic operations are done modulo P. The protocol works as
follows:
Step 1: Let l = ⌊(k−1)/2⌋. For all 1 ≤ i ≤ k, party i picks two random degree-l polynomials
f_i, g_i in Z_P[x] satisfying f_i(0) = p_i and g_i(0) = q_i. In other words, the constant terms of f_i and g_i
are set to p_i, q_i and all other coefficients are chosen at random in Z_P. Similarly, party i picks
a random degree-2l polynomial h_i in Z_P[x] satisfying h_i(0) = 0.
Step 2: For all 1 ≤ i ≤ k, party i computes the 3k values:
p_{i,j} = f_i(j), q_{i,j} = g_i(j), h_{i,j} = h_i(j) for j = 1, ..., k.
Party i then privately sends the triple (p_{i,j}, q_{i,j}, h_{i,j}) to party j for all j ≠ i. Note that the
p_{i,j} are standard l-out-of-k Shamir secret sharings of p_i. The same holds for the q_{i,j}.
Step 3: At this point, each party i has all of (p_{j,i}, q_{j,i}, h_{j,i}) for j = 1, ..., k. Party i computes
N_i = (p_{1,i} + ... + p_{k,i}) · (q_{1,i} + ... + q_{k,i}) + (h_{1,i} + ... + h_{k,i}).
Party i broadcasts N_i to all other parties.
Step 4: At this point each of the parties has all values N_i for i = 1, ..., k. Let α(x) be the
polynomial
α(x) = (f_1(x) + ... + f_k(x)) · (g_1(x) + ... + g_k(x)) + (h_1(x) + ... + h_k(x)).
Observe that by definition of f_i, g_i, h_i we have α(0) = N and α(i) = N_i. Furthermore, α(x) is
a polynomial of degree 2l. We note that l is defined so that k ≥ 2l + 1. Consequently, since all
parties have at least 2l + 1 points on α(x) they can interpolate it and discover its coefficients.
Finally, each party evaluates α(0) and obtains N mod P. Since N < P the parties learn the
correct value of N.
From the description of the protocol it is clear that all parties learn the value N. We note that
the protocol requires that at least three parties be involved. In the case of exactly three parties,
linear polynomials are used and the protocol is 1-private.
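A compact, purely illustrative Python sketch of the four steps above follows. It simulates all k parties in one process and uses hypothetical small shares and a fixed prime P; the point is only to show the share generation, the local products N_i, and the interpolation that recovers α(0) = N.

```python
import random

P = (1 << 61) - 1                    # a prime larger than N in this toy setting
k = 5                                # number of parties, k >= 2l + 1
l = (k - 1) // 2

def rand_poly(constant, degree):
    """Random polynomial over Z_P with the given constant term."""
    return [constant] + [random.randrange(P) for _ in range(degree)]

def ev(poly, x):
    return sum(c * pow(x, j, P) for j, c in enumerate(poly)) % P

# Step 1: hypothetical secret shares and the parties' polynomials.
p_shares = [random.randrange(1 << 16) for _ in range(k)]
q_shares = [random.randrange(1 << 16) for _ in range(k)]
f = [rand_poly(p_shares[i], l) for i in range(k)]
g = [rand_poly(q_shares[i], l) for i in range(k)]
h = [rand_poly(0, 2 * l) for _ in range(k)]        # degree 2l, constant term 0

# Steps 2-3: party j receives f_i(j), g_i(j), h_i(j) and broadcasts N_j.
def N_point(j):
    return (sum(ev(f[i], j) for i in range(k)) *
            sum(ev(g[i], j) for i in range(k)) +
            sum(ev(h[i], j) for i in range(k))) % P

points = [(j, N_point(j)) for j in range(1, k + 1)]

# Step 4: Lagrange interpolation of the degree-2l polynomial alpha at x = 0.
def interpolate_at_zero(pts):
    total = 0
    for j, y in pts:
        num, den = 1, 1
        for m, _ in pts:
            if m != j:
                num = (num * (-m)) % P
                den = (den * (j - m)) % P
        total = (total + y * num * pow(den, -1, P)) % P
    return total

N = interpolate_at_zero(points)
assert N == (sum(p_shares) * sum(q_shares)) % P
```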
The following lemma shows that no coalition of ⌊(k−1)/2⌋ parties learns any further information
about the private shares. This statement holds in the information-theoretic sense; no complexity
assumptions are needed. For completeness we sketch the proof of the lemma and refer to [3] for
the complete details.
Lemma 4.1 Given N, any coalition of ⌊(k−1)/2⌋ parties can simulate the transcript of the protocol.
Consequently, the protocol is ⌊(k−1)/2⌋ private.
Proof Sketch Set 1c. By symmetry we may assume the coalition is made up of
In what follows we consistently let the index i vary in l, the index j vary
in and the index r vary in k. Then the coalition's view consists of
To simulate the coalition's view, the simulator is given as input. It rst picks random
(j). It picks p r;i ; q r;i ; h r;i as random independent elements of Z P and computes
picks a random degree 2l polynomial (x) 2 Z P [x]
satisfying It completes the simulation by setting N r = (r). These
values are a perfect simulation of the coalition's view.
The above protocol consists of one phase of the full BGW method. It differs from the BGW
protocol in that there is no need for a truncation step. Also, we combine the addition and multiplication
stages into one phase. The resulting computation is surprisingly efficient. Essentially,
there is only one multi-precision multiplication performed by each party (the one in Step 3). We
note that communication between the parties can be reduced by a factor of two using a variant of
the protocol as described in [6, Section 4].
4.2 BGW modulo non-primes
In our description of the BGW protocol all arithmetic operations were carried out modulo a prime
P. In the coming sections it will be useful to run the BGW protocol while working modulo a
non-prime M. That is, the k parties wish to compute N mod M for some integer
M, not necessarily prime. One can easily show that if M has no prime divisors smaller than k then
the protocol can be used as is. Indeed, all Lagrange coefficients used during the interpolation at
Step (4) exist. Lemma 4.1 remains correct.
Running the protocol modulo an M containing small factors requires a slight modification. Write
M = M_1 · M_2 where M_1 has no prime factors smaller than k and M_2 has only prime factors smaller
than k. As mentioned above, the protocol immediately works modulo M_1. The problem when
working modulo M_2 is that Shamir secret sharing (which is the basis of BGW) is not possible. For
instance, consider the case when M_2 = 3. It is not possible to use Shamir secret sharing in F_3
among k > 3 parties since F_3 does not contain enough points (in Shamir secret sharing each party
must be given its own unique point). A simple solution is to run the entire protocol in an algebraic
extension of F_3 that contains more than k points. The simulation argument immediately extends
to this case. In general, one can factor M_2 into its prime factors and run the protocol in a large
enough extension for each factor. Using the Chinese Remainder Theorem one can recover the value
of N mod M.
4.3 Sharing the final outcome
In some cases (as in Section 6.2) we wish to have the parties evaluate the function N = (p_1 + ... + p_k)(q_1 + ... + q_k);
however, the result should be additively shared among the parties rather than become publicly
available. That is, at the end of the computation each party should have an M_i such that
M_1 + ... + M_k ≡ N (mod P), and no information is revealed about the private shares or the final result.
The modification to BGW needed to achieve the above goal is immediate. The parties do not
perform Step 4 of the protocol and do not perform the broadcast described at the end of Step 3.
Consequently, they each end up with a point on a polynomial α(x) of degree 2l that evaluates to
N at x = 0. Using Lagrange interpolation we know that
N = α(0) = λ_1 · α(1) + ... + λ_k · α(k),
where λ_i is the appropriate Lagrange coefficient. Therefore, rather than broadcast
N_i at the end of Step 3, party i simply sets M_i = λ_i · N_i mod P. The resulting M_i's are an additive sharing
of N as required. As before, there is a simple simulation argument showing that any minority of
parties obtains no information from this protocol.
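Continuing the sketch from Section 4.1, the modification is essentially a one-line change per party: instead of broadcasting N_i, each party multiplies its point by the Lagrange coefficient for x = 0. The snippet below reuses P, points, and N from that earlier (hypothetical) sketch and checks that the resulting M_i are an additive sharing of N.

```python
# Reuses P, points = [(j, N_j), ...] and N from the sketch in Section 4.1.
def lagrange_coeff_at_zero(j, xs):
    num, den = 1, 1
    for m in xs:
        if m != j:
            num = (num * (-m)) % P
            den = (den * (j - m)) % P
    return (num * pow(den, -1, P)) % P

xs = [j for j, _ in points]
M = [(lagrange_coeff_at_zero(j, xs) * y) % P for j, y in points]  # party j keeps M_j
assert sum(M) % P == N        # additive sharing of N modulo P
```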
5 Trial division
In this section, we consider the trial division step (Step 1b in Section 2). Let q = q_1 + ... + q_k
be an integer shared among k parties. Let p be a small prime. To test if p divides q each party
picks a random r_i in Z_p. Using the BGW protocol (as described in Section 4.1) they compute
z = (q_1 + ... + q_k)(r_1 + ... + r_k) mod p.
If z is nonzero then p does not divide q. Furthermore, since r = r_1 + ... + r_k is unknown to
any minority of parties, q·r mod p provides no other information about q.
Note that in the above approach, a bad candidate q is always rejected. Unfortunately, a good
candidate might also be rejected. Indeed, even if p does not divide q it is still possible that
z = 0 (when r ≡ 0 mod p). To alleviate this problem one can repeat the above test twice for each small prime
p. Then the probability that a good candidate is rejected is at most 1 minus the product, over all
primes p below B_1, of (1 − 1/p^2).
One caveat in the above approach is that the BGW protocol as described in Section 4.1 cannot
be applied to test divisibility of q by very small p, namely p ≤ k. The reason is that for such small
p the field F_p does not contain enough points to do Shamir secret sharing among k parties. For
such small p one must run the BGW protocol in an extension field of F_p that contains at least k + 1
elements, as explained in Section 4.2.
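The following toy Python fragment illustrates the masked divisibility test itself, not its secure computation: in the real protocol the product q·r mod p is computed with the BGW method of Section 4.1 and the r_i stay private. It shows why a single round can produce a false rejection and why repeating the test helps.

```python
import random

def masked_divisibility_round(q_shares, p):
    """One round: reject (return True) iff (sum q_i)(sum r_i) = 0 mod p."""
    r_shares = [random.randrange(p) for _ in q_shares]
    z = (sum(q_shares) % p) * (sum(r_shares) % p) % p
    return z == 0

def survives_trial_division(q_shares, small_primes, rounds=2):
    for p in small_primes:
        # Rejected for this prime only if every round yields a zero product.
        if all(masked_divisibility_round(q_shares, p) for _ in range(rounds)):
            return False
    return True

q_shares = [random.randrange(1 << 32) for _ in range(3)]
print(survives_trial_division(q_shares, [3, 5, 7, 11, 13]))
```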
6 Shared generation of public/private keys
Once the parties successfully construct an RSA modulus N = pq, they may wish to
compute shares of d = e^(-1) mod φ(N) for a given encryption exponent e. We have two approaches
for computing shares of d. The first only works for small e (say e < 1000) but is very efficient,
requiring very little communication between the parties. The second works for any e and is still
efficient, but requires more communication.
Throughout the section we set φ = φ(N) = N − p − q + 1. Recall that the public modulus is
N = (p_1 + ... + p_k)(q_1 + ... + q_k). For all i, party i can locally compute φ_i, where
φ_1 = N − p_1 − q_1 + 1 and φ_i = −(p_i + q_i) for i > 1, so that φ = φ_1 + ... + φ_k.
To compute shares of d the parties must invert e modulo φ without
exposing their φ_i's.
Unfortunately, traditional inversion algorithms, e.g. extended gcd, involve computations modulo
φ. We do not know how to efficiently perform modular arithmetic when the modulus is shared
among the participants. Fortunately, there is a trick for computing e^(-1) mod φ with no modulo
reductions. When only a single user is involved the inversion is done in three steps: (1) Compute
ζ = −φ^(-1) mod e. (2) Set T = ζφ + 1. Observe that T ≡ 0 mod e. (3) Set d = T/e. Indeed
de ≡ 1 mod φ since de = ζφ + 1. There is no need for modulo reductions since e^(-1) mod φ
can be immediately deduced from φ^(-1) mod e. Both methods rely on this observation.
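The single-user version of this trick is only a few lines of Python. The sketch below, with arbitrary toy values for e and φ (any pair with gcd(e, φ) = 1 works), verifies that d = (ζφ + 1)/e is indeed an inverse of e modulo φ.

```python
def invert_e_mod_phi(e, phi):
    """Compute d with d*e = 1 (mod phi) using only arithmetic modulo e."""
    zeta = (-pow(phi, -1, e)) % e       # step (1): zeta = -phi^{-1} mod e
    T = zeta * phi + 1                  # step (2): T = 0 (mod e)
    assert T % e == 0
    return T // e                       # step (3): d = T / e

phi, e = 1018 * 1150, 17                # toy values with gcd(e, phi) = 1
d = invert_e_mod_phi(e, phi)
assert (d * e) % phi == 1
```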
6.1 Small public exponent
We begin by describing an efficient technique for generating shares of d when the public exponent e
is small. The method leaks the value of φ(N) mod e (hence it can only be applied when e is small).
On the plus side, it is k − 1 private.
Step 1: The parties jointly determine the value of l = φ(N) mod e. Since φ(N) = φ_1 + ... + φ_k it is possible
to compute l without revealing any other information about the private shares. To do so we
use a simple protocol due to Benaloh [4] which is k − 1 private.
Step 2: Let ζ = −l^(-1) mod e. Each party i locally
computes:
d_i = ⌊ζ · φ_i / e⌋.
As a result we have d_1 + ... + d_k = d − r for some integer 0 ≤ r ≤ k.
Step 3: The above sharing of d enables shared decryption [20] using the equality c^d = c^r · c^(d_1) · · · c^(d_k) mod
N. Party 1 can determine the value of r by trying all possible values of 0 ≤ r ≤ k during a
trial decryption.
The above approach leaks φ(N) mod e and r. This is a total of roughly log e bits.
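The arithmetic of Steps 1-3 can be checked with a small non-distributed Python experiment. All quantities below are hypothetical toy values and the whole computation runs in one process (in the protocol each party would see only its own φ_i), but it verifies that the locally computed d_i sum to a valid decryption exponent up to the small correction r in [0, k].

```python
p_shares = [519, 300, 200]          # p = 1019, prime, p_1 = 3 (mod 4)
q_shares = [451, 400, 300]          # q = 1151, prime, q_1 = 3 (mod 4)
k = 3
N = sum(p_shares) * sum(q_shares)
phi = N - sum(p_shares) - sum(q_shares) + 1
e = 17

# phi_i as each party can compute it locally.
phi_shares = [N - p_shares[0] - q_shares[0] + 1] + \
             [-(p_shares[i] + q_shares[i]) for i in range(1, k)]
assert sum(phi_shares) == phi

l = phi % e                          # Step 1 (computed jointly in the protocol)
zeta = (-pow(l, -1, e)) % e          # Step 2
d_shares = [(zeta * phi_i) // e for phi_i in phi_shares]   # Python // floors

d = (zeta * phi + 1) // e            # the exponent the inversion trick yields
r = d - sum(d_shares)
assert 0 <= r <= k                   # the correction recovered by trial decryption
assert (e * (sum(d_shares) + r)) % phi == 1
```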
6.2 Arbitrary public exponent
Unlike the previous technique, our second method for generating shares of d works for an arbitrary
public exponent e. It leaks less than 2 log k bits. This information cannot help an opponent since
it can be easily guessed. Rather than exposing φ mod e and then inverting it (as we did above),
we show how to invert φ mod e while it is shared among the parties. As a result, no information
about φ is revealed. The protocol is k − 1 private.
Step 1. Each party picks a random r_i in Z_e.
Step 2. Using the protocol of Section 4.2 they compute ψ = φ · (r_1 + ... + r_k) mod e. At the end
of the computation ψ is known to all parties. If ψ is not invertible modulo e the protocol is
restarted at Step 1.
Step 3. Each party locally computes ζ_i = ψ^(-1) · r_i mod e. Then ζ_1 + ... + ζ_k ≡ φ^(-1) (mod e).
Hence, the parties are able to share the inverse of φ mod e without revealing any information.
Step 4. Next, the parties agree on a prime P > 2Ne. They view the shares 0 ≤ ζ_i < e as elements
of Z_P. Using the modified BGW protocol of Section 4.3 they compute an additive sharing
T_1 + ... + T_k ≡ (ζ_1 + ... + ζ_k)(φ_1 + ... + φ_k) (mod P).
Each party has a T_i and any minority of parties learns no other information.
Step 5. From here on we regard the T i 's as integers 0 T i < P . Our objective is to ensure that
over the integers
We know that at the end of Step 4 we have
Therefore,
Given a candidate value of s 2 [0;
If the given s is correct then 0
and the above equality holds over the integers.
To determine the correct s the protocol proceeds to Step 6 with each possible value of s until
the trial decryption in Step 6 succeeds.
Step 6. Assuming equality (1) holds over the integers we know that e divides
To see this
observe that:
Therefore,
Each party i now sets d As a result we have
can determine the value of r by trying all
possible values of 0 r k during a trial decryption.
The protocol leaks the value of r and s. Hence, a total of at most 2 log k bits is exposed.
The value of r is found using only one trial decryption: a gateway first picks a random message
m and computes c = m^e mod N. It then asks the parties to decrypt c. Each party computes
c^(d_i) mod N and sends the result to the gateway. By performing at most k multiplications the
gateway finds an r in [0, k] such that c^r · c^(d_1) · · · c^(d_k) = m mod N. It then relates the value of r back to
the parties, who fix their shares of d accordingly. To determine s this procedure is repeated for each
of the k candidate values of s. Recall that k is typically small (e.g. less than 10).
Note 6.1: Observe that Step 5 of the above protocol is needed due to the fact that the BGW
method is carried out modulo P. However, it is possible to carry out the BGW protocol of Step 4
directly over the integers, avoiding Step 5 altogether. This can be done using a variant of Shamir
secret sharing over the integers (see [21, 22]). Each party picks two polynomials
as before, except that the constant terms are set to the party's shares and the
other coefficients are chosen at random from a suitably large range of integers. By
interacting with the other parties, each party i computes a point on the polynomial α(x)
over the integers. These points, multiplied by the appropriate Lagrange coefficients, become the
additive sharing over the integers. Unfortunately, in this approach, the
resulting shares d_i of the private key are of order N^2 rather than N as in the above protocol. This
results in a factor of two slowdown during threshold signature generation. On the positive side,
there is no need to leak the value of s.
The computation of φ^(-1) mod e (Steps 1-3 above) is based on a technique due to Beaver [2].
6.3 t-out-of-k sharing
The previous two subsections explain how one can obtain a k-out-of-k sharing of d. However, to
provide fault tolerance it is often desirable to have a t-out-of-k sharing, enabling any subset of t
parties to apply the private key. The simplest solution, due to T. Rabin [34], makes use of a generic
technique for converting a k-out-of-k sharing of a private RSA key into a t-out-of-k sharing scheme.
T. Rabin's approach applies immediately once our k-out-of-k sharing of d is obtained.
7 Optimizations
We describe several practical techniques to improve the performance of our distributed protocol.
These optimizations are incorporated in an implementation of our protocol [29].
Sieving In Step (1) of our protocol (Section 2) the parties repeatedly pick a random shared integer
until they find one that is not divisible by small primes. For this they engage in an interactive
trial division protocol. We briefly outline a more efficient approach. Let M be the product
of all odd primes less than some bound B_1. Suppose the k parties could generate an additive
sharing a = a_1 + ... + a_k of a random integer a in Z*_M. Party i could then set its share of p to be
p_i = a_i + m_i · M, where m_i is a random number of the appropriate length to make p_i an n-bit
integer. The resulting candidate prime p = p_1 + ... + p_k is congruent to a modulo M and is therefore relatively
prime to M. There is no need to run the trial division protocol on p with primes less than
B_1. The only question that remains is how the parties generate an additive sharing of a
random integer a in Z*_M. To do so, each party i generates a random element b_i in Z*_M. Then
a = b_1 · b_2 · · · b_k is a random element of Z*_M. The parties then use the BGW method of
Section 4.3 to convert the multiplicative sharing of a into an additive sharing. One caveat is
that for this to work the BGW method must be made to work in Z_M (which is not a field).
This is done as explained in Section 4.2.
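The congruence argument behind the sieving optimization is easy to check in a few lines of Python. The sketch below uses hypothetical parameters and skips the secure multiplicative-to-additive conversion; it only verifies that shares of the form p_i = a_i + m_i·M yield a candidate p that is congruent to a modulo M and therefore coprime to M.

```python
import math
import random

small_odd_primes = [3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47]
M = math.prod(small_odd_primes)      # product of odd primes below B_1 = 50
k, n = 3, 128

# Assume the parties already hold an additive sharing of a random unit a in Z_M
# (obtained in the protocol via the multiplicative-to-additive step).
a = random.randrange(M)
while math.gcd(a, M) != 1:
    a = random.randrange(M)
a_shares = [random.randrange(M) for _ in range(k - 1)]
a_shares.append((a - sum(a_shares)) % M)

# Each party pads its share with a random multiple of M to reach roughly n bits.
pad_bits = max(n - M.bit_length(), 1)
p_shares = [a_i + random.randrange(1 << pad_bits) * M for a_i in a_shares]
p = sum(p_shares)

assert p % M == a % M
assert math.gcd(p, M) == 1           # no odd prime below B_1 divides the candidate
```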
Load balancing In Step (3) of the primality test party 1 computes v_1 = g^((N − p_1 − q_1 + 1)/4) mod N,
while all other parties only have to compute v_i = g^((p_i + q_i)/4) mod N. Notice that the exponent of party 1 is
roughly 2n bits long while p_i + q_i is only about n bits. Consequently, party 1 works twice as hard
as the other parties. To even things out it makes sense to test k candidate N in parallel. In
each of these tests the role of party 1 is played by a different party. This way the long exponentiation
is computed by a different party for each N. This results in better load balancing, improving
overall performance by up to a factor of two.
Parallel trial division Recall that once N is computed the parties perform trial division on it
before invoking the distributed primality test (Step 2 in Section 2). The k parties can perform
this trial division in parallel: each party is in charge of verifying that N is not divisible by
some set of small primes. This can be efficiently done by hard-coding all primes
in the range [B_1, B_2] in a list. Party i is in charge of testing that N is not divisible by any
prime p_j for all j ≡ i mod k. This factor of k speedup enables us to use a larger bound B_2,
increasing the effectiveness of trial division.
Fermat filter There is no need to run the full primality test of Section 3 on every candidate
modulus N. Instead, one can do a Fermat test first and proceed to perform the full test only
if the Fermat test succeeds. We refer to the test g^(N − p − q + 1) ≡ 1 (mod N)
as a Fermat test. If equality does not hold, then N is not a product of two primes. To carry
out this test in zero-knowledge all k parties compute v_1 = g^(N − p_1 − q_1 + 1) mod N and v_i = g^(p_i + q_i) mod N for i > 1,
and verify that v_1 ≡ v_2 · · · v_k (mod N). This protocol is easily shown to be k − 1 private. It
saves the computation of the Jacobi symbol of g for most integers N.
Avoiding quadratic slowdown In Section 2 we noted that our protocols suffer from a quadratic
slowdown in comparison to single-user generation of an RSA key. The main reason is that
both primes p, q are generated at once. We briefly outline a potential solution. Consider the
case of three parties, Alice, Bob and Carol. They could generate N as a product of three primes,
where Alice knows p_a and r_a, with p_a an n-bit prime; Bob knows q_b and r_b, with q_b an n-bit prime; and Carol
knows r_c, the third prime being shared among the three of them. The number of probes until the shared factor is found to be prime is just as in single-user
generation of N. Hence, we are able to avoid the quadratic slowdown. Furthermore, no
single party knows the complete factorization of the resulting N. Unfortunately, this approach
doesn't scale well. To enable t-privacy N must be a product of t primes. Also note that
in the example above, the parties must perform a distributed primality test to verify that N
is the product of three primes. The techniques of this paper do not easily generalize to enable
such a test. We note that one can use recent results of [7] to do just that: test that a shared
modulus is the product of three primes without revealing its factorization.
8 Robustness
Throughout the paper we use a model in which parties honestly follow the protocol. This is fine
when parties are honestly trying to generate a shared RSA key. For some applications it is desirable
to make the protocol robust against active adversaries that cheat during the protocol. Since the
RSA function is verifiable (the parties can simply check that they correctly decrypt encrypted
messages), active adversaries are limited in the amount of damage they can cause. However, it
may still be possible for a party to cheat during the protocol and consequently be able to factor the
resulting N. Similarly, a party can cheat and cause a non-RSA modulus to be incorrectly accepted.
Recently, Frankel, MacKenzie and Yung [22] showed how our protocol can be made to withstand
malicious parties. The protocol enables the parties to detect and exclude the malicious party.
In practice, one could run our non-robust protocol until a modulus N is found which is believed
to be a product of two primes. Then, the robust Frankel-MacKenzie-Yung protocol can be used to
determine that no majority of parties cheated during the non-robust phase. For more results on
robust generation of shared RSA keys see [5].
We describe a simple method for making our non-robust protocol robust when the number of
participants is small (e.g. less than ten). Consider the case of four parties where at most one of
them is malicious. One can run our non-robust protocol until a candidate modulus N is found.
At this point the protocol is run four more times, once for each triplet of users. In the first run,
party 1 shares its values p_1, q_1 with the other three parties by writing p_1 = p'_2 + p'_3 + p'_4 and
q_1 = q'_2 + q'_3 + q'_4, where the p'_i and q'_i
are random integers in the range [0, N]. Party 1 then sends p'_i, q'_i
to party i for i = 2, 3, 4. Each of parties 2, 3, 4 adds these values to its own shares, and the three of them run our
non-robust protocol among themselves (ignoring party 1). If the resulting N does not match
the N computed when all four parties were involved, or if N turns out not to be an RSA modulus,
then N is rejected and the parties announce that one of them is misbehaving. This experiment is
repeated with all four triplets; each time exactly one party is excluded from the computation.
Assuming at most one party is malicious, the resulting N must be a product of two large primes.
Furthermore, the malicious party cannot know the factorization of N since at no point in the
protocol does an honest party reveal any information about its share to another single party. We
note that this approach enables the parties to detect cheating, but it does not help in detecting
who the malicious party is.
In general, when k parties are engaged in our non-robust protocol, and c of them are malicious,
the protocol can be made robust at the cost of (k choose c) extra invocations. The resulting computation remains
private. Clearly this approach can only be applied as long as both k and c are very small.
9 Summary and open problems
We presented techniques that allow three or more parties to generate an RSA modulus N = pq
such that all parties are convinced that N is indeed a product of two primes; however, none of
them can factor N. Our methods achieve ⌊(k−1)/2⌋ privacy. We also show how the parties can
generate shares of a private decryption exponent to allow threshold decryption. To test that N is
the product of two primes we presented a distributed double-primality test. We note that our test
was recently extended to a triple-primality test [7] enabling k parties to test that N is the product
of three primes, without revealing any information about the factors.
To demonstrate the effectiveness of our protocols we implemented them. Generating a 1024-bit
shared RSA key among three 300 MHz Pentium machines takes 90 seconds. See [29] for a
description of the implementation as well as detailed timing measurements.
An important open problem is the generation of shared keys of special form. For example,
a modulus which is a product of "safe primes" (i.e., where both (p−1)/2 and (q−1)/2 are prime) has
been considered for security purposes [27] as well as for technical reasons related to threshold
cryptography [16, 25]. Currently, our techniques do not enable shared generation of moduli of
special form. Progress in this direction would be very helpful.
Acknowledgments
We thank Don Beaver for helpful discussions on our results.
--R
"Universal classes of hash functions"
"On the number of uncanceled elements in the sieve of Eratosthenes"
", FOCS 1986, pp. 162-167. A. Cocks' multiplication method The BGW protocol described in Section 4.1 achieves b k 1 2 c privacy and is information theoretically secure. Cocks [10] describes a multiplication protocol that heuristically appears to provide k 1 privacy. Unfortunately, Cocks' protocol is far slower than the BGW method. Furthermore, it does not seem possible to prove its security using \natural"
and
--TR
Strong primes are easy to find
How to play ANY mental game
How to prove yourself: practical solutions to identification and signature problems
sharing homomorphisms: keeping shares of a secret secret
A practical zero-knowledge protocol fitted to security microprocessor minimizing both transmission and memory
Zero-knowledge proofs of identity
Completeness theorems for non-cryptographic fault-tolerant distributed computation
Multiparty unconditionally secure protocols
The knowledge complexity of interactive proof systems
Non-cryptographic fault-tolerant computing in constant number of rounds of interaction
A modification of the Fiat-Shamir scheme
A practical protocol for large group oriented networks
Fast signature generation with a Fiat ShamirMYAMPERSANDmdash;like scheme
How to share a function securely
Robust efficient distributed RSA-key generation
How to share a secret
Two Party RSA Key Generation
Secure Computation (Abstract)
Shared Generation of Authenticators and Signatures (Extended Abstract)
Robust and Efficient Sharing of RSA Functions
A Simplified Approach to Threshold and Proactive RSA
Generation of Shared RSA Keys by Two Parties
Efficient Dynamic-Resharing "Verifiable Secret Sharing" Against Mobile Adversary
Knowledge Generation of RSA Parameters
Generating a Product of Three Primes with an Unknown Factorization
Optimal-resilience proactive public-key cryptosystems
Security, fault tolerance, and communication complexity in distributed systems
--CTR
Jaimee Brown , Juan M. Gonzalez Nieto , Colin Boyd, Efficient and secure self-escrowed public-key infrastructures, Proceedings of the 2nd ACM symposium on Information, computer and communications security, March 20-22, 2007, Singapore
Dan Boneh , Xuhua Ding , Gene Tsudik, Fine-grained control of security capabilities, ACM Transactions on Internet Technology (TOIT), v.4 n.1, p.60-82, February 2004 | primality testing;threshold cryptography;RSA;multiparty computation |
502097 | Quantum lower bounds by polynomials. | We examine the number of queries to input variables that a quantum algorithm requires to compute Boolean functions on {0,1}^N in the black-box model. We show that the exponential quantum speed-up obtained for partial functions (i.e., problems involving a promise on the input) by Deutsch and Jozsa, Simon, and Shor cannot be obtained for any total function: if a quantum algorithm computes some total Boolean function f with small error probability using T black-box queries, then there is a classical deterministic algorithm that computes f exactly with O(T^6) queries. We also give asymptotically tight characterizations of T for all symmetric f in the exact, zero-error, and bounded-error settings. Finally, we give new precise bounds for AND, OR, and PARITY. Our results are a quantum extension of the so-called polynomial method, which has been successfully applied in classical complexity theory, and also a quantum extension of results by Nisan about a polynomial relationship between randomized and deterministic decision tree complexity. | Introduction
The black-box model of computation arises when one is given a black-box containing
an N-tuple of Boolean variables X = (x_0, x_1, . . . , x_{N-1}). The box is equipped
to output the bit x i on input i . We wish to determine some property of X , accessing
the x i only through the black-box. Such a black-box access is called a query.
A property of X is any Boolean function that depends on X , that is, a property
is a function f : {0, 1}^N → {0, 1}. We want to compute such properties using as
few queries as possible. For classical algorithms, this optimal number of queries is
known as the decision tree complexity of f .
Consider, for example, the case where the goal is to determine whether or not
X contains at least one 1, so we want to compute the property
OR_N(X) = x_0 ∨ x_1 ∨ · · · ∨ x_{N-1}. It is well known that the number of queries required to compute OR_N
by any classical (deterministic or probabilistic) algorithm is Θ(N). Grover [1996]
discovered a remarkable quantum algorithm that can be used to compute OR N with
small error probability using only O(√N) queries. His algorithm makes essential
use of the fact that a quantum algorithm can apply a query to a superposition of
different i , thereby accessing different input bits x i at the same time, each with some
amplitude. This upper bound of O(√N) queries was shown to be asymptotically
optimal [Bennett et al. 1997; Boyer et al. 1998; Zalka 1999] (the first version of
Bennett et al. [1997] in fact appeared before Grover's algorithm).
Most other existing quantum algorithms can be naturally expressed in the black-box
model. For example, in the case of Simon's problem [Simon 1997], one is given
a function -
X: {0, 1} n
satisfying the promise that there is an s # {0, 1} n
such that -
(addition
mod 2). The goal is to determine whether not. Simon's quantum algorithm
yields an exponential speed-up over classical algorithms: it requires an expected
number of O(n) applications of -
X , whereas every classical randomized algorithm
for the same problem must make #(
queries. Note that the function -
can
be viewed as a black-box bits, and that an -
application can be simulated by n queries to X . Thus, we see that Simon's problem
fits squarely in the black-box setting, and exhibits an exponential quantum-classical
separation for this promise-problem. The promise means that Simon's problem
# {0, 1} is partial; it is not defined on all X # {0, 1} N but only on X
that correspond to an -
X satisfying the promise. (In the previous example of OR N ,
the function is total; however, the quantum speed-up is only quadratic instead
of exponential.) Something similar holds for the order-finding problem, which is
the core of Shor's [1997] efficient quantum factoring algorithm. In this case, the
promise is the periodicity of a certain function derived from the number that we
want to factor (see Cleve [2000] for the exponential classical lower bound for order-
finding). Most other quantum algorithms are naturally expressed in the black-box
model as well. 1
Of course, upper bounds in the black-box model immediately yield upper bounds
for the circuit description model in which the function X is succinctly described as
a (log N ) O(1) -sized circuit computing x i from i . On the other hand, lower bounds
in the black-box model do not imply lower bounds in the circuit model, though
they can provide useful guidance, indicating what certain algorithmic approaches
are capable of accomplishing. It is noteworthy that, at present, there is no known
algorithm for computing OR N (i.e., satisfiability of a log N -variable propositional
formula) in the circuit model that is significantly more efficient than using the circuit
solely to make queries. Some better algorithms are known for k-SAT [Schöning
1999] but not for satisfiability in general (though proving that no better algorithm
exists is likely to be difficult, as it would imply P ≠ NP).
It should also be noted that the black-box complexity of a function only considers
the number of queries; it does not capture the complexity of the auxiliary
computational steps that have to be performed in addition to the queries. In cases
such as the computation of OR, PARITY, MAJORITY, this auxiliary work is not
significantly larger than the number of queries; however, in some cases it may be
much larger. For example, consider the case of factoring N -bit integers. The best
known algorithms for this involve #(N ) queries to determine the integer, followed
by 2 N #(1)
operations in the classical case but only N 2 (log N ) O(1) operations in the
quantum case [Shor 1997]. Thus, the number of queries seems not to be of primary
importance in the case of factoring. However, the problem that Shor's quantum
algorithm actually solves is the order-finding problem, which can be expressed in
the black-box model as mentioned above.
In this paper, we analyze the black-box complexity of several functions and
classes of functions in the quantum computation setting, establishing strong lower
bounds. In particular, we show that the kind of exponential quantum speed-up that
algorithms like Simon's achieve for partial functions cannot be obtained by any
quantum algorithm for any total function: at most a polynomial speed-up is possible.
We also tightly characterize the quantum black-box complexity of all symmetric
functions, and obtain exact bounds for functions such as AND, OR, PARITY, and
MAJORITY for various error models: exact, zero-error, bounded-error.
An important ingredient of our approach is a reduction that translates quantum
algorithms that make T queries into multilinear polynomials of degree at most
2T over the N variables. This is a quantum extension of the so-called polynomial
method, which has been successfully applied in classical complexity theory
(see e.g., Nisan and Szegedy [1994] and Beigel [1993]). Also, our polynomial
relationship between the quantum and the classical complexity is analogous to
earlier results by Nisan [1991], who proved a polynomial relationship between
randomized and deterministic decision tree complexity.
1 See, for example, Deutsch and Jozsa [1992], Boneh and Lipton [1995], Kitaev [1995], Boyer et al.
[1998], Brassard and Høyer [1997], Brassard et al. [1997], Høyer [1997], Mosca and Ekert [1998],
Cleve et al. [1998], Brassard et al. [2000], Grover [1998], Buhrman et al. [1998], van Dam [1998],
Farhi et al. [1999b], Høyer et al. [2001], Buhrman et al. [2001], and van Dam and Hallgren [2000].
The only quantum black-box lower bounds known prior to this work were Jozsa's
[1991] limitations on the power of 1-query algorithms, the search-type bounds of
Bennett et al. [1997], Boyer et al. [1998], and Zalka [1999], and some bounds
derived from communication complexity [Buhrman et al. 1998]. The tight lower
bound for PARITY of Farhi et al. [1998] appeared independently and around the
same time as a first version of this work [Beals et al. 1998], but their proof technique
does not seem to generalize easily beyond PARITY. After the first appearance of
this work, our polynomial approach has been used to derive many other quantum
lower bounds. 2 Recently, an alternative quantum lower-bound method appeared
[Ambainis 2000] that yields good bounds in cases where polynomial degrees are
hard to determine (for instance, for AND-OR trees), but it seems, on the other
hand, that some bounds obtainable using the polynomial method cannot easily be
obtained using this new method (see, e.g., Buhrman et al. [1999]).
2. Summary of Results
We consider three different settings for computing f on {0, 1} N in the black-box
model. In the exact setting, an algorithm is required to return f (X ) with certainty
for every X . In the zero-error setting, for every X , an algorithm may return "incon-
clusive" with probability at most 1/2, but if it returns an answer, this must be the
correct value of f (X ) (algorithms in this setting are sometimes called Las Vegas
algorithms). Finally, in the two-sided bounded-error setting, for every X , an algorithm
must correctly return the answer with probability at least 2/3 (algorithms in
this setting are sometimes called Monte Carlo algorithms; the 2/3 is arbitrary and
may be replaced by any 1/2 + # for fixed constant 0 < # < 1/2).
Our main results are: 3
(1) In the black-box model, the quantum speed-up for any total function cannot be
more than by a sixth-root. More specifically, if a quantum algorithm computes
f with bounded-error probability by making T queries, then there is a classical
deterministic algorithm that computes f exactly making at most O(T 6 ) queries.
If f is monotone, then the classical algorithm needs at most O(T 4 ) queries,
and if f is symmetric, then it needs at most O(T 2 ) queries. If the quantum
algorithm is exact, then the classical algorithm needs O(T 4 ) queries.
As a by-product, we also improve the polynomial relation between the decision
tree complexity D( f ) and the approximate degree
of Nisan and
Szegedy [1994] from D( f ) # O(
(2) We tightly characterize the black-box complexity of all nonconstant symmetric
functions as follows: In the exact or zero-error settings #(N ) queries are necessary
and sufficient, and in the bounded-error setting # N (N - #( f ))) queries
2 See, for example Nayak and Wu [1999], Buhrman et al. [1999], Farhi et al. [1999a], Ambainis
[1999], de Wolf [2000], and Servedio and Gortler [2000].
3 All our results remain valid if we consider a controlled black-box, where the first bit of the state
indicates whether the black-box is to be applied or not. (Thus, such a black-box would map |0, i, b, z#
to |0, i, b, z# and |1, i, b, z# to |1, i, b # x i , z#.) Also, our results remain valid if we consider mixed
rather than only pure states. In particular, allowing intermediate measurements in a quantum query
algorithm does not give more power since all measurements can be delayed until the end of the
computation at the cost of some additional memory.
I. SOME QUANTUM COMPLEXITIES
Exact Zero-error Bounded-error
are necessary and sufficient, where #( f
if the Hamming weight of the input changes from k to k + 1} (this #( f ) is a
number that is low if f flips for inputs with Hamming weight close to N/2
[Paturi 1992]). This should be compared with the classical bounded-error query
complexity of such functions, which is #(N ). Thus, #( f ) characterizes the
speed-up that quantum algorithms give for all total functions.
An interesting example is the THRESHOLDM function, which is 1 iff its input
X contains at least M 1s. This has query complexity # M(N
(3) For OR, AND, PARITY, MAJORITY, we obtain the bounds shown in Table I.
(all given numbers are both necessary and sufficient). These results are all new,
with the exception of the # N )-bounds for OR and AND in the bounded-error
setting, which appear in Bennett et al. [1997], Boyer et al. [1998] and
Zalka [1999]. The new bounds improve by polylog(N ) factors previous lower-bound
results from Buhrman et al. [1998], which were obtained through a
reduction from communication complexity. The new bounds for PARITY were
independently obtained by Farhi et al. [1998].
Note that lower bounds for OR imply lower bounds for the search problem,
where we want to find an i such that x if such an i exists. Thus, exact or
zero-error quantum search requires N queries, in contrast to # N ) queries
for the bounded-error case. (On the other hand, if we are promised in advance
that the number of solutions is t , then a solution can be found with probability
using O( # N/t) queries [Brassard et al. 2000].)
3. Some Definitions
Our main goal in this paper is to find the number of queries a quantum algorithm
needs to compute some Boolean function by relating such algorithms to polyno-
mials. In this section, we give some basic definitions and properties of multilinear
polynomials and Boolean functions, and describe our quantum setting.
3.1. Boolean Functions and Polynomials. We assume the following setting,
mainly adapted from Nisan and Szegedy [1994]. We have a vector of N Boolean
variables we want to compute a Boolean function f :
{0, 1} N
# {0, 1} of X . Unless explicitly stated otherwise, f will always be to-
tal. The Hamming weight (number of 1s) of X is denoted by |X |. For example,
odd, and MAJ N (X
We can represent Boolean functions using N -variate polynomials p: R N
# R.
1}, we can restrict attention to multilinear p. If
then we say that represents f . It is easy to see
that every f is represented by a unique multilinear polynomial p of degree # N . We
use deg(f) to denote the degree of this p. If | p(X )- f (X )| # 1/3 for all X # {0, 1} N ,
Quantum Lower Bounds by Polynomials 783
we say p approximates f , and
denotes the degree of a minimum-degree
p that approximates f . For example, x 0 x 1 - x N-1 is a multilinear polynomial of
degree N that represents AND N . Similarly, 1-(1-x 0
. The polynomial 1
but does not represent
Nisan and Szegedy [1994, Theorem 2.1] proved a general lower bound on the
degree of any Boolean function that depends on N variables:
THEOREM 3.1 [NISAN AND SZEGEDY 1994]. If f is a Boolean function that
depends on N variables, then deg( f ) # logN - O(log log N ).
Let p: R N
# R be a polynomial. If # is some permutation on {0, . , N - 1},
be the set of all
permutations. The symmetrization p sym of p averages over all permutations of
the input, and is defined as:
Note that p sym is a polynomial of degree at most the degree of p. Symmetrizing
may actually lower the degree: if
0. The following lemma,
originally due to Minsky and Papert [1968], allows us to reduce an N -variate
polynomial to a single-variate one.
LEMMA 3.2 [MINSKY AND PAPERT 1968]. If p: R n
# R is a multilinear poly-
nomial, then there exists a polynomial q: R # R, of degree at most the degree of
p, such that p sym (X
PROOF. Let d be the degree of p sym , which is at most the degree of p. Let V j
denote the sum of all ( N
products of j different variables, so
can be written
as
for some a i # R. Note that V j assumes value ( |X |
which is a polynomial of degree j of |X |. Therefore, the single-
variate polynomial q defined by
d
satisfies the lemma.
A Boolean function f is symmetric if permuting the input does not change the
function value (i.e., f (X ) only depends on |X |). Paturi has proved a powerful
theorem that characterizes
. For such f , let f
low if f k "jumps" near the middle (i.e., for some k # N/2). Now Paturi
[1992, Theorem 1] gives:
784 R. BEALS ET AL.
THEOREM 3.3 [PATURI 1992]. If f is a nonconstant symmetric Boolean function
on {0, 1} N , then
For functions like OR N and AND N , we have #( f
#(
# N ). For PARITY N (which is 1 iff |X | is odd) and MAJ N (which is 1 iff
|X | > N/2), we have #( f is even and #( f
3.2. THE FRAMEWORK OF QUANTUMNETWORKS. Our goal is to compute some
Boolean function f of given as a black-box: calling
the black-box on i returns the value of x i . Wewant to use as few queries as possible.
A classical algorithm that computes f by using (adaptive) black-box queries to
X is called a decision tree, since it can be pictured as a binary tree where each
node is a query, each node has the two outcomes of the query as children, and the
leaves give answer f (X 1. The cost of such an algorithm is the
number of queries made on the worst-case input X , that is, the depth of the tree.
The decision tree complexity D( f ) of f is the cost of the best decision tree that
computes f . Similarly, we can define R( f ) as the worst-case number of queries for
randomized algorithms that compute f (X ) with error probability # 1/3 for all X .
By a well-known result of Nisan [1991, Theorem 4], the best randomized algorithm
can be at most polynomially more efficient than the best deterministic algorithm:
For a general introduction to quantum computing, we refer to Nielsen and Chuang
[2000]. A quantum network (also called quantum algorithm) with T queries is the
quantum analogue to a classical decision tree with T queries, where queries and
other operations can now be made in quantum superposition. Such a network can
be represented as a sequence of unitary transformations:
where the U i are arbitrary unitary transformations, and the O j are unitary transformations
that correspond to queries to X . The computation ends with some measurement
or observation of the final state. We assume each transformation acts
on m qubits and each qubit has basis states |0# and |1#, so there are 2 m basis
states for each stage of the computation. It will be convenient to represent each
basis state as a binary string of length m or as the corresponding natural num-
ber, so we have basis states |0#, |1#, |2#, . , |2 m
- 1#. Let K be the index set
- 1}. With some abuse of notation, we sometimes identify a set
of numbers with the corresponding set of basis states. Every state |# of the net-work
can be uniquely written as |# k#K # k |k#, where the # k are complex
numbers such that # k#K |# k | 2
1. When |# is measured in the above basis,
the probability of observing |k# is |# k | 2 . Since we want to compute a function
of X , which is given as a black-box, the initial state of the network is not very
important and we disregard it hereafter; we may assume the initial state to be
|0# always.
The queries are implemented using the unitary transformations O j in the following
standard way. The transformation O j only affects the leftmost part of a basis
state: it maps basis state |i, b, z# to |i, b # x i , z# denotes XOR). Here, i has
length #log N# bits, b is one bit, and z is an arbitrary string of m - #log N# - 1
bits. Note that the O j are all equal.
Quantum Lower Bounds by Polynomials 785
How does a quantum network compute a Boolean function f of X? Let us
designate the rightmost qubit of the final state of the network as the output bit.
More precisely, the output of the computation is defined to be the value we observe
if we measure the rightmost qubit of the final state. If this output equals f (X )
with certainty, for every X , then the network computes f exactly. If the output
with probability at least 2/3, for every X , then the network computes
f with bounded error probability at most 1/3. To define the zero-error setting, the
output is obtained by observing the two rightmost qubits of the final state. If the
first of these qubits is 0, the network claims ignorance ("inconclusive"); otherwise,
the second qubit should contain f (X ) with certainty. For every X , the probability
of getting "inconclusive" should be less than 1/2. We use Q
to denote the minimum number of queries required by a quantum network
to compute f in the exact, zero-error and bounded-error settings, respectively.
It can be shown that the quantum setting generalizes the classical setting, hence
4. General Lower Bounds on the Number of Queries
In this section, wewill provide some general lower bounds on the number of queries
required to compute a Boolean function f on a quantum network, either exactly or
with zero- or bounded-error probability.
4.1. THE ACCEPTANCE PROBABILITY IS A POLYNOMIAL. Here, we prove that
the acceptance probability of a T -query quantum network can be written as a
multilinear N -variate polynomial P(X ) of degree at most 2T . The next lemmas
relate quantum networks to polynomials; they are the key to most of our
results.
LEMMA 4.1 Let N be a quantum network that makes T queries to a black-box
. Then there exist complex-valued N -variate multilinear polynomials
, each of degree at most T , such that the final state of the network is
the superposition
for any black-box X .
PROOF. Let |# i # be the state of the network (using some black-box X ) just
before the i th query. Note that |# i +1 # =U i O i |# i #. The amplitudes in |# 0 # depend
on the initial state and on U 0 but not on X , so they are polynomials of X of degree
query maps basis state |i, b, z# to |i, b # x i , z#. Hence, if the amplitude of
|i, 0, z# in |# 0 # is # and the amplitude of |i, 1, z# is #, then the amplitude of |i, 0, z#
after the query becomes (1 - x and the amplitude of |i, 1, z# becomes
are polynomials of degree 1. (In general, if the amplitudes
before a query are polynomials of degree # j , then the amplitudes after the query
will be polynomials of degree Between the first and the second query lies
the unitary transformation U 1 . However, the amplitudes after applying U 1 are just
linear combinations of the amplitudes before applying U 1 , so the amplitudes in |# 1 #
are polynomials of degree at most 1. Continuing in this manner, the amplitudes of
the final states are found to be polynomials of degree at most T . We can make these
786 R. BEALS ET AL.
polynomials multilinear without affecting their values on X # {0, 1} N , by replacing
i by x i .
Note that we have not used the assumption that the U j are unitary, but only their
linearity. The next lemma is also implicit in the combination of some proofs in
Fenner et al. [1993] and Fortnow and Rogers [1999].
LEMMA 4.2 Let Nbe a quantum network that makes T queries to a black-box
X, and B be a set of basis states. Then there exists a real-valued multilinear
polynomial P(X ) of degree at most 2T, which equals the probability that observing
the final state of the network with black-box X yields a state from B.
PROOF. By the previous lemma, we can write the final state of the network as
for any X , where the p k are complex-valued polynomials of degree # T . The probability
of observing a state in B is
If we split p k into its real and imaginary parts as
where pr k and pi k are real-valued polynomials of degree # T , then |
which is a real-valued polynomial of degree at most 2T .
Hence, P is also a real-valued polynomial of degree at most 2T , which we can
make multilinear without affecting its values on X # {0, 1} N .
Letting B be the set of states that have 1 as rightmost bit, it follows that we
can write the acceptance probability of a T -query network (i.e., the probability of
getting output 1) as a polynomial P(X ) of degree # 2T .
4.2. LOWER BOUNDS FOR EXACT AND ZERO-ERROR QUANTUM COMPUTATION.
Consider a quantum network that computes f exactly using queries.
Its acceptance probability P(X ) is a polynomial of degree # 2T that equals f (X )
for all X . But then P(X ) must have degree deg( f ), which implies the following
lower bound result for Q
THEOREM 4.3 If f is a Boolean function, then Q
Combining this with Theorem 3.1, we obtain a weak but general lower bound:
COROLLARY 4.4 If f depends on N variables, then Q
O(loglogN ).
For symmetric f , we can prove a much stronger bound. First, for the zero-error
setting:
THEOREM 4.5 If f is nonconstant and symmetric, then Q
PROOF. We assume f (X for at least (N different Hammingweights
of X ; the proof is similar if f (X for at least (N different Hamming
weights. Consider a network that uses queries to compute f with zero-
error. Let B be the set of basis states that have 11 as rightmost bits. These are
Quantum Lower Bounds by Polynomials 787
the basis states corresponding to output 1. By Lemma 4.2, there is a real-valued
multilinear polynomial P of degree # 2T , such that for all X , P(X ) equals the
probability that the output of the network is 11 (i.e., that the network answers 1).
Since the network computes f with zero-error and f is nonconstant, P(X ) is non-constant
and equals 0 on at least (N different Hamming weights (namely,
the Hamming weights for which f (X q be the single-variate polynomial
of degree #2T obtained from symmetrizing P (Lemma 3.2). This q is nonconstant
and has at least (N hence degree at least (N + 1)/2, and the
result follows.
Thus, functions like threshold functions, etc. all require
at least (N queries to be computed exactly or with zero-error on a quantum
network. Since N queries always suffice, even classically, we have
and
Secondly, for the exact setting we can prove slightly stronger lower bounds using
results by Von zur Gathen and Roche [1997, Theorems 2.6 and 2.8]:
THEOREM 4.6 [VON ZUR GATHEN AND ROCHE 1997]. If f is nonconstant and
symmetric, then deg( f
COROLLARY 4.7 If f is nonconstant and symmetric, then Q
O(N 0.548 ). If, in addition, N
In Section 6, we give more precise bounds for some particular functions. In
particular, this will show that the N/2 lower bound is tight, as it can be met for
PARITY N .
4.3. LOWER BOUNDS FOR BOUNDED-ERROR QUANTUM COMPUTATION. Here,
we use similar techniques to get bounds on the number of queries required for
bounded-error computation of some function. Consider the acceptance probability
of a T-query network that computes f with bounded-error, written as a polynomial
P(X ) of degree #2T . If f (X
then 2/3 # P(X ) # 1. Hence, P approximates f , and we obtain:
THEOREM 4.8 If f is a Boolean function, then Q
This result implies that a quantum algorithm that computes f with bounded
error probability can be at most polynomially more efficient (in terms of number
of queries) than a classical deterministic algorithm: Nisan and Szegedy [1994,
Theorem 3.9] proved that D( f ) # O(
together with the previous
theorem implies The fact that there is a polynomial relation
between the classical and the quantum complexity is also implicit in the generic
oracle-constructions of Fortnow and Rogers [1999]. In Section 5, we prove the
stronger result D( f
Combining Theorem 4.8 with Paturi's Theorem 3.3 gives a lower bound for
symmetric functions in the bounded-error setting: if f is nonconstant and sym-
metric, then Q ))). We can in fact prove a matching upper
bound, using the following result about quantum counting [Brassard et al. 2000,
Theorem 13]:
THEOREM 4.9 [BRASSARD ET AL. 2000]. There exists a quantum algorithm
with the following property. For every N -bit input X (with number T,
788 R. BEALS ET AL.
the algorithm uses T queries and outputs a number -
t such that
t | # 2#
with probability at least 8/# 2 .
THEOREM 4.10 If f is nonconstant and symmetric, then we have that
PROOF. We describe a strategy that computes f with small error probability.
First, note that since #( f
must be identically 0 or 1 for k #
In
order to be able to compute f (X ), it is sufficient to know t exactly if t < #(N -
or to know that #(N -#( f ))/2# t #
Run the quantum counting algorithm for # (N - #( f ))N ) steps to count
the number of 1s in X . If t is in one of the two tails (t < #(N -#( f ))/2# or
high probability, the algorithm gives us an
exact count of t . If #(N - #( f ))/2# t #(N +#( f ) - 2)/2#, then, with high
probability, the counting algorithm returns some - t that is in this interval as well.
Thus, with high probability, f - t equals f This shows that we can compute
f using only O( # N (N - #( f ))) queries.
Theorem 4.10 implies that the above-stated result about quantum counting
(Theorem 4.9) is optimal, since a better upper bound for counting would give a
better upper bound on Q know that
Theorem 4.10 is tight. In contrast to Theorem 4.10, it can be shown that a randomized
classical strategy needs #(N ) queries to compute any nonconstant symmetric
f with bounded-error.
Moreover, it can be shown that almost all functions f satisfy deg( f
Buhrman and de Wolf [2001], hence almost all f have reading
the preliminary version of this paper [Beals et al. 1998], Ambainis [1999] proved
a similar result for the approximate case: almost all f satisfy
O( # N log N ) and hence have On the other hand,
van Dam [1998] proved that, with good probability, we can learn all N variables in
the black-box using only N/2 queries. This implies the general upper bound
. This bound is almost tight, as we will show later
on that Q
4.4. LOWER BOUNDS IN TERMS OF BLOCK SENSITIVITY. Above, we gave lower
bounds on the number of queries used, in terms of degrees of polynomials that
represent or approximate the function f that is to be computed. Here we give
lower bounds in terms of the block sensitivity of f , a measure introduced in
Nisan [1991].
Definition 4.11. Let f : {0, 1} N
# {0, 1} be a function, X # {0, 1} N , and B #
{0, . , N - 1} a set of indices. Let X B denote the string obtained from X by
flipping the variables in B. We say that f is sensitive to B on X if f (X
The block sensitivity bs X ( f ) of f on X is the maximum number t for which there
Quantum Lower Bounds by Polynomials 789
exist t disjoint sets of indices B 1 , . , B t such that f is sensitive to each B i on X .
The block sensitivity bs( f ) of f is the maximum of bs X
For example, bs(OR N
flipping B i in X flips the value of OR N from 0 to 1.
We can adapt the proof of Nisan and Szegedy [1994, Lemma 3.8] on lower
bounds of polynomials to get lower bounds on the number of queries in a quantum
network in terms of block sensitivity. 4 The proof uses a theorem from Ehlich and
Zeller [1964] and Rivlin and Cheney [1966]:
THEOREM 4.12 [EHLICH AND ZELLER 1964; RIVLIN AND CHENEY 1966].
Let p: R#R be a polynomial such that b 1 # p(i) # b 2 for every integer
and the derivative p # satisfies | p # (x)| # c for some real 0 # x # N . Then
THEOREM 4.13. If f is a Boolean function, then
.
PROOF. We prove the lower bound on Q here, the bound on Q completely
analogous. Consider a network using queries that computes f
with error probability #1/3. Let p be the polynomial of degree #2T that approximates
f , obtained as for Theorem 4.8. Note that p(X ) # [0, 1] for all X # {0, 1} N ,
because p represents a probability.
be the input and sets that achieve the block
sensitivity. We assume without loss of generality that f (Z
replacing every x j in p as
follows:
occurs in none of the B i
Now it is easy to see that q has the following properties:
(1) q is a multilinear polynomial of degree # d # 2T
Let r be the single-variate polynomial of degree #d obtained from symmetrizing
q over {0, 1} b (Lemma 3.2). Note that 0 # r (i
and for some x # [0, 1] we have r # (x) # 1/3 (because r (0) # 1/3 and r (1) # 2/3).
Applying Theorem 4.12, we obtain d # (1/3)b/(1/3
We can generalize this result to the computation of partial Boolean functions,
which are only defined on a domain D# {0, 1} N of inputs that satisfy some
promise, by generalizing the definition of block sensitivity to partial functions in the
obvious way.
4 This theorem can also be proved by an argument similar to the lower-bound proof for quantum
searching in Bennett et al. [1997] (see, e.g., Vazirani [1998]).
790 R. BEALS ET AL.
5. Polynomial Relation for Classical and Quantum Complexity
Here we will compare the classical complexities D( f ) and R( f ) with the
quantum complexities. First some separations: in the next section, we show
. In the bounded-error setting
so we have a quadratic gap between Q on the one hand and
on the other. 5
Nisan proved that the randomized complexity is at most polynomially better than
the deterministic complexity: D( f As mentioned in
Section 4, we can prove that also the quantum complexity can be at most polynomially
better than the best deterministic algorithm: D( f Here we give
the stronger result that D( f In other words, if we can compute some
function quantumly with bounded-error using T queries, we can compute it classically
error-free using O(T 6 ) queries. We need the notion of certificate complexity:
Definition 5.1. Let C be an assignment C of values to some subset
S of the N variables. We say that C is consistent with X # {0, 1} N if x i =C(i) for
For b # {0, 1}, a b-certificate for f is an assignment C such that f (X
X is consistent with C . The size of C is |S|.
The certificate complexity C X ( f ) of f on X is the size of a smallest f (X )-
certificate that is consistent with X . The certificate complexity of f is C( f
The 1-certificate complexity of f is C (1) ( f
and similarly we define C
For example, if f is the OR-function, then the certificate complexity on the input
(1, 0, 0, . , 0) is 1, because the assignment x forces the OR to 1. The
same holds for the other X for which f (X On the other hand,
the certificate complexity on (0, 0, . , 0) is N , so C( f
The first inequality in the next lemma is obvious from the definitions, the second
inequality is Nisan [1991, Lemma 2.4]. We include the proof for completeness.
LEMMA 5.2 [NISAN 1991]. C (1)
PROOF. Consider an input X # {0, 1} N and let B 1 , . , B b be disjoint minimal
sets of variables that achieve the block sensitivity
show that that sets variables according to X , is a certificate for
X of size # bs( f ) 2 .
First, if C were not an f (X )-certificate, then let X # be an input that agrees
with C , such that f (X Now f is sensitive to B b+1 on X
and B b+1 is disjoint from B 1 , . , B b , which contradicts Hence, C is
an f (X )-certificate.
Second, note that, for must have
of the B i -variables in X B , then the function value must flip from f (X B ) to
5 In the case of randomized decision trees, no function is known for which there is a quadratic gap
between D( f ) and R( f ), the best-known separation is for complete binary AND/OR-trees, where
and it has been conjectured that this is the largest gap possible.
This applies to zero-error randomized trees [Saks and Wigderson 1986] as well as bounded-error trees
[Santha 1991].
Quantum Lower Bounds by Polynomials 791
would not be minimal), so every B i -variable forms a sensitive
set for f on input X B i . Hence, the size of C is |#
The crucial lemma is the following, which we prove along the lines of Nisan
PROOF. The following describes an algorithm to compute f (X ), querying at
most C (1) ( f )bs( f ) variables of X (in the algorithm, by a "consistent" certificate
C or input Y at some point we mean a C or Y that agrees with the values of all
variables queried up to that point).
(1) Repeat the following at most bs( f ) times:
Pick a consistent 1-certificate C and query those of its variables whose X -values
are still unknown (if there is no such C , then return 0 and stop); if the queried
values agree with C , then return 1 and stop.
(2) Pick a consistent Y # {0, 1} N and return f (Y ).
The nondeterministic "pick a C " and "pick a Y " can easily be made deterministic
by choosing the first C (respectively, Y ) in some fixed order. Call this algorithm
A. Since A runs for at most bs( f ) stages and each stage queries at most C (1)
variables, A queries at most C (1) ( f )bs( f ) variables.
It remains to show that A always returns the right answer. If it returns an answer
in step (1), this is either because there are no consistent 1-certificates left (and hence
must be 0) or because X is found to agree with a particular 1-certificate C ;
in both cases A gives the right answer.
Now consider the case where A returns an answer in step (2). We will show
that all consistent Y must have the same f -value. Suppose not. Then there are
consistent Y, Y # with f (Y
contains a consistent 1-certificate C b+1 . We will
derive from these C i disjoint sets B i such that f is sensitive to each B i on Y . For
every as the set of variables on which Y and C i disagree.
Clearly, each B i is non-empty. Note that Y B agrees with C i , so f (Y B
shows that f is sensitive to each B i on Y . Let v be a variable in some
then (v). Now for j > i , C j has been chosen consistent with all
variables queried up to that point (including v), so we cannot have X
. This shows that all B i and B j are disjoint. But then f is
sensitive to bs( f disjoint sets on Y , which is a contradiction. Accordingly, all
consistent Y in step (2) must have the same f -value, and A returns the right value
is one of those consistent Y .
The inequality of the previous lemma is tight, because if f =OR, then D( f
The previous two lemmas imply D( f . Combining this with
Theorem 4.13 (bs( f the main result:
THEOREM 5.4. If f is a Boolean function, then D( f
We do not know if the D( f )-relation is tight, and suspect that
it is not. The best separation we know is for OR and similar functions, where
792 R. BEALS ET AL.
However, for such symmetric Boolean function,
we can do no better than a quadratic separation: D( f ) # N always holds, and we
have Theorem 4.10, hence D( f
f . For monotone Boolean functions, where the function value either increases or
decreases monotonically if we set more input bits to 1, we can use [Nisan 1991,
Proposition 2.2] (bs( f . For the case of
exact computation, we can also give a better result: Nisan and Smolensky proved
f (they never published this, but allowed their proof
to be included in Buhrman and de Wolf [2001]). Together with our Theorem 4.3,
this yields
THEOREM 5.5. If f is a Boolean function, then D( f
As a by-product, we improve the polynomial relation between D( f ) and
Nisan and Szegedy [1994, Theorem 3.9] proved
Using our result D( f Nisan and Szegedy's bs( f
and Szegedy 1994, Lemma 3.8], we obtain
COROLLARY 5.6.
6. Some Particular Functions
In this section, we consider the precise complexity of various specific functions.
First, we consider the OR-function, which is related to search. By Grover's
well-known search algorithm [Grover 1996; Boyer et al. 1998], if at least one x i
equals 1, we can find an index i such that x high probability of success
in O( # N ) queries. This implies that we can also compute the OR-function with
high success probability in O( # N generate an index i ,
and return x i . Since bs(OR N Theorem 4.13 gives us a lower bound of # N/4
on computing OR N with bounded error probability (this #(
first
shown in Bennett et al. [1997] and is given in a tighter form in Boyer et al. [1998]
and Zalka [1999], but the way we obtained it here is rather different from those
proofs). Thus classically we require #(N ) queries.
Now suppose we want to get rid of the probability of error: can we compute OR N
exactly or with zero-error using O( # N ) queries? If not, can quantum computation
give us at least some advantage over the classical deterministic case? Both questions
have a negative answer:
PROPOSITION 6.1. Q 0 (OR N
PROOF. Consider a zero-error network for OR N that uses
queries. By Lemma 4.1, there are complex-valued polynomials p k of degree at
most T , such that the final state of the network on black-box X is
Let B be the set of all basis states having 10 as rightmost bits (i.e., where the output is
the answer 0). Then, for every k # B, wemust have
otherwise, the probability of getting the incorrect answer 0 on |# X
# would be
Quantum Lower Bounds by Polynomials 793
nonzero. On the other hand, there must be at least one
since the probability of getting the correct answer 0 on |# 0
# must be nonzero.
Let p(X ) be the real part of 1 - p k # (X )/p k ( # 0). This polynomial p has degree
at most T and represents OR N . But then p must have degree deg(OR N
so T # N .
COROLLARY 6.2. A quantum network for exact or zero-error search requires
queries.
In contrast, under the promise that the number of solutions is either 0 or t , for
some fixed known t , exact search can be done in O( # N/t) queries [Brassard et al.
2000]. A partial block sensitivity argument (see the comment following Theorem
shows that this is optimal up to a multiplicative constant.
Like the OR-function, PARITY has deg(PARITY N so by Theorem 4.3
exact computation requires at least #N/2# queries. This is also sufficient. It is
well known that the XOR of 2 variables can be computed using only one query
[Cleve et al. 1998]. Assuming N even, we can group the variables of X as N/2
pairs: compute the XOR of all pairs using
N/2 queries. The parity of X is the parity of these N/2 XOR values, which can
be computed without any further queries. If we allow bounded-error, then #N/2#
queries of course still suffice. It follows from Theorem 4.8 that this cannot be
improved, because
LEMMA 6.3 [MINSKY AND PAPERT 1968].
PROOF. Let f be PARITY on N variables. Let p be a polynomial of degree
f ) that approximates f . Since p approximates f , its symmetrization p sym
also approximates f . By Lemma 3.2, there is a polynomial q, of degree at most
inputs. Thus, we must have | f (X
(assuming N
even).
We see that the polynomial q(x) - 1/2 must have at least N zeroes, hence q has
degree at least N and
PROPOSITION 6.4. If f is PARITY on {0, 1} N , then Q
#N/2#. 6
Note that this result also implies that Theorems 4.3 and 4.8 are tight. For classical
algorithms, N queries are necessary in the exact, zero-error, and bounded-error
settings. Note that while computing PARITY on a quantum network is much harder
than OR in the bounded-error setting (#N/2# versus # N )), in the exact setting
PARITY is actually easier (#N/2# versus N ).
6 This has also been proved independently by Farhi et al. [1998], using a different technique. As noted
independently by Terhal [1997] and Farhi et al. [1998], this result immediately implies results by
Ozhigov [1998] to the effect that no quantum computer can significantly speed up the computation of
all functions (this follows because no quantum computer can significantly speed up the computation
of PARITY).
794 R. BEALS ET AL.
The upper bound on PARITY uses the fact that the XOR connective can be
computed with only one query. Using polynomial arguments, it turns out that XOR
and its negation are the only examples among all 16 connectives on 2 variables
where quantum gives an advantage over classical computation.
Since OR N can be reduced to MAJORITY on 2N -1 variables (if we set the first
variables to 1, then the MAJORITY of all variables equals the OR of the last
N variables) and OR requires N queries to be computed exactly or with zero-error,
it follows that MAJ N takes at least (N queries. Hayes et al. [1998] found
an exact quantum algorithm that uses at most N
is the number of 1s in the binary representation of N ; this can save up to log N
queries. This also follows from classical results [Saks and Werman 1991; Alonso
et al. 1993] that show that an item with the majority value canbe identified classically
deterministically with N -w(N ) comparisons between bits (a comparison between
two input bits is the parity of the two bits, which can be computed with 1 quantum
query). For the zero-error case, the same (N + 1)/2 lower bound applies; Hayes
et al. [1998] give a zero-error quantum algorithm that works in roughly 2
queries.
For the bounded-error case, we can apply Theorem 4.10: #(MAJ N so we need
queries. The best upper bound we have here is N/2
which follows from [van Dam 1998].
The #(N ) lower bound for MAJORITY also implies a lower bound for the
number of comparisons required to sort N totally ordered elements. It is well known
that N log N +#(N ) comparisons between elements are necessary and sufficient
for sorting on a classical computer. Note that if we can sort then we can compute
MAJORITY: if we sort the N -bit black-box, then the bit at the (N/2)th position
gives the MAJORITY-value (a comparison between 2 black-box bits can easily be
simulated by a few queries). Hence, our #(N )-bound for MAJORITY implies:
COROLLARY 6.5. Sorting N elements on a quantum computer takes at least
An #(N ) lower bound for sorting was also derived independently in Farhi et al.
[1999a], via a different application of our polynomial-based method. The bound
has recently been improved to the optimal #(N log N ) [H-yer et al. 2001].
ACKNOWLEDGMENTS
We thank Lance Fortnow for stimulating discussions on
many of the topics treated here; Alain Tapp for sending us a preliminary version
of Brassard et al. [1998] and subsequent discussions about quantum counting;
Andris Ambainis for sending us his proof that most functions cannot be computed
with bounded-error using significantly fewer than N queries; NoamNisan for
sending us his and Roman Smolensky's proof that D( f
Melkebeek, Tom Hayes, and Sandy Kutin for their algorithms for MAJORITY;
Hayes and Kutin for the reference to Gathen and Roche [1997]; and two anonymous
referees for some comments that improved the presentation of the paper.
--R
Determining the majority.
A note on quantum black-box complexity of almost all Boolean functions
Strengths and weaknesses of quantum computing.
Tight bounds on quantum searching.
Fortschritte der Physik
Quantum algorithm for the collision problem.
Quantum algorithms for element distinctness.
To appear.
The query complexity of order-finding
London A454
Schwankung von Polynomen zwischen Gitterpunkten.
A limit on the speed of quantum computation in determining parity.
How many functions can be distinguished with k quantum queries?
An oracle builder's toolkit.
Complexity limitations on quantum computation.
A fast quantum mechanical algorithm for database search.
Conjugated operators in quantum algorithms.
Quantum bounds for ordered searching and sorting.
Quantum measurements and the Abelian stabilizer problem.
Quantum searching
In MFCS'98 Workshop on Randomized Algorithms
The hidden subgroup problem and eigenvalue estimation on a quantum computer.
CREW PRAMs and decision trees.
On the degree of Boolean functions as real polynomials.
Quantum computer can not speed up iterated applications of a black box.
A comparison of uniform approximations on an interval and a finite subset thereof.
Probabilistic Boolean decision trees and the complexity of evaluating game trees.
On computing majority by comparisons.
On the Monte Carlo decision tree complexity of read-once formulae
On the power of quantum computation.
Characterization of non-deterministic quantum query and quantum communication complexity
--TR
Perceptrons: expanded edition
CREW PRAMs and decision trees
On the degree of polynomials that approximate symmetric Boolean functions (preliminary version)
Determining the majority
On the degree of Boolean functions as real polynomials
A fast quantum mechanical algorithm for database search
Quantum cryptanalysis of hash and claw-free functions
On the Power of Quantum Computation
Polynomial-Time Algorithms for Prime Factorization and Discrete Logarithms on a Quantum Computer
Strengths and Weaknesses of Quantum Computing
A framework for fast quantum mechanical algorithms
Quantum vs. classical communication and computation
The quantum query complexity of approximating the median and related statistics
A note on quantum black-box complexity of almost all Boolean functions
Complexity limitations on Quantum computation
Quantum lower bounds by quantum arguments
Complexity measures and decision tree complexity
The Hidden Subgroup Problem and Eigenvalue Estimation on a Quantum Computer
Quantum Computer Can Not Speed Up Iterated Applications of a Black Box
Quantum Complexities of Ordered Searching, Sorting, and Element Distinctness
Quantum Counting
Quantum Cryptanalysis of Hidden Linear Functions (Extended Abstract)
Characterization of Non-Deterministic Quantum Query and Quantum Communication Complexity
The Query Complexity of Order-Finding
Quantum Lower Bounds by Polynomials
Quantum Oracle Interrogation
Bounds for Small-Error and Zero-Error Quantum Algorithms
A Probabilistic Algorithm for k-SAT and Constraint Satisfaction Problems
An Exact Quantum Polynomial-Time Algorithm for Simon''s Problem
Quantum Algorithms for Element Distinctness
Quantum versus Classical Learnability
On the Quantum Complexity of Majority
--CTR
Yves F. Verhoeven, Enhanced algorithms for local search, Information Processing Letters, v.97 n.5, p.171-176, March 2006
Pascal Koiran , Vincent Nesme , Natacha Portier, The quantum query complexity of the abelian hidden subgroup problem, Theoretical Computer Science, v.380 n.1-2, p.115-126, June, 2007
Iordanis Kerenidis , Ronald de Wolf, Exponential lower bound for 2-query locally decodable codes via a quantum argument, Journal of Computer and System Sciences, v.69 n.3, p.395-420, November 2004
Harry Buhrman , Robert palek, Quantum verification of matrix products, Proceedings of the seventeenth annual ACM-SIAM symposium on Discrete algorithm, p.880-889, January 22-26, 2006, Miami, Florida
Yaoyun Shi, Quantum and classical tradeoffs, Theoretical Computer Science, v.344 n.2-3, p.335-345, 17 November 2005
Peter Hoyer , Troy Lee , Robert Spalek, Negative weights make adversaries stronger, Proceedings of the thirty-ninth annual ACM symposium on Theory of computing, June 11-13, 2007, San Diego, California, USA
Franois Le Gall, Exponential separation of quantum and classical online space complexity, Proceedings of the eighteenth annual ACM symposium on Parallelism in algorithms and architectures, July 30-August 02, 2006, Cambridge, Massachusetts, USA
Scott Aaronson, Lower bounds for local search by quantum arguments, Proceedings of the thirty-sixth annual ACM symposium on Theory of computing, p.465-474, June 13-16, 2004, Chicago, IL, USA
Miklos Santha , Mario Szegedy, Quantum and classical query complexities of local search are polynomially related, Proceedings of the thirty-sixth annual ACM symposium on Theory of computing, June 13-16, 2004, Chicago, IL, USA
Igor E. Shparlinski, Bounds on the Fourier coefficients of the weighted sum function, Information Processing Letters, v.103 n.3, p.83-87, July, 2007
Holger Spakowski , Rahul Tripathi, LWPP and WPP are not uniformly gap-definable, Journal of Computer and System Sciences, v.72 n.4, p.660-689, June 2006
Andris Ambainis , Robert palek , Ronald de Wolf, A new quantum lower bound method,: with applications to direct product theorems and time-space tradeoffs, Proceedings of the thirty-eighth annual ACM symposium on Theory of computing, May 21-23, 2006, Seattle, WA, USA
Andris Ambainis, Polynomial degree vs. quantum query complexity, Journal of Computer and System Sciences, v.72 n.2, p.220-238, March 2006
Shengyu Zhang, New upper and lower bounds for randomized and quantum local search, Proceedings of the thirty-eighth annual ACM symposium on Theory of computing, May 21-23, 2006, Seattle, WA, USA
Frdric Magniez , Miklos Santha , Mario Szegedy, Quantum algorithms for the triangle problem, Proceedings of the sixteenth annual ACM-SIAM symposium on Discrete algorithms, January 23-25, 2005, Vancouver, British Columbia
A. Ambainis, Quantum search algorithms, ACM SIGACT News, v.35 n.2, June 2004
Shengyu Zhang, On the power of Ambainis lower bounds, Theoretical Computer Science, v.339 n.2, p.241-256, 12 June 2005
Scott Aaronson, Guest Column: NP-complete problems and physical reality, ACM SIGACT News, v.36 n.1, March 2005 | query complexity;quantum computing;lower bounds;black-box model;polynomial method |
502099 | Extractors and pseudorandom generators. | We introduce a new approach to constructing extractors. Extractors are algorithms that transform a "e;weakly random"e; distribution into an almost uniform distribution. Explicit constructions of extractors have a variety of important applications, and tend to be very difficult to obtain.We demonstrate an unsuspected connection between extractors and pseudorandom generators. In fact, we show that every pseudorandom generator of a certain kind is an extractor.A pseudorandom generator construction due to Impagliazzo and Wigderson, once reinterpreted via our connection, is already an extractor that beats most known constructions and solves an important open question. We also show that, using the simpler Nisan--Wigderson generator and standard error-correcting codes, one can build even better extractors with the additional advantage that both the construction and the analysis are simple and admit a short self-contained description. | Introduction
An extractor is an algorithm that converts a \weak source of randomness" into an almost uniform
distribution by using a small number of additional truly random bits. Extractors have several
important applications (see e.g. [Nis96]). In this paper we show that pseudorandom generator
constructions of a certain kind are extractors. Using our connection and some new ideas we describe
constructions of extractors that improve most previously known constructions and that are simpler
than previous ones.
1.1 Denitions
We now give the formal denition of an extractor and state some previous results. We rst need
to dene the notions of min-entropy and statistical dierence.
We say that (the distribution of) a random variable X of range f0; 1g n has min-entropy at least
k if for every x 2 f0; 1g n it holds is an integer, then a canonical example of
a distribution having min-entropy k is the uniform distribution over a set S f0; 1g n of cardinality
Indeed, it is implicit in [CG88] that if a distribution has min-entropy k then it is a convex
combination of distributions each one of which is uniform over a set of size 2 k . We will consider
distributions of min-entropy k as the formalization of the notion of weak sources of randomness
containing k \hidden" bits of randomness. In the rest of this paper we will often call (n; k)-source a
An extended abstract of this paper is to appear in the Proceedings of the 31st ACM Symposium on Theory of
Computing [Tre99].
y luca@cs.columbia.edu. Computer Science Department, Columbia University.
random variable X ranging over f0; 1g n and having min-entropy at least k. The use of min-entropy
to measure \hidden randomness" has been advocated by Chor and Goldreich [CG88] and, in full
generality, by Zuckerman [Zuc90]. The statistical dierence between two random variables X and
Y with range f0; 1g n is dened as
and we say that X and Y are "-close if jjX Y jj ". For an integer l we denote by U l a random
variable that is uniform over f0; 1g l .
A function Ext : f0; 1g n f0; 1g t ! f0; 1g m is a (k; ")-extractor if for every random variable
X of min entropy at least k it holds that Ext(X; U t ) is "-close to the uniform distribution over
. A weaker kind of combinatorial construction has also been considered: A function Disp :
is a (k; ")-disperser if for every subset S f0; 1g m such that jSj > "2 m
and for every X of min-entropy k it holds Pr[Disp(X; U t
One would like to have, for every n and k, constructions where t is very small and m is as close
to k as possible. There are some limitations towards this goal: One can show that, if k < n 1
and " < 1=2, then it must be the case that t
it must be the case that m It is possible to show (non-
constructively) that for every n; k; ", there is a (k; ")-extractor
It is an open question to match such
bounds with polynomial-time computable functions Ext.
1.2 Previous Work and Applications
The natural application of extractors is to allow the simulation of randomized algorithms even in
(realistic) settings where only weak sources of randomness are available. This line of research has a
long history, that dates back at least to von Neumann's algorithm for generating a sequence of unbiased
bits from a source of biased but identically distributed and independent bits [vN51]. More
recent work by Santha and Vazirani [SV86] and Vazirani and Vazirani [VV85] considered much
weaker sources of randomness (that they call \slightly random" sources) that are still su-cient to
allow simulations of arbitrary randomized algorithms. These results were generalized by Chor and
Goldreich [CG88] and Cohen and Wigderson [CW89], and nally by Zuckerman [Zuc90], who introduced
the current denition (based on min-entropy) of weak random sources and a construction
of extractors (although the term extractor was coined later, in [NZ93]). The main question about
simulation of randomized algorithms using weak random sources can be stated as follows: suppose
that for every n we have access to a (n; k(n))-source, and that we are given a polynomial-time
randomized algorithm that we want to simulate given only one access to one of the sources: what
is the most slowly growing function k() such that we can have a polynomial-time simulations? For
a \black-box" simulation, where the randomized algorithm is given as an oracle, it is impossible to
solve the simulation problem in polynomial time with a family of (n; n o(1) )-sources. The best one
can hope to achieve is to have, for every - > 0, a simulation that works in polynomial time given
a (n; n -source. We will call such a simulation an entropy-rate optimal simulation. Improved constructions
of extractors appeared in [NZ93, SZ94, TS96, Zuc96b], but none of these constructions
implies an entropy-rate optimal simulation of randomized algorithms. Dispersers are objects similar
to, but less powerful than, extractors. Randomized algorithms having one-sided error probability
can be simulated by using weak random sources and dispersers. Saks et al. [SSZ98] give a construction
of dispersers that implies an entropy-rate optimal simulation of one-sided error randomized
Reference Min entropy k Output length m Additional randomness t Type
[GW97] n a n (a) O(a) Extractor
log log n) Extractor
poly log n
This paper
any k
Table
1: Parameters in previous constructions and our construction of (k; ") extractors Ext :
. In the expressions, " is xed and arbitrarily small, and > 0 is an
arbitrarily small constant. O() notations hide dependencies on " and .
algorithms with weak random sources. Andreev et al. [ACRT99] show how to use the dispersers
of Saks et al. in order to give entropy-rate optimal simulations of general randomized algorithms
using weak random sources. The result of Andreev et al. leaves open the question of whether there
exists a construction of extractors that is good enough to imply directly such entropy-rate optimal
simulations.
Extractors are also used to derandomize randomized space-bounded computations [NZ93] and
for randomness-e-cient reduction of error in randomized algorithms (see [Zuc96b, GZ97] and references
therein). They yield oblivious samplers (as dened in [BR94]), that have applications to
interactive proofs (see [Zuc96b] and references therein). They also yield expander graphs, as discovered
by Wigderson and Zuckerman [WZ93], that in turn have applications to superconcentrators,
sorting in rounds, and routing in optical networks. Constructions of expanders via constructions of
extractors and the Wigderson-Zuckerman connection appeared in [NZ93, SZ94, TS96], among oth-
ers. Extractors can also be used to give simple proofs of certain complexity-theoretic results [GZ97],
and to prove certain hardness of approximation results [Zuc96a]. An excellent introduction to extractors
and their applications is given by a recent survey by Nisan [Nis96] (see also [NTS98], and
[Gol99] for a broader perspective).
In
Table
1 we summarize the parameters of the previous best constructions, and we state two
special cases of the parameters arising in our construction.
1.3 Our Result
The extractors constructed in this paper work for any min-entropy
43 , extract a slightly
sub-linear fraction of the original randomness (i.e. the length of the output is for an
arbitrarily small > 0) and use O(log n) bits of true randomness. In fact, a more general result
holds, as formalized below.
Theorem 1 (Main) There is an algorithm that on input parameters n,
where
(log
log(k=2m) e
log(k=2m)
In particular, for any xed constants " > 0 and 0 <
< 1 we have for every n an explicit
polynomial-time construction of an (n
It should be noted that the running time of our extractor is exponential in the parameter t
(additional randomness), and so the running time is super-polynomial when the additional randomness
is super-logarithmic. However, the 2 t factors in the running time of the extractor is payed
only once, to construct a combinatorial object (a \design") used by the extractor. After the design
is computed, each evaluation of the extractor can be implemented in linear time plus the time that
it takes to compute an error-correcting encoding of the input of the extractor. It is possible to
generate designs more e-ciently, and so to have a polynomial-time extractor construction for every
min-entropy. We omit the details of such construction, since the construction of \weak designs" in
[RRV99] (see below) give better extractors and is also more e-ciently computable.
Our construction improves on the construction of Saks, Srinivasan and Zhou [SSZ98] since
we construct an extractor rather than a disperser, and improves over the constructions of Ta-
Shma [TS96] since the additional randomness is logarithmic instead of slightly super-logarithmic.
The best previous construction of extractors using O(log n) additional randomness was the one
of Zuckerman [Zuc96b], that only works when the min-entropy is a constant fraction of the input
length, while in our construction every min-entropy of the form n
is admissible. (On the other
hand, the extractor of [Zuc96b] extracts a constant factor of the entropy, while we only extract
a constant root of it.) Our construction yields an entropy-rate optimal simulation of randomized
algorithms using weak random sources. In contrast to the result of [ACRT99] we can use a weak
random source to generate almost uniformly distributed random bits independently of the purpose
for which the random bits are to be used. 1
Our construction is not yet the best possible, since we lose part of the randomness of the source
and because the additional randomness is logarithmic only as long as
42 . (See also discussion
in Section 1.6 below.)
1.4 Techniques
This paper contains two main contributions.
The rst one is a connection (outlined in Section 2) between pseudorandom generators of a certain
kind and extractors. Our connection applies to certain pseudorandom generator constructions
that are based on the (conjectured) existence of predicates (decision problems) that can be uniformly
computed in time t(n) but cannot be solved by circuits of size much smaller than t(n). The
analysis of such constructions shows that if the predicate is hard, then it is also hard to distinguish
the output of the generator from the uniform distribution. This implication is proved by means of
a reduction showing how a circuit that is able to distinguish the output of the generator from the
uniform distribution can be transformed into a slightly larger circuit that computes the predicate.
(Impagliazzo and Wigderson [IW97] present one such construction with very strong parameters.)
Our result is that if the (truth table of the) predicate is chosen randomly, according to a distribution
with su-ciently high min-entropy, then the output of the generator is statistically close to uniform.
This statement is incomparable with standard analyses: we use a stronger assumption (that the
predicate is random instead of xed and hard) and prove a stronger conclusion (that the output
is statistically close to, instead of indistinguishable from, the uniform distribution). An immediate
application is that a pseudorandom generator construction of this kind is an extractor. Our result
1 Andreev et al. [ACRT99] show how to produce a sequence of bits that \look random" to a specic algorithm,
and their construction works by having oracle access to the algorithm. So it is not possible to generate random bits
\oine" before xing the application where the bits will be used.
has a straightforward proof, based on a simple counting argument. The main contribution, indeed,
is the statement of the result, rather than its proof, since it involves a new, more general, way of
looking at pseudorandom generator constructions. The Impagliazzo-Wigderson generator, using
our connection, is an extractor that beats some previous constructions and that is good enough
to imply entropy-rate optimal simulations of randomized algorithms. We stress that although the
Impagliazzo-Wigderson generator is known to be a pseudorandom generator only under unproved
conjectures, it is unconditionally a good extractor (i.e. we do not use any complexity-theoretic
assumption in our work).
Our second contribution is a construction that is simpler to describe and analyse (the generator
of Impagliazzo and Wigderson is quite complicated) and that has somewhat better parameters. Our
idea is to use a pseudorandom generator construction due to Nisan and Wigderson [NW94], that is
considerably simpler than the one of Impagliazzo and Wigderson (indeed the construction of Impagliazzo
and Wigderson contains the one of Nisan and Wigderson as one of its many components).
The Nisan-Wigderson generator has weaker properties than the Impagliazzo-Wigderson generator,
and our ideas outlined in Section 2 would not imply that it is an extractor as well. In Section 3
we show how to use error-correcting codes in order to turn the Nisan-Wigderson generator into a
very good extractor. Section 3 contains a completely self-contained treatment of the construction
and the analysis.
1.5 Perspective
For starters, our construction improves upon previous ones and solves the question of constructing
extractors that use a logarithmic amount of randomness, work for any min-entropy that is
polynomially related to the length of the input and have an output that is polynomially related
to the amount of entropy. Such a construction has been considered an important open question
(e.g. in [NTS98, Gol99]), even after Andreev et al. [ACRT99] showed that one does not need such
extractors in order to develop an entropy-rate optimal simulation of randomized algorithms via
random sources. Indeed, it was not clear whether the novel approach introduced by Andreev
et al. was necessary in order to have optimal simulations, or whether a more traditional approach
based on extractors was still possible. Our result clarifies this point, by showing that the traditional approach suffices.
Perhaps more importantly, our construction is simpler to describe and analyse than the most recent previous constructions, and it uses a very different approach. Hopefully, our approach offers more room for improvement than previous, deeply exploited, ones. Raz et al. [RRV99] have
already found improvements to our construction (see below). Tight results may come from some
combination of our ideas and previous ones.
Our use of results about pseudorandomness in the construction of extractors may come as a
surprise: pseudorandom generation deals with (and takes advantage of) a computational definition
of randomness, while extractors are combinatorial objects used in a framework where information-theoretic
randomness is being considered. In the past there have been some instances of (highly
non-trivial) results about computational randomness inspired by (typically trivial) information-theoretic
analogs, e.g. the celebrated Yao's XOR Lemma and various kinds of "direct product" results (see e.g. [GNW95]). On the other hand, it seemed "clear" that one could not go the other
way, and have information-theoretic applications of computational results. This prejudice might
be one reason why the connection discovered in this paper has been missed by the several people
who worked on weak random sources and on pseudorandomness in the past decade (including those
who did foundational work in both areas). Perhaps other important results might be proved along
similar lines.
1.6 Related Papers
The starting point of this paper was an attempt to show that every disperser can be modified
into an extractor having similar parameters. This was inspired by the fact (noted by several
people, including Andrea Clementi and Avi Wigderson) that every hitting set generator can be
transformed into a pseudorandom generator with related parameters, since the existence of hitting
set generators implies the existence of problems solvable in exponential time and having high
circuit complexity [ACR98] and the existence of such problems can be used to build pseudorandom
generators [BFNW93, IW97]. The fact that an information-theoretic analog of this result could
be true was suggested by the work done in [ACRT99], where proof-techniques from [ACR98] were
adapted in an information-theoretic setting. We were indeed able to use the Impagliazzo-Wigderson
generator in order to show that any construction of dispersers yields a construction of extractors
with slightly worse parameters. Oded Goldreich then pointed out that we were not making any essential use of the disperser in our construction, and that we were effectively proving that the Impagliazzo-Wigderson generator is an extractor (this result is described in Section 2). The use of error-correcting codes and of the Nisan-Wigderson generator (as in Section 3) was inspired by an alternative proof of some of the results of [IW97] due to Sudan et al. [STV99].
Shortly after the announcement of the results of this paper, Raz, Reingold and Vadhan [RRV99]
devised an improvement to our construction. In our construction, if the input has min-entropy k and the output is required to be of length m, and ε is a constant, then the additional randomness is O(m^{1/log(k/2m)} (log n) log(k/2m)), which becomes very bad when k is close to 2m (since then log(k/2m) is close to zero). In [RRV99], the dependency is O((log n) log(k/m)), so the randomness is bounded by O(log^2 n) even in that range. Furthermore, the running time of the extractors of [RRV99] is poly(n, t) rather than poly(n, 2^t) as in the construction presented in this paper. Raz et al. [RRV99] also show how to recursively compose their construction with itself (along the lines of [WZ93]) and obtain another construction where the output length is close to k and the additional randomness is O(log^3 n). Constructions of extractors with such parameters have applications to the explicit construction of expander graphs [WZ93]. In particular, Raz et al. [RRV99] present constructions of expander graphs and of superconcentrators that improve previous ones by Ta-Shma [TS96]. Raz et al. [RRV99] also improve the dependency that we have between the additional randomness and the error parameter ε.
1.7 Organization of the Paper
We present in Section 2 our connection between pseudorandom generator constructions and ex-
tractors. The main result of Section 2 is that the Impagliazzo-Wigderson generator [IW97] is a
good extractor. In Section 3 we describe and analyse a simpler construction based on the Nisan-
Wigderson generator [NW94] and on error correcting codes. Section 3 might be read independently
of Section 2.
2 The Connection Between Pseudorandom Generators and Extractors
This section describes our main idea of how to view a certain kind of pseudorandom generator as
an extractor. Our presentation is specialized to the Impagliazzo-Wigderson generator, but the results might be stated in a more general fashion.
2.1 Computational Indistinguishability and Pseudorandom Generators
We start by defining the notions of computational indistinguishability and pseudorandom generators, due to Blum, Goldwasser, Micali and Yao [GM84, BM84, Yao82].
Recall that we denote by U_n the uniform distribution over {0,1}^n. We say that two random variables X and Y with the same range {0,1}^n are (S, ε)-indistinguishable if for every T : {0,1}^n → {0,1} computable by a circuit of size S it holds that |Pr[T(X) = 1] − Pr[T(Y) = 1]| ≤ ε.
One may view the notion of ε-closeness (that is, of statistical difference less than ε) as the "limit" of the notion of (S, ε)-indistinguishability for unbounded S.
Informally, a pseudorandom generator is an algorithm G : {0,1}^t → {0,1}^m such that G(U_t) is (S, ε)-indistinguishable from U_m, for large S and small ε. For derandomization of randomized algorithms, one looks for generators, say, G : {0,1}^{O(log m)} → {0,1}^m, such that G(U_{O(log m)}) is (m^{O(1)}, 1/3)-indistinguishable from U_m. Such generators (if they were uniformly computable in time poly(m)) would imply deterministic polynomial-time simulations of randomized polynomial-time algorithms.
2.2 Constructions of Pseudorandom Generators Based on Hard Predicates
Given current techniques, all interesting constructions of pseudorandom generators have to rely on
complexity-theoretic conjectures. For example the Blum-Micali-Yao [BM84, Yao82] construction
(that has different parameters from the ones exemplified above) requires the existence of strong one-way permutations. In a line of work initiated by Nisan and Wigderson [Nis91, NW94], there have been results showing that the existence of hard-on-average decision problems in E is sufficient to construct pseudorandom generators. (Recall that E is the class of decision problems solvable deterministically in time 2^{O(n)} where n is the length of the input.) Babai et al. [BFNW93] present a construction of pseudorandom generators that only requires the existence of decision problems in E having high worst-case complexity. An improved construction of pseudorandom generators from
worst-case hard problems is due to Impagliazzo and Wigderson [IW97], and it will be the main
focus of this section. The constructions of [NW94, BFNW93, IW97] require non-uniform hardness,
that is, use circuit size as a complexity measure. (Recent work demonstrated that non-trivial
constructions can be based on uniform worst-case conditions [IW98].)
The main result of [IW97] is that if there is a decision problem solvable in time 2^{O(n)} that cannot be solved by circuits of size less than 2^{δn}, for some δ > 0, then every randomized polynomial-time algorithm has a deterministic polynomial-time simulation. A precise statement of the result of Impagliazzo and Wigderson follows.
Theorem 2 ([IW97]) Suppose that there exists a family {P_l}_{l≥0} of predicates P_l : {0,1}^l → {0,1} that is decidable in time 2^{O(l)}, and a constant δ > 0 such that, for every sufficiently large l, P_l has circuit complexity at least 2^{δl}.
Then for every constant ε > 0 and parameter m there exists a pseudorandom generator IW_m : {0,1}^{O(log m)} → {0,1}^m computable in time poly(m) such that IW_m(U_{O(log m)}) is (O(m), ε)-indistinguishable from the uniform distribution U_m.
Results about pseudorandomness are typically proved by contradiction. Impagliazzo and Wigderson
prove Theorem 2 by establishing the following result.
Lemma 3 ([IW97]) For every pair of constants ε > 0 and δ > 0 there exists a positive constant γ and an algorithm that, on input a length parameter l and having oracle access to a predicate P : {0,1}^l → {0,1}, computes a function IW^P_l : {0,1}^{O(l)} → {0,1}^m (with m determined by l and γ) such that for every T : {0,1}^m → {0,1}, if |Pr[T(IW^P_l(U_{O(l)})) = 1] − Pr[T(U_m) = 1]| ≥ ε, then P is computed by a circuit A that uses T-gates and whose size is at most 2^{δl}.
By a \circuit with T -gates" we mean a circuit that can use ordinary fan-in-2 AND and OR gates
and fan-in-1 NOT gates, as well as a special gate (of fan-in m) that computes T with unit cost.
This is the non-uniform analog of a computation that makes oracle access to T .
It might not be immediate to see how Theorem 2 follows from Lemma 3. The idea is that if we have a predicate P that cannot be computed by circuits of size 2^{2δl}, then IW^P_l(U_{O(l)}) is (2^{δl}, ε)-indistinguishable from uniform. This can be proved by contradiction: if T is computed by a circuit of size 2^{δl} and is such that T distinguishes IW^P_l(U_{O(l)}) from uniform with advantage at least ε, then there exists a circuit A of size at most 2^{δl} that uses T-gates such that A computes P. If each T-gate is replaced by the circuit that computes T, we end up with a circuit of size at most 2^{2δl} that computes P, a contradiction to our initial assumption.
We stress that Lemma 3 has not been stated in this form by Impagliazzo and Wigderson [IW97].
In particular, the fact that their analysis applies to every predicate P and to every function T ,
regardless of their circuit complexity, was not explicitly stated. On the other hand, this is not a
peculiar or surprising property of the Impagliazzo-Wigderson construction: in general in complexity
theory and in cryptography the correctness of a transformation of an object with certain assumed
properties (e.g. a predicate with large circuit complexity) into an object with other properties (e.g.
a generator whose output is indistinguishable from uniform) is proved by \black-box" reductions,
that work by making \oracle access" to the object and making no assumptions about it.
We also mention that three recent papers exploit the hidden generality of the pseudorandom
generator construction of Impagliazzo and Wigderson, and of the earlier one by Nisan and Wigderson
[NW94]. Arvind and Kobler [AK97] observe that the analysis of the Nisan-Wigderson generator
extends to \non-deterministic circuits," which implies the existence of pseudorandom generators for
non-deterministic computations, under certain assumptions. Klivans and Van Melkebeek [KvM99]
observe similar generalizations of the Impagliazzo-Wigderson generator for arbitrary non-uniform
complexity measures having certain closure properties (which does not include non-deterministic
circuit size, but includes the related measure "size of circuits having an NP-oracle"). The construction
of pseudorandom generators under uniform assumptions by Impagliazzo and Wigderson
[IW98] is also based on the observation that the results of [BFNW93] can be stated in a general
form where the hard predicate is given as an oracle, and the proof of security can also be seen as
the existence of an oracle machine with certain properties.
A novel aspect in our view (that is not explicit in [IW97, AK97, KvM99, IW98]) is to see the
Impagliazzo-Wigderson construction as an algorithm that takes two inputs: the truth-table of a
predicate and a seed. The Impagliazzo-Wigderson analysis says something meaningful even when
the predicate is not xed and hard (for an appropriate complexity measure), but rather supplied
dynamically to the generator. In the rest of this section, we will see that if the (truth table of the)
predicate is sampled from a weak random source of sufficiently large min-entropy, then the output
of the Impagliazzo-Wigderson generator is statistically close to uniform: that is, the Impagliazzo-
Wigderson generator is an extractor.
2.3 Using a Random Predicate Instead of a Hard One
Let us introduce the following additional piece of notation: let n = 2^l; for a string x ∈ {0,1}^n we denote by ⟨x⟩ : {0,1}^l → {0,1} the predicate whose truth-table is x.
Lemma 4 Fix constants ε, δ > 0 and an integer parameter l. Consider the function Ext : {0,1}^n × {0,1}^t → {0,1}^m defined as Ext(x, s) = IW^{⟨x⟩}_l(s), where IW_l is the algorithm of Lemma 3 (Equation (1)); then Ext is a (k, 2ε)-extractor, for k as specified in the proof below.
Proof: We have to prove that for every random variable X whose min-entropy k is at least log(1/ε) plus the logarithm of the number of circuits of size 2^{δl} with T-gates of fan-in at most m, and for every T : {0,1}^m → {0,1},
|Pr[T(Ext(X, U_t)) = 1] − Pr[T(U_m) = 1]| ≤ 2ε.   (2)
Let us fix X and T and prove that Expression (2) holds for them. Let us call B ⊆ {0,1}^n the set of bad strings x for which
|Pr[T(Ext(x, U_t)) = 1] − Pr[T(U_m) = 1]| > ε.   (3)
For each such x, the analysis of Impagliazzo and Wigderson implies that there is a circuit of size 2^{δl} that uses T-gates and that computes ⟨x⟩. Since T is fixed, and any two different predicates are computed by two different circuits, we can conclude that the total number of elements of B is at most the number of circuits of size 2^{δl} with gates of fan-in at most m, so that, by our choice of k,
|B| ≤ ε · 2^k.   (4)
Since X has high min-entropy, and B is small, the probability of picking an element of B when sampling from X is small, that is, Pr[X ∈ B] ≤ |B| · 2^{−k} ≤ ε. Then we have
|Pr[T(Ext(X, U_t)) = 1] − Pr[T(U_m) = 1]| ≤ Pr[X ∈ B] + Pr[X ∉ B] · ε ≤ 2ε,
where the first inequality is an application of the triangle inequality and the definition of B, and the second follows from Expression (4). 2
If we translate the parameters in a more traditional format we have the following extractor
construction.
Theorem 5 For every pair of positive constants δ and ε there is a γ > 0 and an explicit construction of a (n^δ, ε)-extractor Ext : {0,1}^n × {0,1}^{O(log n)} → {0,1}^{n^γ}.
Proof: We proved above that for every constant ε > 0 and δ > 0 there is a 0 < γ < δ such that there is a (k, 2ε)-extractor construction with seed length O(log n) and output length n^γ whenever k ≥ n^δ. We can then set the constants appropriately (in particular, halving ε) to get the parameters claimed in the theorem. 2
This is already a very good construction of extractors, and it implies an entropy-rate optimal
simulation of randomized algorithms using weak random sources.
We mentioned in Section 2.2 that Babai et al. [BFNW93] were the rst to give a construction
of pseudorandom generators based on worst-case hard predicates. In particular, a weaker version
of Lemma 3 is proved in [BFNW93], with similar parameters except that
O(l). The analysis of this section applies to the construction of Babai et al. [BFNW93], and
one can show that their construction gives extractors with the same parameters as in Theorem 5,
except that one would have
By deriving a more accurate estimation of the parameters in the Impagliazzo-Wigderson con-
struction, it would be possible to improve on the statement of Theorem 5. More specifically, it could be possible to have an explicit dependency of the parameter t on δ and ε, and an explicit expression for γ(δ, ε). However, such improved analysis would not be better than the analysis of
the construction that we present in the next section, and so we do not pursue this direction.
3 Main Result
In this section we present a construction of extractors based on the Nisan-Wigderson generator and error-correcting codes. The Nisan-Wigderson generator is simpler than the Impagliazzo-Wigderson generator considered in the previous section, and this simplicity will allow us to gain in efficiency.
The advantages of the construction of this section over the construction of the previous section are better quantitative parameters and the possibility of giving a self-contained and relatively simple presentation. The subsequent work of Raz et al. [RRV99] took the construction of this section as a starting point, and improved the primitives that we use.
3.1 Overview
The Nisan-Wigderson generator works similarly to the Impagliazzo-Wigderson one: it has access
to a predicate, and its output is indistinguishable from uniform provided that the predicate is hard
(but, as will be explained in a moment, a stronger notion of hardness is being used). This is proved
by means of a reduction that shows that if T is a test that distinguishes the output of the generator
with predicate P from uniform, then there is a small circuit with one T -gate that approximately
computes P . That is, the circuit computes a predicate that agrees with P on a fraction of inputs
noticeably bounded away from 1/2.
Due to this analysis, we can say that the output of the Nisan-Wigderson generator is indistinguishable
from uniform provided that the predicate being used is hard to compute approximately, as
opposed to merely hard to compute exactly, as in the case of the Impagliazzo-Wigderson generator.
We may be tempted to define and analyse an extractor Ext based on the Nisan-Wigderson generator using exactly the same approach of the previous section. Then, as before, we would assume for the sake of contradiction that a test T distinguishes the output Ext(X, U_t) of the extractor from the uniform distribution, and we would look at how many strings x there are such that |Pr[T(Ext(x, U_t)) = 1] − Pr[T(U_m) = 1]| can be large. For each such x we can say that there is a circuit of size S that describes a string whose Hamming distance from x is noticeably less than 1/2. Since there are about 2^S such circuits, the total number of bad strings x is at most 2^S times the number of strings that can belong to a Hamming sphere of radius about 1/2. If X is sampled from a distribution whose min-entropy is much bigger than the logarithm of the number of possible bad x, then we reach a contradiction to the assumption that T was distinguishing Ext(X, U_t) from uniform. When this calculation is done with the actual parameters of the Nisan-Wigderson generator, the result is very bad, because the number of strings that belong to a Hamming sphere of the proper radius is huge. This is, however, the only point where the approach of the previous section breaks down.
Our solution is to use error-correcting codes, specically, codes with the property that every
Hamming sphere of a certain radius contains a small number of codewords. We then define an extractor Ext that on input x and s first encodes x using the error-correcting code, and then applies the Nisan-Wigderson generator using s as a seed and the encoding of x as the truth table
of the predicate. Thanks to the property of the error-correcting code, the counting argument of
the previous section works well again.
3.2 Preliminaries
In this section we state some known technical results that will be used in the analysis of our
extractor. For an integer n we denote by [n] the set {1, 2, ..., n}. We denote by u_1 · u_2 the string obtained by concatenating the strings u_1 and u_2.
3.2.1 Error-correcting codes
Error-correcting codes are one of the basic primitives in our construction. We need the existence
of codes such that few codewords belong to any given Hamming ball of su-ciently small radius.
Lemma 6 (Error-Correcting Codes) For every n and δ there is a polynomial-time computable encoding EC : {0,1}^n → {0,1}^n̄, with n̄ = poly(n, 1/δ), such that every ball of Hamming radius (1/2 − δ)n̄ in {0,1}^n̄ contains at most 1/δ^2 codewords. Furthermore, n̄ can be assumed to be a power of 2.
Stronger parameters are achievable; in particular, the length of the encoding can be made shorter. However, the stronger bounds would not improve our constructions. Standard codes are good
enough to prove Lemma 6. We sketch a proof of the lemma in the Appendix.
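To see the property of Lemma 6 in action on a toy example, the following script counts, by brute force, how many codewords of a code lie inside a Hamming ball. It uses the Hadamard code purely as an illustration; it is not the code constructed in the appendix, and the parameters are arbitrary.

```python
from itertools import product

def hadamard_encode(x_bits):
    """Hadamard codeword of a k-bit message: all inner products <x, a> mod 2."""
    k = len(x_bits)
    return [sum(xi & ai for xi, ai in zip(x_bits, a)) % 2
            for a in product((0, 1), repeat=k)]

def codewords_in_ball(center, radius, k):
    """Brute-force count of Hadamard codewords within Hamming distance `radius` of `center`."""
    count = 0
    for msg in product((0, 1), repeat=k):
        c = hadamard_encode(msg)
        if sum(ci != bi for ci, bi in zip(c, center)) <= radius:
            count += 1
    return count

if __name__ == "__main__":
    k = 4                          # message length; block length is n_bar = 2**k = 16
    n_bar = 2 ** k
    delta = 0.25                   # ball radius is (1/2 - delta) * n_bar = 4
    radius = int((0.5 - delta) * n_bar)
    received = [0] * n_bar         # an arbitrary "received word"
    # prints a small count, consistent with the "few codewords per ball" property
    print(codewords_in_ball(received, radius, k))
```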
3.2.2 Designs and the Nisan-Wigderson Generator
In this section we cite some results from [NW94] in a form that is convenient for our application.
Since the statements of the results in this section are slightly different from the corresponding
statements in [NW94], we also present full proofs.
Definition 7 (Design) For positive integers m, l, a ≤ l, and t ≥ l, an (m, t, l, a) design is a family of sets S_1, ..., S_m ⊆ [t] such that |S_i| = l for every i, and |S_i ∩ S_j| ≤ a for every i ≠ j.
Lemma 8 (Construction of Designs [NW94]) For all positive integers m, l, and a ≤ l there exists an (m, t, l, a) design with t ≤ e^{(l/a)+1} · l^2 / a. Such a design can be computed deterministically in O(2^t · m) time.
The deterministic construction in Lemma 8 was presented in [NW94] for the special case a = log m. The case of general a can be proved by using the same proof, but a little care is required while doing a certain probabilistic argument. The proof of Lemma 8 is given in Appendix A.2.
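The design conditions of Definition 7 are easy to check mechanically. The sketch below verifies them and finds a small design by randomized search; it is only an illustration of the definition, not the deterministic construction of Lemma 8, and all parameter values are arbitrary.

```python
import random
from itertools import combinations

def is_design(sets, t, l, a):
    """Check the (m, t, l, a)-design conditions: every S_i is an l-subset of [t],
    and any two distinct sets intersect in at most a elements."""
    if any(len(S) != l or not S <= set(range(t)) for S in sets):
        return False
    return all(len(S1 & S2) <= a for S1, S2 in combinations(sets, 2))

def random_design(m, t, l, a, attempts=10000, seed=0):
    """Greedily add random l-subsets of [t] whose pairwise intersections stay small."""
    rng = random.Random(seed)
    sets = []
    for _ in range(attempts):
        if len(sets) == m:
            break
        S = set(rng.sample(range(t), l))
        if all(len(S & T_) <= a for T_ in sets):
            sets.append(S)
    return sets if len(sets) == m else None

if __name__ == "__main__":
    m, t, l, a = 8, 40, 6, 2               # small, arbitrary parameters
    design = random_design(m, t, l, a)
    print(design is not None and is_design(design, t, l, a))
```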
The following notation will be useful in the next definition: if S ⊆ [t], with S = {s_1, ..., s_l} (where s_1 < s_2 < ... < s_l), then we denote by y|_S ∈ {0,1}^l the string y_{s_1} y_{s_2} · · · y_{s_l}.
Definition 9 (NW Generator [NW94]) For a function f : {0,1}^l → {0,1} and an (m, t, l, a)-design S = (S_1, ..., S_m), define NW_{f,S} : {0,1}^t → {0,1}^m as NW_{f,S}(y) = f(y|_{S_1}) f(y|_{S_2}) · · · f(y|_{S_m}).
Intuitively, if f is a hard-on-average function, then f() evaluated on a random point x is an "unpredictable bit" that, to a bounded adversary, "looks like" a random bit. The basic idea in the Nisan-Wigderson generator is to evaluate f() at several points, thus generating several unpredictable output bits. In order to have a short seed, evaluation points are not chosen independently, but rather in such a way that any two points have "low correlation." This is where the definition of design is useful: the random seed y for the generator associates a truly random bit to any element of the universe [t] of the design. Then the i-th evaluation point is chosen as the subset of the bits of y corresponding to set S_i in the design. Then the "correlation" between the i-th and the j-th evaluation point is given by the at most a bits that are in S_i ∩ S_j. It remains to be seen that the evaluation of f at several points having low correlation looks like a sequence of random bits to a bounded adversary. This is proved in [NW94, Lemma 2.4], and we will state the result in a slightly different form in Lemma 10 below.
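Definition 9 translates directly into code. The following sketch evaluates NW_{f,S}(y) given the truth table of f and a design S; the data layout and the toy parameters are mine, chosen only for illustration.

```python
def restrict(y_bits, S):
    """y|_S : the bits of y indexed by S, taken in increasing order of index."""
    return tuple(y_bits[j] for j in sorted(S))

def nw_generator(f_truth_table, design, y_bits):
    """NW_{f,S}(y) = f(y|_{S_1}) f(y|_{S_2}) ... f(y|_{S_m}).
    f_truth_table maps an l-bit tuple to a bit; design is a list of index sets S_i of [t]."""
    return [f_truth_table[restrict(y_bits, S)] for S in design]

if __name__ == "__main__":
    from itertools import product
    l = 3
    f = {x: sum(x) % 2 for x in product((0, 1), repeat=l)}   # an arbitrary predicate (parity)
    design = [{0, 1, 2}, {2, 3, 4}, {4, 5, 0}]               # a toy (3, 6, 3, 1)-design
    y = (1, 0, 1, 1, 0, 0)                                   # the seed, of length t = 6
    print(nw_generator(f, design, y))                        # the three output bits
```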
For two functions f, g : {0,1}^l → {0,1} and a number 0 ≤ ρ ≤ 1 we say that g approximates f within ρ if f and g agree on at least a fraction ρ of their domain, i.e. Pr_x[f(x) = g(x)] ≥ ρ.
Lemma 10 (Analysis of the NW Generator [NW94]) Let S be an (m, t, l, a)-design, and T : {0,1}^m → {0,1}. Then there exists a family G_T (depending on T and S) of at most 2^{m2^a + log m + 2} functions such that for every function f : {0,1}^l → {0,1} satisfying |Pr[T(NW_{f,S}(U_t)) = 1] − Pr[T(U_m) = 1]| > ε, there exists a function g ∈ G_T such that g() approximates f() within 1/2 + ε/m.
Proof: [Of Lemma 10] We follow the proof of Lemma 2.4 in [NW94]. The main idea is that if T distinguishes NW_{f,S}() from the uniform distribution, then we can find a bit of the output where this distinction is noticeable.
In order to find the "right bit", we will use the so-called hybrid argument of Goldwasser and Micali [GM84]. We define m + 1 distributions D_0, ..., D_m as follows: sample a string v = NW_{f,S}(y) for a random y, and then sample a string r ∈ {0,1}^m according to the uniform distribution; D_i is the concatenation of the first i bits of v with the last m − i bits of r. By definition, D_m is distributed as NW_{f,S}(y) and D_0 is the uniform distribution over {0,1}^m.
Using the hypothesis of the Lemma, we know that there is a bit b_0 ∈ {0,1} such that
Pr[T(D_m) = b_0] − Pr[T(D_0) = b_0] > ε.
We then observe that
Pr[T(D_m) = b_0] − Pr[T(D_0) = b_0] = Σ_{i=1}^{m} ( Pr[T(D_i) = b_0] − Pr[T(D_{i−1}) = b_0] ).
In particular, there exists an index i such that
Pr[T(D_i) = b_0] − Pr[T(D_{i−1}) = b_0] > ε/m.   (5)
Now, recall that
D_{i−1} = f(y|_{S_1}) · · · f(y|_{S_{i−1}}) r_i r_{i+1} · · · r_m   and   D_i = f(y|_{S_1}) · · · f(y|_{S_{i−1}}) f(y|_{S_i}) r_{i+1} · · · r_m.
We can assume without loss of generality (up to a renaming of the indices) that S_i = {1, ..., l}. Then we can see y ∈ {0,1}^t as a pair (x, z) where x ∈ {0,1}^l consists of the bits of y indexed by S_i and z ∈ {0,1}^{t−l} consists of the remaining bits. For every j ≠ i, let us define h_j(x, z) = f(y|_{S_j}); note that h_j(x, z) depends on |S_i ∩ S_j| ≤ a bits of x and on l − |S_i ∩ S_j| ≥ l − a bits of z.
Using this notation (and observing that for a 0/1 random variable the probability that the random variable is 1 is equal to the expectation of the random variable) we can rewrite Expression (5) as an inequality between expectations over x, z and the bits r_i, ..., r_m. We can use an averaging argument to claim that we can fix r_{i+1}, ..., r_m to some values c_{i+1}, ..., c_m, as well as fix z to some value w, and still have a gap of at least ε/m; call the resulting inequality Expression (6).
Let us now consider a new function F : {0,1}^{l+1} → {0,1}^m defined as F(x, b) = h_1(x, w) · · · h_{i−1}(x, w) · b · c_{i+1} · · · c_m. Using F, renaming r_i as b, and moving back to probability notation, we can rewrite Expression (6) as
Pr_x[T'(F(x, f(x))) = 1] − Pr_{x,b}[T'(F(x, b)) = 1] > ε/m,
where T' is either T or its complement, depending on b_0. That is, using T'() and F() it is possible to distinguish a pair of the form (x, f(x)) from a uniform string of length l + 1. We will now see that, given F() and T'(), it is possible to describe a function g() that agrees with f() on a fraction 1/2 + ε/m of the domain.
Consider the following randomized procedure: pick a random b ∈ {0,1} and compute T'(F(x, b)); if T'(F(x, b)) = 1 then output b, and otherwise output 1 − b. Let us call g_b(x) the function performing the above computation, and let us estimate the agreement between f() and g_b(), averaged over the choice of b. Expanding Pr_{b,x}[g_b(x) = f(x)] according to whether b = f(x) or b ≠ f(x), and using Expression (6), one gets
Pr_{b,x}[g_b(x) = f(x)] ≥ 1/2 + ε/m.
Over the choices of x and b, the probability that we guess f(x) is thus at least 1/2 + ε/m, hence there is a bit b_1 ∈ {0,1} such that g_{b_1} approximates f to within 1/2 + ε/m. Once T and F are given, we can specify g_{b_1} using two bits of information (the bit b_1, plus the bit b_0 that determines T').
We now observe that F can be totally described by using no more than log m + (i − 1)2^a + (m − i) < log m + m2^a bits of information. Indeed, we use log m bits to specify i; then, for every j < i and for every x, we have to specify h_j(x, w). Since h_j(x, w) only depends on the bits of x indexed by S_i ∩ S_j, we just have to specify 2^a values of f for each such j. For j > i we have to specify c_j.
Overall, we have a function g() that approximates f to within 1/2 + ε/m and that, given T, can be described using 2 + log m + m2^a bits. We define G_T as containing all functions g() that can be defined in this way, over all possible descriptions. 2
3.3 Construction
The construction has parameters n, k, m and ε. It can be verified that the constraints on the parameters imply the inequalities used below. Let EC : {0,1}^n → {0,1}^n̄ be the encoding of Lemma 6, with δ = ε/m, and let l = log n̄ (recall that n̄ can be taken to be a power of 2). For an element u ∈ {0,1}^n, define û = EC(u), and view û as the truth table of a predicate ⟨û⟩ : {0,1}^l → {0,1}. Let S = (S_1, ..., S_m) be an (m, t, l, a)-design as in Lemma 8, with
a = log(k/2m)   and   t ≤ e^{(l/log(k/2m))+1} · l^2 / log(k/2m).
By our choice of parameters, and by choosing the constants appropriately, we have that m > t.
Then we define
Ext(u, y) = NW_{⟨û⟩,S}(y) = û(y|_{S_1}) û(y|_{S_2}) · · · û(y|_{S_m}).   (7)
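Equation (7) is simply "encode, then run the Nisan-Wigderson generator with the encoding as truth table". The sketch below spells this out; the Hadamard encoding and the toy design stand in for the actual choices of this section and are used only to make the composition concrete.

```python
from itertools import product

def nw(truth_table, design, y_bits):
    """NW_{f,S}(y) with f given as a truth table over l-bit tuples (as in Definition 9)."""
    return [truth_table[tuple(y_bits[j] for j in sorted(S))] for S in design]

def extractor(u_bits, y_bits, encode, design):
    """Ext(u, y) of Equation (7): encode u, view the encoding u_hat as the truth table of a
    predicate on l = log(len(u_hat)) bits, and run the NW generator on seed y."""
    u_hat = encode(u_bits)
    l = len(u_hat).bit_length() - 1              # assumes len(u_hat) is a power of 2
    truth_table = {x: u_hat[int("".join(map(str, x)), 2)]
                   for x in product((0, 1), repeat=l)}
    return nw(truth_table, design, y_bits)

if __name__ == "__main__":
    def hadamard(u):                             # stand-in error-correcting code (illustration only)
        return [sum(ui & ai for ui, ai in zip(u, a)) % 2
                for a in product((0, 1), repeat=len(u))]

    u = (1, 0, 1)                                # 3-bit source sample; u_hat has length 8, so l = 3
    design = [{0, 1, 2}, {2, 3, 4}, {4, 5, 0}]   # a toy (3, 6, 3, 1)-design
    y = (0, 1, 1, 0, 1, 0)                       # the seed
    print(extractor(u, y, hadamard, design))
```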
3.4 Analysis
Lemma 11 Let Ext be as in Equation (7). For every fixed predicate T : {0,1}^m → {0,1}, there are at most 2^{2 + 2log(m/ε) + t + m2^a} strings u ∈ {0,1}^n such that
|Pr_y[T(Ext(u, y)) = 1] − Pr[T(U_m) = 1]| > ε.   (8)
Proof: It follows from the definition of Ext and from Lemma 10 that if u is such that (8) holds, then there exists a function g : {0,1}^l → {0,1} in G_T and a bit b ∈ {0,1} such that the function g_b (that is, g or its complement, according to b) approximates û() within 1/2 + ε/m.
There are at most 2^{2 + t − l + log m + m2^a} functions g ∈ G_T, and we have l > log n > log m. Furthermore, each such function can be within relative distance 1/2 − ε/m from at most (m/ε)^2 functions û() coming from the error-correcting code of Lemma 6.
We conclude that 2^{2 + 2log(m/ε) + t + m2^a} is an upper bound on the number of strings u for which Expression (8) can occur. 2
Theorem 12 Ext as defined in Equation (7) is a (k, 2ε)-extractor.
Proof: We first note that, by our choice of parameters, we have m2^a = k/2; also, k/2 > 4m. Fix any T : {0,1}^m → {0,1}. From Lemma 11 we have that the probability that, sampling a u from a source X of min-entropy k, we have |Pr_y[T(Ext(u, y)) = 1] − Pr[T(U_m) = 1]| > ε is at most 2^{2 + t + m2^a + log(m^2/ε^2) − k}, which is at most ε by our choice of parameters. It follows that |Pr[T(Ext(X, U_t)) = 1] − Pr[T(U_m) = 1]| ≤ 2ε, and since T is arbitrary, Ext(X, U_t) is 2ε-close to the uniform distribution. 2
The main result of this paper now follows from Theorem 12 and by the choice of parameters in the previous section.
Acknowledgments
Oded Goldreich contributed an important idea in a critical phase of this research; he also contributed
very valuable suggestions on how to present the results of this paper. I thank Oded Goldreich, Shafi Goldwasser, Madhu Sudan, Salil Vadhan, Amnon Ta-Shma, and Avi Wigderson for several helpful
conversations. This paper would have not been possible without the help of Adam Klivans, Danny
Lewin, Salil Vadhan, Yevgeny Dodis, Venkatesan Guruswami, and Amit Sahai in assimilating the
ideas of [NW94, BFNW93, Imp95, IW97]. Thanks also to Madhu Sudan for hosting our reading
group in the Spring'98 Complexity Seminars at MIT. This work was mostly done while the author
was at MIT, partially supported by a grant of the Italian CNR. Part of this work was also done
while the author was at DIMACS, supported by a DIMACS post-doctoral fellowship.
--R
A new general derandomization method.
Weak random sources
BPP has subexponential time simulations unless EXPTIME has publishable proofs.
Free bits
How to generate cryptographically strong sequences of pseudo-random bits
Unbiased bits from sources of weak randomness and probabilistic communication complexity.
Dispersers, deterministic ampli
Probabilistic encryption.
On Yao's XOR lemma.
Modern Cryptography
Tiny families of functions with random properties: A quality-size trade-o for hashing
Another proof that BPP
Randomness versus time: De-randomization under a uniform assumption
Graph non-isomorphism has subexponential size proofs unless the polynomial hierarchy collapses
Introduction to Parallel Algorithms and Architectures.
The Theory of Error-Correcting Codes
Pseudorandom bits for constant depth circuits.
Extracting randomness: How and why.
Extracting randomness
Hardness vs randomness.
More deterministic simulation in Logspace.
Extracting all the randomness and reducing the error in Trevisan's extractors.
Tight bounds for depth-two superconcentrators
Explicit OR-dispersers with polylogarithmic degree
Pseudorandom generators without the XOR lemma.
Generating quasi-random sequences from slightly random sources
Computing with very weak random sources.
Construction of extractors using pseudo-random generators
On extracting randomness from weak random sources.
Almost optimal dispersers.
Various techniques used in connection with random digits.
Random polynomial time is equal to slightly random polynomial time.
Expanders that beat the eigenvalue bound: Explicit construction and applications.
Theory and applications of trapdoor functions.
General weak random sources.
On unapproximable versions of NP-complete problems
--TR
How to generate cryptographically strong sequences of pseudo-random bits
Generating quasi-random sequences from semi-random sources
Unbiased bits from sources of weak randomness and probabilistic communication complexity
Introduction to parallel algorithms and architectures
More deterministic simulation in logspace
Expanders that beat the eigenvalue bound
Hardness vs. randomness
BPP has subexponential time simulations unless EXPTIME has publishable proofs
On extracting randomness from weak random sources (extended abstract)
Randomness-optimal sampling, extractors, and constructive leader election
On Unapproximable Versions of <i>NP</i>-Complete Problems
exponential circuits
Explicit OR-dispersers with polylogarithmic degree
A new general derandomization method
Almost optimal dispersers
Tiny families of functions with random properties
Free Bits, PCPs, and Nonapproximability---Towards Tight Results
Extracting all the randomness and reducing the error in Trevisan''s extractors
Pseudorandom generators without the XOR Lemma (extended abstract)
Graph nonisomorphism has subexponential size proofs unless the polynomial-time hierarchy collapses
Extracting randomness
Weak Random Sources, Hitting Sets, and BPP Simulations
On Resource-Bounded Measure and Pseudorandomness
Extracting Randomness
Hard-core distributions for somewhat hard problems
Tight bounds for depth-two superconcentrators
Randomness vs. Time
--CTR
Amnon Ta-Shma , David Zuckerman , Shmuel Safra, Extractors from Reed-Muller codes, Journal of Computer and System Sciences, v.72 n.5, p.786-812, August 2006
Venkatesan Guruswami, Better extractors for better codes?, Proceedings of the thirty-sixth annual ACM symposium on Theory of computing, June 13-16, 2004, Chicago, IL, USA
Harry Buhrman , Troy Lee , Dieter Van Melkebeek, Language compression and pseudorandom generators, Computational Complexity, v.14 n.3, p.228-255, January 2005
Russell Impagliazzo, Can every randomized algorithm be derandomized?, Proceedings of the thirty-eighth annual ACM symposium on Theory of computing, May 21-23, 2006, Seattle, WA, USA
Anup Rao, Extractors for a constant number of polynomially small min-entropy independent sources, Proceedings of the thirty-eighth annual ACM symposium on Theory of computing, May 21-23, 2006, Seattle, WA, USA
Troy Lee , Andrei Romashchenko, Resource bounded symmetry of information revisited, Theoretical Computer Science, v.345 n.2-3, p.386-405, 22 November 2005
Tzvika Hartman , Ran Raz, On the distribution of the number of roots of polynomials and explicit weak designs, Random Structures & Algorithms, v.23 n.3, p.235-263, October
Christopher Umans, Pseudo-random generators for all hardnesses, Journal of Computer and System Sciences, v.67 n.2, p.419-440, September
Chi-Jen Lu , Omer Reingold , Salil Vadhan , Avi Wigderson, Extractors: optimal up to constant factors, Proceedings of the thirty-fifth annual ACM symposium on Theory of computing, June 09-11, 2003, San Diego, CA, USA
Paolo Ferragina , Raffaele Giancarlo , Giovanni Manzini , Marinella Sciortino, Boosting textual compression in optimal linear time, Journal of the ACM (JACM), v.52 n.4, p.688-713, July 2005
Michael Capalbo , Omer Reingold , Salil Vadhan , Avi Wigderson, Randomness conductors and constant-degree lossless expanders, Proceedings of the thirty-fourth annual ACM symposium on Theory of computing, May 19-21, 2002, Montreal, Quebec, Canada
Peter Bro Miltersen , N. V. Vinodchandran, Derandomizing Arthur-Merlin games using hitting sets, Computational Complexity, v.14 n.3, p.256-279, January 2005
Ronen Shaltiel , Christopher Umans, Simple extractors for all min-entropies and a new pseudorandom generator, Journal of the ACM (JACM), v.52 n.2, p.172-216, March 2005
Luca Trevisan , Salil Vadhan, Pseudorandomness and Average-Case Complexity Via Uniform Reductions, Computational Complexity, v.16 n.4, p.331-364, December 2007
Jin-Yi Cai , Hong Zhu, Progress in computational complexity theory, Journal of Computer Science and Technology, v.20 n.6, p.735-750, November 2005 | error-correcting codes;extractors;pseudorandomness |
502105 | Improved implementations of binary universal operations. | We present an algorithm for implementing binary operations (of any type) from unary load-linked (LL) and store-conditional (SC) operations. The performance of the algorithm is evaluated according to its sensitivity, measuring the distance between operations in the graph induced by conflicts, which guarantees that they do not influence the step complexity of each other. The sensitivity of our implementation is O(log* n), where n is the number of processors in the system. That is, operations that are (log* n) apart in the graph induced by conflicts do not delay each other. Constant sensitivity is achieved for operations used to implement heaps and array-based linked lists.We also prove that there is a problem which can be solved in O(1) steps using binary LL/SC operations, but requires O(log log* n) operations if only unary LL/SC operations are used. This indicates a non-constant gap between unary and binary, LL/SC operations. | Introduction
An algorithm is non-blocking if a processor is delayed only when some other processor is making
progress. Non-blocking algorithms avoid performance bottlenecks due to processors' failures
or delay. In asynchronous shared memory systems, non-blocking algorithms require the use of
universal operations such as load-linked (LL) and store-conditional (SC) [14].
For ease of programming, it is more convenient to write non-blocking algorithms using
universal operations that can access several memory words atomically [4, 13, 18, 22]. However,
most existing commercial architectures provide only unary operations which access a single
memory word [23, 26]. Multi-word operations can be implemented using unary universal
operations, e.g., [14, 15], but these implementations are not very efficient.
The efficiency of an implementation can be evaluated in isolation, i.e., when there is no
interference from other operations contending for the same memory words [19]. However, this
provides no indication of the implementation's behavior in the presence of contention, i.e.,
when other operations compete for access to the same memory words. Clearly, if we have a
"hot spot", i.e., a memory word for which contention is high, then in any implementation,
some operations trying to access this word will be delayed for a long time. One can even argue
that in this case, operations will be delayed even when they are supported in hardware [5, 24].
However, such a hot spot should not delay "far away" operations.
This paper proposes to evaluate implementations by their sensitivity , measuring to what
distance a hot spot influences the performance of other operations. Roughly stated, the sensitivity
is the longest distance from one operation to another operation that influences its
performance, e.g., changes the number of steps needed in order to complete the operation.
We concentrate on implementations of binary operations from unary LL/SC.
Binary operations induce a conflict graph, in which nodes represent memory words and
there is an edge between two memory words if and only if they belong to the data set of an
operation, i.e., they are the pair of memory words accessed by the operation. A hot spot
corresponds to a node with high degree. It is required that two operations whose distance in
the conflict graph is larger than the sensitivity do not interfere; that is, their step complexity
is the same whether they execute in parallel or not.
We present an algorithm for implementing arbitrary binary operations from unary LL and
SC operations with sensitivity O(log* n).
Our algorithm uses LL/SC for convenience, and since they are supported by several contemporary
architectures, e.g., [23, 26]. The algorithm can be extended to rely on other unary
universal operations; in particular, the O(1) implementation of LL/SC from compare&swap [3]
can be employed.
The core of the algorithm implements the binary operation in a manner similar to known
algorithms [3, 7, 19, 25, 27]: A processor locks the words in the data set of the binary operation,
applies the operation, and then unlocks the data set. Operations help each other to complete,
thus ensuring that the algorithm does not block. The new feature of our algorithm is that a
processor may lock its data set in two directions-either starting with the low-address word or
starting with the high-address word.
The sensitivity of the core algorithm depends on the orientation of the conflict graph
according to locking directions. For two common data structures-an array-based linked list
and a heap-we can a priori determine locking directions which induce constant sensitivity. In
general, however, processors have to dynamically decide on locking directions. This is achieved
by encapsulating the core algorithm with a decision algorithm, coordinating the order in which
processors lock their data sets (low-address word first or high-address word first).
We first present a decision algorithm based on the deterministic coin tossing technique of
Cole and Vishkin [9] for the simplified case where the conflict graph is a path. Then, we show
a synchronization method that breaks a conflict graph with arbitrary topology into paths, to
which the adapted deterministic coin tossing technique can be applied.
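To give a feel for the symmetry-breaking tool just mentioned, the sketch below implements one Cole-Vishkin color-reduction step on a path: every node looks at its successor, finds the lowest bit position where their colors differ, and adopts that position together with its own bit there as its new color. This is an illustration of the generic technique, not of the decision algorithm developed later in the paper.

```python
def cole_vishkin_step(colors):
    """One deterministic coin-tossing (color reduction) step on a directed path.
    Adjacent nodes keep different colors, and b-bit colors shrink to about log2(b)+1 bits."""
    new_colors = []
    for i, c in enumerate(colors):
        succ = colors[i + 1] if i + 1 < len(colors) else c ^ 1   # last node: any value differing from c
        diff = c ^ succ
        k = (diff & -diff).bit_length() - 1                      # index of the lowest differing bit
        new_colors.append(2 * k + ((c >> k) & 1))                # new color = (position, own bit there)
    return new_colors

if __name__ == "__main__":
    colors = [5, 12, 7, 3, 9, 20, 6]     # initial colors, e.g. processor ids (adjacent ones differ)
    for _ in range(3):                   # O(log* n) iterations reach a constant-size palette
        colors = cole_vishkin_step(colors)
        print(colors)
```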
Combined with the previous algorithm, this is an implementation of arbitrary binary operations
from unary LL/SC with O(log* n) sensitivity.
We also show that there is a problem which can be solved in O(1) steps using binary LL/SC
operations, but requires Ω(log log* n) steps if only unary operations (of any type) are used.
The proof adapts a lower bound of Linial [20], which shows that in a message passing model
a maximal independent set in an n-ring cannot be found in less than Ω(log* n) rounds. This
lower bound indicates that any implementation of binary LL/SC from unary operations will
have to incur a non-constant overhead.
Following the original publication of this work [6, 10], Afek, Merritt, Taubenfeld and
Touitou [1] presented an algorithm for implementing any k-word object from unary oper-
ations; the algorithm is wait-free, guaranteeing that every operation eventually terminates.
Their algorithm uses algorithmic ideas from our implementation, and in addition, employs it
as a base case in a recursive construction.
Herlihy and Moss [16] introduce transactional memory, a hardware-based scheme for implementing
arbitrary multi-word operations.
Three schemes [3, 19, 25] present software implementations of transactional memory from
single-word atomic operations: Israeli and Rappoport [19] and Shavit and Touitou [25] present
non-blocking implementations of arbitrary multi-word operations using unary LL/SC, while
Anderson and Moir [3] give a wait-free implementation of k-compare&swap and k-SC. Shavit
and Touitou [25] present simulation results indicating that their algorithm performs well in
practice; Israeli and Rappoport [19] analyze the step complexity of an operation; Anderson and
Moir [3] measure the step complexity of k-compare&swap and k-SC operations. Analysis of all
three implementations shows that they are very sensitive to contention by distant operations.
For example, two operations executing on two ends of a linked list can increase each other's
step complexity.
Turek, Shasha and Prakash [27] show a general method for transforming a concurrent implementation
of a data structure into a non-blocking one; their method employs compare&swap.
A process being blocked due to some lock held by another process helps the blocking process
until it releases its lock; help continues recursively if the blocking process is also blocked by
another process. Barnes [7] presents a general method for constructing non-blocking implementations
of concurrent data structures; in this method, only words needed by the operation
are cached into a private memory. Thus, operations can access the data structure concurrently
if they do not contend. These methods are similar to software transactional memory [3, 19, 25],
and their sensitivity is high.
Our algorithm uses helping, as in [3, 7, 19, 25, 27], but decreases the sensitivity and increases
parallelism by minimizing the distance to which an operation helps.
Herlihy [15] introduces a general method for converting a sequential data structure into
a shared wait-free one. Both Herlihy's method and its extension, suggested by Alemany and
Felten [2], do not allow "parallelism" between concurrent operations and are inherently sequential
Anderson and Moir [3] present a universal construction that allows operations to access
multiple objects atomically. Their implementation uses multi-word operations and it can be
employed to efficiently implement certain large shared objects, where it saves copying and
allows parallelism.
Non-blocking implementations of multi-word operations induce solutions to the well-known
resource-allocation problem; these solutions have short waiting chains and small failure locality
[8]. Additional discussion of the relations between the two problems appears in [1].
Preliminaries
2.1 The Asynchronous Shared-Memory Model
In the shared-memory model, n processors, p_1, ..., p_n, communicate by applying memory access operations (in short, operations) to a set of memory words, m_1, m_2, ....
Each processor p i is modeled as a (possibly infinite) state machine with state set Q i ,
containing a distinguished initial state, q 0;i .
A configuration is a vector C = (q_1, ..., q_n, v_1, v_2, ...), where q_i is a local state of processor p_i and v_j is the value of memory word m_j. In the initial configuration, all processors are in
their (local) initial states and memory words contain a default value.
Each operation has a type, which defines the number of input and output arguments, their
allowable values, and the functional dependency between the inputs, the shared-memory state
and the processor state, on one hand, and the output arguments and the new states of the
processor and the memory, on the other hand. Each operation is an instance of some operation
type; the data set of an operation is the set of memory words it accesses.
For example, unary LL and SC are defined as follows:
LL(m)
    return the value of m
SC(m, new)
    if no write or successful SC to m since your previous LL(m) then
        m := new
        return success
    else return failure
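The semantics above can be simulated in a few lines; the following single-threaded sketch models a word with per-processor links and is meant only as a behavioural model for experimentation, not as a concurrent implementation.

```python
class LLSCWord:
    """A single memory word supporting unary LL/SC, per the definition above.
    A processor's link stays valid until some write or successful SC changes the word."""
    def __init__(self, value=None):
        self.value = value
        self.version = 0                  # bumped on every write / successful SC
        self.links = {}                   # processor id -> version seen at its last LL

    def ll(self, pid):
        self.links[pid] = self.version
        return self.value

    def sc(self, pid, new):
        if self.links.get(pid) == self.version:   # nothing changed since pid's LL
            self.value = new
            self.version += 1
            return True                            # success
        return False                               # failure

if __name__ == "__main__":
    w = LLSCWord(0)
    v = w.ll(pid=1)                 # processor 1 reads 0 and links the word
    w.ll(pid=2)                     # processor 2 links as well
    print(w.sc(pid=2, new=7))       # True: processor 2's SC succeeds
    print(w.sc(pid=1, new=v + 1))   # False: the word changed since processor 1's LL
```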
An event is a computation step by a single processor and is denoted by the index of the
processor. In an event, a processor determines the memory operation to perform according to
its local state, and determines its next local state according to the value returned by the operation.
Operations are atomic; that is, each operation seems to occur at a certain point, and no
two operations occur at the same point. Therefore, computations in the system are captured
by sequences of configurations, where each configuration is obtained from the previous one by
an event of a single processor.
In more detail, an execution segment α is a (finite or infinite) sequence C_0, φ_0, C_1, φ_1, C_2, ..., where for every k, C_k is a configuration, φ_k is an event, and the application of φ_k to C_k results in C_{k+1}; that is, if φ_k = i, then C_{k+1} is the result of applying p_i's transition function to p_i's state in C_k, and applying p_i's memory access operation to the memory in C_k.
An execution is an execution segment C_0, φ_0, C_1, ... in which C_0 is the initial
configuration. There are no constraints on the interleavings of events by different processors,
manifesting the assumption that processors are asynchronous and there is no bound on their
relative speeds.
An implementation of a high-level operation type H from low-level operations of type L, is
a procedure using operations of L. Intuitively, processors cannot distinguish between H and
its implementation by L.
Assume processor p_i invokes a procedure implementing an operation op which terminates; let φ_f and φ_l be the first and the last events, respectively, executed by processor p_i in the procedure for op; the interval of op is the execution segment that starts with φ_f and ends with φ_l. If the operation does not terminate, its interval is the infinite execution segment that starts with φ_f.
Two operations overlap if their intervals overlap.
An invocation of an operation may result in different intervals, depending on the context
of its execution. For example, two intervals of the same operation may differ and even return
different values if the first is executed in isolation, while the second overlaps other operations.
Let α be an interval of p_i; the step complexity of α, denoted step(α), is the number of events of p_i in α.
Figure 1: A simple conflict graph.
An execution β is linearizable [17] if there is a total ordering of the implemented operations in β, preserving the order of non-overlapping operations, in which each response satisfies the semantics of H, given the responses of the previous operations in the total order.
An implementation is non-blocking if at any point, some processor with a pending operation
completes within a bounded number of steps.
2.2 Sensitivity
The conflict graph of an execution segment ff represents the dependencies between the data
sets of operations in ff; it is an undirected graph, denoted G ff . A node in G ff represents a
memory word, m i . An edge between two nodes m i and m j corresponds to an operation with
data set fm whose interval overlaps ff. G ff may contain parallel edges, if ff contains
several operations with the same data set. 1
Example 1 Let α be a finite interval of an operation op(m_1, m_2) which is overlapped by several other operations that access m_1 or m_2; Figure 1 shows G_α.
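A conflict graph is just a multigraph whose nodes are memory words and whose edges are the overlapping binary operations. The snippet below builds one with plain dictionaries; the operation names and data sets are hypothetical, chosen in the spirit of Figure 1.

```python
from collections import defaultdict

def conflict_graph(operations):
    """Build the conflict (multi)graph of a set of binary operations.
    `operations` maps an operation name to its data set, a pair of memory words;
    nodes are memory words, and each operation contributes one edge between its two words."""
    edges = []                             # list of (word_a, word_b, op_name)
    incident = defaultdict(list)           # word -> operations touching it (degree = "hot spot" level)
    for op, (a, b) in operations.items():
        edges.append((a, b, op))
        incident[a].append(op)
        incident[b].append(op)
    return edges, incident

if __name__ == "__main__":
    ops = {"op": ("m1", "m2"), "op1": ("m1", "m3"), "op2": ("m2", "m4"), "op3": ("m2", "m5")}
    edges, incident = conflict_graph(ops)
    print(edges)
    print({w: len(v) for w, v in incident.items()})   # m2 has the highest degree here
```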
Next, we consider the conflict graph of an interval of some operation op, and measure the
distance between edges representing op and operations that delay the execution of op. The
maximum distance measured in all intervals of an implementation determines its sensitivity.
In more detail, assume α is the interval of some operation op, and let op_i be an operation in op's connected component in G_α. The distance between op and op_i in α is the number of edges in the shortest path in G_α whose endpoints are the edges representing op_i and op.
Intuitively, the sensitivity measures the minimum distance guaranteeing that two operations
do not "interfere" with each other. Below, we say that an operation op 2 does not interfere with
operation op 1
, if the step complexity of op 1
is the same, whether op 2
is executed in parallel
or not. The definition can be modified so that the sensitivity depends on other complexity
measures, e.g., the set of memory words accessed.
An earlier version of this work [6, 10] defined the contention graph of an execution, in which nodes represent
operations and edges represent the memory words in their data sets; it is the dual of the conflict graph. The
contention graph captures the dependencies between operations somewhat more accurately, but the conflict
graph is easier to work with.
An interval α of some operation op is sensitive to distance ℓ if there is an interval α' of op such that G_{α'} has exactly one more operation (i.e., an edge) than G_α, at distance ℓ from the edge representing op, and step(α') > step(α).
That is, the step complexity of op increases when a single operation is added to α at distance ℓ from op.
The sensitivity of an interval α is the maximum s such that α is sensitive to distance s. This means that the step complexity of op does not increase when a single operation is added to α at distance s + 1. If this maximum does not exist, then the sensitivity is infinite.
The sensitivity of an implementation is the maximum sensitivity over all its intervals.
The sensitivity captures non-interference between operations in the following sense: If the sensitivity of an implementation is s and the distance between two operations in the conflict graph is d > s, then the step complexity (or any other measure we consider) of the operations is the same whether they execute in parallel or not.
2.3 Related Complexity Measures
Disjoint-access parallelism [19] requires an operation to complete in a constant number of steps,
if no other operations contend for the same memory words. Sensitivity strengthens this notion
and allows to evaluate the behavior of an implementation also in the presence of contention.
Afek et al. [1] suggest two other complexity measures:
1. An algorithm has d-local step complexity if the number of steps performed in an interval α is bounded by a function of the number of operations within distance d in G_α.
2. An algorithm has d-local contention if two operations access the same memory word only
if their distance in the conflict graph of their (joint) interval is at most d.
Clearly, sensitivity d+ 1 implies d-local step complexity; however, the converse is not true.
For example, suppose the data set of an operation op contains a hot spot m, accessed by ℓ other operations; suppose that m is also on a path of operations with length ℓ. Sensitivity 1
does not allow operations on the path to influence op's performance, while with 1-local step
complexity, op may still have to help distant operations on the path.
Local contention is orthogonal to sensitivity and local step complexity, and can be evaluated
in addition to either of them. However, if operations access only memory words associated with
operations they help, then d-local contention follows from sensitivity d + 1. (The contention
locality of our algorithm is discussed at the end of Section 4.)
Dwork, Herlihy and Waarts [12] suggest to measure the step complexity of algorithms while
taking contention into account, by assuming that concurrent accesses to the same memory
words are penalized by delaying their response. This is a good complexity measure to evaluate
solutions for specific problems; however, implementations of multi-word operations inevitably
result in concurrent accesses to the same words, creating hot spots. The complexity measure we
propose, sensitivity, is appropriate for evaluating multi-word implementations, as it measures
the influence of hot spots.
3 The Left-Right Algorithm
A general scheme for implementing multi-word operations [3, 7, 19, 25, 27] is that an operation
"locks" the pair of memory words before executing the operation, and "helps" stuck operations
to avoid blocking.
In this section, we introduce a variant of this scheme, the left-right algorithm, in which
operations lock memory words in different orders. We show that the sensitivity and liveness
of the left-right algorithm depend on the orientation of the conflict graph induced by locking
orders for overlapping operations.
At the end of this section, we discuss data structures in which operations have inherent
asymmetry; for such data structures, the left-right algorithm can be directly applied to achieve
constant sensitivity. In the next section, we show how to break symmetry in general situations
so as to govern the locking directions and reduce sensitivity.
3.1 Overview
A known scheme for implementing multi-word operations from unary operations [7, 19, 25, 27]
requires each processor to go through the following stages:
Locking: Lock the memory words.
Execution: Apply the operation to the memory words.
Unlocking: Unlock the memory words.
Each operation is assigned a unique identifier. A memory word is locked by an operation
if the operation's id is written in the word; the word is unlocked if it contains ?. If a memory
word is locked by an operation, then no other operation modifies the memory word until it is
unlocked.
An operation is blocked if it finds that some of the words it needs are locked by another,
blocking operation. In this case, the processor executing the blocked operation helps the
blocking operation.
Figure 2: A scenario with high sensitivity.
In order to be helped, the operation's details are published when it is invoked and its
state is maintained during its execution. Helping implies that more than one processor may
execute an operation. The initiating processor is the processor invoking the operation, while
the executing processors are the processors helping it to complete. Although more than one
processor can perform an operation, only the most advanced processor at each point of the
execution performs the operation, and other executing processors have no effect.
The blocking operation being helped can be either in its own locking stage or already in
the execution stage or the unlocking stage. An operation in its execution or unlocking stages
has already locked its words; once an operation has locked its data set, it will never be blocked
again. Therefore, an operation helping a blocking operation which has passed the locking
stage is guaranteed not to be blocked. In contrast, an operation still in its locking stage can be
blocked by a third operation, which in turn can be blocked by a fourth operation, and so on.
Therefore, help for a blocking operation in its locking stage may have to continue transitively.
A non-blocking implementation guarantees that some operation eventually terminates and
transitive helping stops; yet, the sensitivity can be very high, as illustrated by the next example.
Example 2 Consider a scenario with n overlapping operations, op_1, ..., op_n; the data set of op_i is {m_i, m_{i+1}}. Assume every operation op_i locks its low-address word, m_i. Now op_1 tries to lock its high-address word m_2, while op_2, ..., op_n are delayed. Since m_2 is locked by op_2, op_1 has to help op_2; since the high-address word of op_2 is locked by op_3, op_1 has to recursively help op_3, etc. Thus, the sensitivity of this simple implementation is at least n − 1.
In Example 2, operations are symmetric and try to lock memory words in the same order
(low-address word first). The main idea of the left-right algorithm is that when implemented
operations are binary, asymmetry can be introduced by having the operations lock their memory
words in two directions: Either from left to right-low-address word first, or from right to
left-high-address word first.
Example 3 Consider Example 2 again, and assume odd-numbered operations, op_1, op_3, ..., lock their low-address word first, while even-numbered operations, op_2, op_4, ..., lock their high-address word first. If op_i (for odd i) locks its low-address word, m_i, and finds its high-address word, m_{i+1}, locked by another operation (which must be op_{i+1}), then op_{i+1} has already locked all its data set. Therefore, op_i will only have to help op_{i+1} in its execution or unlocking stage, but no further operations.
An operation decides on its locking direction before the locking stage. After an operation
terminates its unlocking stage, it resets the shared-memory areas that were used in the decision
algorithm. Thus we have two new stages, decision and post-decision, encapsulating the
algorithm. In this section, we focus on the locking and unlocking stages, leaving the decision
and post-decision stages algorithms to the next section.
3.2 The pseudocode
To simplify the code and the description, a separate shared-memory area is used for the locking
and unlocking stages. The size of this area is the same as the size of the data area; memory
word i in the locking area corresponds to memory word i in the data area.
The algorithm uses a shared array, op-details, in which each initiating processor publishes
its operation's description when the operation starts. The initiating processor also sets an
operation id (op-id) to be used in later stages; the op-id is composed from the id of the initiating
processor and a timestamp generated by a timestamp function which returns a unique id each
time it is invoked.
The algorithm follows the general scheme discussed earlier, except that locking is done
either from left to right or from right to left. If an operation discovers that a word is locked by
another operation, it helps the blocking operation by executing all its stages until it unlocks
its words; then, the operation tries again.
The pseudocode appears in Algorithm 1.
The locking and unlocking stages can be executed by several processors on behalf of the
same operation. Therefore, synchronization is needed to ensure that no errors are caused by
concurrent processors executing the same operation.
Algorithm 2 presents the details of the shared procedures used for locking and unlocking.
The user is responsible for avoiding synchronization errors in the execution stage. The same
local variable tmp is used in all procedures, and it holds the last value read from the shared
memory.
The main synchronization mechanism-guaranteeing that only the most advanced executing
processor actually makes progress-is the timestamp part of the operation id. This field
is written by the initiating processor at the beginning of the operation and is cleared at the
beginning of the unlocking stage. An operation is valid if its timestamp is set; it is invalid
if its timestamp is not set. An executing processor finding that the operation is invalid (its
timestamp is not set) skips directly to the unlocking stage. This ensures that once a memory
word is unlocked by an operation, it will not be locked again by this operation. Similar
considerations apply when unlocking the word.
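The validity mechanism can be illustrated with a small sketch (Python, not the paper's code; all names here are illustrative): an operation id carries the timestamp written by the initiating processor, and the operation stays valid only while the ts field of its op-details entry still holds that timestamp.

import itertools

_counter = itertools.count(1)      # stand-in for the timestamp function

op_details = {}                    # pid -> {'ts': ..., 'low': ..., 'high': ...}

def start_operation(pid, low, high):
    ts = next(_counter)            # a unique id each time it is invoked
    op_details[pid] = {'ts': ts, 'low': low, 'high': high}
    return (pid, ts)               # the op-id used by all executing processors

def validate(op_id):
    pid, ts = op_id
    entry = op_details.get(pid)
    return entry is not None and entry['ts'] == ts   # valid iff the timestamp is still set

def invalidate(op_id):
    pid, _ = op_id
    op_details[pid]['ts'] = None   # done at the beginning of the unlocking stage

An executing processor that finds validate(op-id) false skips directly to the unlocking stage, exactly as described above.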
Each memory word is initially ?; when locked by some operation it contains its id.
Procedure lock locks the two memory words in the order they are given as parameters. A
single memory word is locked by cell-lock, which attempts to lock the word, if the operation is
Algorithm 1 The left-right algorithm: Code for processor p i .
record
low-word, high-word // the data set
ts // timestamp
direction // locking direction
shared state op-details[N ]
procedure op(m_1, m_2) // invoked by the initiating processor p_i
atomically write (m_1, m_2) to op-details[i] // publish
op-details[i].ts = timestamp() // set a unique timestamp
op-id = (i, op-details[i].ts)
decide(m_1, m_2, op-id) // decide on locking direction
help(op-id) // help yourself
procedure help(op-id)
locking stage
if ( op-details[op-id.pid].direction == left ) then lock(low, high, op-id) // left to right
else lock(high, low, op-id) // right to left
execution(low, high, op-id) // execution stage
unlock(low, high, op-id) // unlocking stage
post-decision(low, high, op-id) // clean memory
still valid and the word is not locked by another operation. The procedure then re-reads the
word: If the word is locked by the operation, the procedure returns true; if the word is locked
by another operation, the executing processor helps the blocking operation and tries again. If
the operation becomes invalid, the procedure returns false.
To help another operation, the blocked operation invokes help with the blocking opera-
tion's id as argument. The blocked operation becomes an executing processor of the blocking
operation and goes through all its stages.
Procedure unlock invalidates the operation by resetting its ts field; this prevents other
executing processors from locking the words again. Then, it unlocks the two memory words
with cell-unlock, which unlocks a single word only if it is still locked by the operation. The
success of SC is not checked; if it fails, the word has already been unlocked by another executing
processor.
Procedure validate compares the timestamp passed in the operation id with the timestamp
in the ts field of the operation's entry in op-details; the operation is valid if they are equal.
Algorithm 2 The left-right algorithm: Shared procedures for processor p i .
procedure lock(x, y, op-id)
cell-lock(x, op-id)
cell-lock(y, op-id)
procedure cell-lock(addr, op-id)
repeat
tmp = LL(addr)
if ( tmp == ? and validate(op-id) ) then SC(addr, op-id) // try to lock
tmp = read(addr) // check which operation locked the word
if ( tmp == op-id ) then return true
else help(tmp)
if ( not validate(op-id) ) then return false // check if operation ended
procedure validate(op)
if ( op-details[op.pid].ts == op.ts ) then return true
else return false
procedure unlock(x, y, op-id)
op-details[op-id.pid].ts = ? // invalidate the operation
cell-unlock(x, op-id) // unlock the words
cell-unlock(y, op-id)
procedure cell-unlock(addr, op-id)
tmp = LL(addr)
if ( tmp == op-id ) then SC(addr, ?) // unlock only if still locked by this operation
procedure decide(m_1, m_2, op-id) // records only the first decision returned; see Section 4
The function decide makes sure that only the first executing processor to return from
decision (which is left unspecified for now) writes its decision in the operation's details. 2
As mentioned before, it is the responsibility of the user to avoid synchronization errors in
the execution stage. The user can use the timestamp of the operation and may add more state
information, if necessary. For example, if the implemented operation is SC2, we only need to
check before each write that the operation is still valid, in a manner similar to cell-lock.
This function is not needed at this point, when the decision stage is executed only by the initiating processor.
3.3 Proof of Correctness
The proof that the algorithm is linearizable follows as in other general schemes, e.g., [7, 19,
25, 27], once locking and unlocking are shown to behave correctly. Thus, we only show that
the data set of an operation is locked during the execution stage and is unlocked after the
operation terminates.
An executing processor returns from cell-lock either when the memory word is locked by
the operation, or when the operation is invalid. The operation becomes invalid only when
some executing processor reaches the unlocking stage, previously completing the locking stage.
Therefore, when the first executing processor returns from cell-lock, the word is locked by the
operation. This proves the next lemma:
Lemma 3.1 The data set of an operation op is locked when the first executing processor of op
completes the locking stage.
Procedure cell-unlock checks the word first. Thus, a word is unlocked only by an executing
processor of the operation which locked it, implying the following lemma:
Lemma 3.2 The data set of an operation op remains locked until the first executing processor
of op reaches the unlocking stage.
The next lemma proves that if some executing processor unlocks a memory word, then no
other executing processor locks it again.
Lemma 3.3 If m is in the data set of an operation op, then m remains unlocked after the
first executing processor of op reaches the unlocking stage.
Proof: An executing processor of op starts the unlocking stage by initializing the timestamp
field in op-details, thus invalidating op, then it performs LL(m) and SC(m, ?). Another executing
processor of op which tries to lock m afterwards, validates op after performing LL(m).
If it finds that the operation is valid, then the value read from m is not ?, and the executing
processor does not try to lock m again. Thus, the lock field of a locked word is written only
in order to unlock it.
This implies two things: First, the SC in cell-unlock fails only if another executing processor
unlocked m. Second, no executing processor will lock m again.
3.4 Progress and Sensitivity
The liveness properties of the algorithm and its sensitivity depend on the orientation of the
conflict graph according to the locking directions.
Figure 3: Reducing sensitivity with helping directions.
Let α be an interval of some operation op. The helping graph of α, H_α, is a mixed graph
representing helping among operations overlapping α. The nodes of H_α are the memory words
accessed by the operations overlapping α. There is an edge e between nodes m_1 and m_2 if they
constitute the data set of some operation in α; the direction of e is from m_1 to m_2 if the operations
in α with data set {m_1, m_2} lock m_1 first; the direction of e is from m_2 to m_1 if the operations in
α with data set {m_1, m_2} lock m_2 first; otherwise, e is undirected. H_α is a partially oriented
version of G_α, the conflict graph of α.
Lemma 3.4 Let β be an execution of the left-right algorithm in which no operation completes.
Then the helping graph of some interval in β contains either an undirected edge or a directed
cycle.
Proof: There must be a point in β from which no operation completes. Let α be the interval
of some blocked operation, op, in β. By the left-right algorithm, op is blocked if it can not lock
its data set. Since op does not terminate, the blocking operation is itself blocked by another
blocked operation. Since the number of processors is finite and each processor has at most one
pending operation, the number of blocked operations in β is also finite. Therefore, there is a
cycle of blocked operations, op_{i_1}, ..., op_{i_l}, l ≥ 2. By the algorithm, a blocked operation helps
the blocking operation.
If l = 2, then we have two operations blocking and helping each other. This implies they
have the same data set, but lock it in different directions, and there is an undirected edge in
H_α. If l > 2, then we have three or more operations blocking and helping each other, and
there is a directed cycle in H_α.
We next analyze the sensitivity of the algorithm.
Consider two operations, op_1 with data set {m_1, m_2}, and op_2 with data set {m_2, m_3}.
Assume a helping graph in which the edge between m_1 and m_2 is directed to m_2, and the edge
between m_2 and m_3 is also directed to m_2 (see Figure 3). If op_1 helps op_2, then by the code
of cell-lock, m_2 is locked by op_2. However, op_2 locks m_3 before locking m_2, and has passed its
locking stage; thus, op_1 helps op_2 only in its execution or unlocking stages. This argument is
generalized in the next lemma.
Lemma 3.5 Let α be an interval of an operation op_i, and let op_j be an overlapping operation.
If there is no directed path from a memory word of op_i to a memory word of op_j in H_α, then
there exists another interval of op_i, α', with the same overlapping operations except op_j, such
that op_i performs the same sequence of steps in α' as in α.
Proof: Let OP be the set of operations that op_i helps; there are directed paths from op_i to
the operations in OP. This implies that there is no directed path from a memory word of an
operation in OP to a memory word of op_j in H_α, since there is no directed path from a memory
word of op_i to a memory word of op_j in H_α. Thus, the operations in OP do not help op_j, as
argued before the lemma.
We construct an execution α' without op_j. In α', op_i performs the same sequence of steps
as in α, and moreover, all the operations in OP lock their words in the same order as in α.
If α' is not an execution of the left-right algorithm, then let op_k be the first operation in
OP which locks a word in α and can not do it in α'. By the algorithm, this happens only if
another operation holds a lock on this word. However, we do not add new operations in α'
(only omit op_j) and the sequence of locking until op_k's locking in α' is the same as in α. Thus,
if the word is unlocked in α it is also unlocked in α', and op_k succeeds in locking the word.
Therefore, op_i performs the same sequence of steps in α and in α', as needed.
If the length of directed paths in H_α is bounded by d, then operations at distance d + 1 (or
more) do not increase the number of steps taken by the operation, by Lemma 3.5.
Lemma 3.6 Let α be an interval of the left-right algorithm. If the length of a directed path in
H_α is at most d, then the sensitivity of α is at most d + 1.
3.5 Data Structures with Constant Sensitivity
We discuss two data structures in which the memory access patterns of operations are very
structured and therefore, locking directions can be determined a priori to obtain constant
sensitivity.
A linked list: If a linked list is implemented inside an array, then the data set of each
operation is {m_i, m_{i+1}}, for some i. Let the locking direction of the operation be determined
by the parity of its low-address word; that is, the locking direction of an operation accessing
m_i and m_{i+1}, for some i, is "left" if i is even, and "right" if i is odd. Clearly, neighboring
operations in the conflict graph lock in opposite directions. Therefore, the maximum length of
a directed path is one. By Lemma 3.4, the implementation is non-blocking, and by Lemma 3.6,
its sensitivity is two.
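As a concrete illustration, the parity rule can be written as a few lines of Python (a sketch, not part of the implementation; mapping even to "left" and odd to "right" is an arbitrary but fixed choice):

def locking_order(i):
    """Order in which an operation on words (m_i, m_{i+1}) locks its data set."""
    low, high = i, i + 1
    return (low, high) if i % 2 == 0 else (high, low)   # even i: "left", odd i: "right"

# The operation on (4, 5) locks 4 first, the one on (5, 6) locks 6 first, so
# neighboring operations lock toward each other and directed paths have length one.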
Figure 4: Binary operations on a heap: v_g is at even depth.
A heap: Israeli and Rappoport [18] present an implementation of a heap supporting a bubble
up and bubble down using unary LL and binary SC2 operations. In this implementation, the
data set of a binary operation is always a parent node and one of its children.
In order to implement the binary operations, we use the left-right algorithm, and let the
locking direction of an operation be the parity of the depth of the higher node it has to lock.
Clearly, operations with the same data set have the same locking direction.
To see that the length of a directed path is at most two, let v_g, v_f, v_a and v_b be four nodes
in a heap where v_g is the parent node of v_f, and v_f is the parent node of v_a and of v_b (see
Figure 4). Two kinds of paths can be formed by contending operations. In the first kind,
the depths are monotone, e.g., the path v_a, v_f, v_g. In this case, neighboring
operations lock in opposite directions, and hence no directed path from v_a to v_g or from v_b to
v_g can be formed. In the second kind, the depths are not monotone, e.g., the path v_a, v_f, v_b. In
this case, neighboring operations lock in the same direction (determined by the depth of v_f),
and no directed path is formed between v_a and v_b.
Therefore, the longest directed path is of length one. By Lemma 3.4, the algorithm is
non-blocking; by Lemma 3.6, its sensitivity is two.
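A similar sketch for the heap (Python; it assumes the usual array layout with the root at index 1, and the even/odd-to-direction mapping is again an arbitrary fixed choice):

import math

def depth(node):
    return int(math.log2(node))        # root (index 1) has depth 0

def locking_order(parent, child):
    """Data set of a binary heap operation: a parent and one of its children."""
    # the direction is determined by the parity of the depth of the higher node (the parent)
    return (parent, child) if depth(parent) % 2 == 0 else (child, parent)

# Operations sharing the parent v_f lock in the same direction, while operations on
# consecutive levels lock in opposite directions, so directed paths stay short.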
4 The Decision Algorithm
When access patterns are not known in advance, processors have to dynamically decide on
locking directions. In this section, we present an algorithm for choosing locking directions after
gathering some information about the memory access patterns, so as to minimize sensitivity.
For simplicity, a separate shared-memory area is used for the decision stage. The size of
this area is the same as the size of the locking area (or the data area); memory word i in the
decision area corresponds to word i in the locking or data areas.
Figure 5: Locking directions: High-address word is equal to low-address word.
Figure 6: Locking directions: Low-address words are equal.
Consider a simple example, where the data set of op is {m_1, m_2} and the data set of op' is
{m_1', m_2'}; assume that their data sets intersect. If m_2 = m_1' (the high-address word of op is
the low-address word of op') then the locking directions of op and op' have to be different in
order to avoid a directed path (see Figure 5). If m_1 = m_1' (the low-address word of op is the
low-address word of op') then the locking directions of op and op' have to be equal in order
to avoid a directed path (see Figure 6), and similarly when m_2 = m_2' (the high-address word of
op is the high-address word of op').
This example leads us to concentrate on monotone paths, in which the high-address word of
one operation is the low-address word of another operation (as in Figure 5). In this situation,
we want neighboring operations to lock in different directions (as much as possible).
We first describe the algorithm for the restricted case of a single monotone path, and then
handle the general case, by decomposing an arbitrary conflict graph into monotone paths.
4.1 Monotone Paths
Let op_1, ..., op_l be operations, such that op_i is initiated with processor id pid_i and data set
{m_i, m_{i+1}}. For op_i, the operations with lower indices, op_1, ..., op_{i-1}, are called downstream
neighbors; the operations with higher indices, op_{i+1}, ..., op_l, are called upstream neighbors.
(This situation is similar to the one depicted in Figure 2.)
Assume that an operation chooses its locking direction according to the following rule:
(*) If pid_i is smaller than the pid of the upstream neighbor, pid_{i+1}, then op_i decides left;
otherwise, op_i decides right.
An edge operation, with no upstream neighbor, decides left.
Under this rule, directed paths correspond to ascending or descending chains of pid's; for
example, if all operations decide left, then pid's appear in ascending order. The key insight
is that the length of the longest chain of ascending or descending pid's depends on the range
of pid's. If the range of pid's can be reduced, while ensuring that adjacent operations have
different pid's, then Rule (*) guarantees short directed paths.
We reduce the range of pid's using the "deterministic coin tossing" technique of Cole and
Vishkin [9]. This is a symmetry breaking algorithm for synchronous rings, which we adapt to
monotone paths in an asynchronous system.
The Cole-Vishkin algorithm works in phases. In each phase, the range is reduced by a
logarithmic factor, until the range is small. After the range is reduced, Rule (*) is applied.
Although the new pid's are not unique any more, the fact that the pid's of adjacent operations
are different ensures that operations can decide on their locking directions.
As we see below, in order to perform k range reduction phases, an operation has to know
the pid's of its k + 1 upstream neighbors. Operations without k + 1 upstream neighbors decide
left and are called edge operations. 3
The algorithm guarantees the alternation property-adjacent pid's are not equal.
Assume that the pid's after phase k are in the range {0, ..., l - 1}. By the alternation property,
the length of an ascending or descending sequence of pid's is at most l. However, since there
are only l distinct values, there may be a chain of l operations with the same locking
direction.
For monotone paths, we simplify the description by assuming that (a) all operations start
together, and (b) an operation waits after the locking stage, until all operations finish their
locking stage. Later, we will remove these assumptions.
We first describe a single phase of the algorithm, reducing the range of pid's to O(log n),
with few memory operations. Applying these ideas repeatedly reduces the range to a constant,
using O(log* n) memory operations.
4.1.1 A Single Phase
An operation begins the phase by writing its pid into its low-address word. Since all operations
start together, all memory words are written together; since each operation waits until all
operations finish their locking stage, memory words are not over-written while some operation
is choosing its direction.
The pid's induce pointers between consecutive words in the path: The pid in the low-address
word leads to the operation's details record, where the high-address word of the operation can
be found.
3 Edge operations may also decide according to the parity of their distance from the end of the path.
Figure 7: Reduction of pid's in a single phase.
Assume op_i reads three pid's: From m_i (its own pid), from m_{i+1} and from m_{i+2}, denoted
pid_0(i), pid_0(i+1) and pid_0(i+2), respectively. The binary representations of processors' pid's
are strings of length ⌈log n⌉, where the bits are numbered from 0 to ⌈log n⌉ - 1, going from
least significant bit to most significant bit.
Let j be the index of the least significant (rightmost) bit in which the binary representations
of pid_0(i) and pid_0(i+1) differ; j can be represented as a binary string of length ⌈log log n⌉.
Define pid_1(i) to be the concatenation of the binary representation of j and b_j, the value of
the jth bit in pid_0(i).
Note that the length of pid_1(i) is ⌈log log n⌉ + 1.
In a similar manner, op_i computes pid_1(i+1) from pid_0(i+1) and pid_0(i+2).
Example 4 Consider Figure 7. In this example, pid_0(i) is 01010101 (= 85), pid_0(i+1) is
11111101 (= 253), and pid_0(i+2) is 01111101 (= 125). The index of the rightmost bit in which
pid_0(i) and pid_0(i+1) differ is 3 and the value in pid_0(i) is 0; thus, pid_1(i) is 0110 (= 6).
The index of the rightmost bit in which pid_0(i+1) and pid_0(i+2) differ is 7 and the value in
pid_0(i+1) is 1; thus, pid_1(i+1) is 1111 (= 15).
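The single-phase computation can be sketched in a few lines of Python (illustrative only); running it on the numbers of Example 4 reproduces the values 6 and 15:

def reduce_pid(pid_own, pid_up, length):
    """One phase: pids of `length` bits shrink to ceil(log2(length)) + 1 bits."""
    j = next(b for b in range(length) if ((pid_own >> b) & 1) != ((pid_up >> b) & 1))
    b_j = (pid_own >> j) & 1
    return (j << 1) | b_j              # binary representation of j followed by the bit b_j

print(reduce_pid(0b01010101, 0b11111101, 8))   # 6  == 0110 (j = 3, bit value 0)
print(reduce_pid(0b11111101, 0b01111101, 8))   # 15 == 1111 (j = 7, bit value 1)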
Since no memory word is modified during the decision stage, both op_i and op_{i+1} use the
same value of pid_0(i+1). This implies that a single phase is consistent - the
new pid computed for an operation op by itself is equal to the new pid computed for op by its
downstream neighbor - as stated in the next lemma.
Lemma 4.1 If op_i and op_{i+1} are neighboring operations on the path, then they calculate the
same value for pid_1(i+1).
Thus, we can refer to pid_1(i) without mentioning which processor calculates it.
As described, pid_1(i) is composed from a bit part, denoted pid_1(i).bit, and an index part,
denoted pid_1(i).index.
If pid_1(i) = pid_1(i+1), then pid_1(i).index = pid_1(i+1).index and pid_1(i).bit = pid_1(i+1).bit.
Thus, pid_0(i) and pid_0(i+1) have the same bit in position pid_1(i).index,
contradicting the fact that pid_1(i).index is the rightmost bit in which pid_0(i) and pid_0(i+1)
differ. This proves the following lemma:
Lemma 4.2 If op_i and op_{i+1} are neighboring operations on the path and pid_0(i) ≠ pid_0(i+1),
then pid_1(i) ≠ pid_1(i+1).
Since the initial pid's are distinct, the lemma implies that consecutive values of pid_1 are
not equal, proving the alternation property.
4.1.2 The Multi-Phase Algorithm
We now describe how the above idea is applied repeatedly to reduce the pid's to be at most
three bits long; this guarantees that the longest monotone sequence of pid's contains at most
eight operations.
Denote ℓ(0, n) = ⌈log n⌉, and let ℓ(j+1, n) = ⌈log ℓ(j, n)⌉ + 1. Let
f(n) be the smallest integer j such that ℓ(j, n) ≤ 3; note that f(n) = O(log* n).
An operation starts by writing its pid in its low-address word; then it reads f(n) + 1
upstream memory words. An edge operation, without f(n) + 1 upstream neighbors, chooses
left, without any further calculation.
Let the pid's read by op_i be pid_0(i), pid_0(i+1), ..., pid_0(i+f(n)+1). By iterating on
k = 1, ..., f(n), the operation computes pid_k(j) from pid_{k-1}(j) and pid_{k-1}(j+1), for every j,
as in the single-phase algorithm (Section 4.1.1).
Lemma 4.1 immediately implies that the algorithm is consistent.
Lemma 4.3 If op_i and op_{i+1} are neighboring operations on the path, then they calculate the
same value for pid_k(i+1), for every k, 1 ≤ k ≤ f(n).
The alternation property is proved by induction, applying Lemma 4.2 for every iteration.
Lemma 4.4 If op_i and op_{i+1} are neighboring operations on the path such that pid_0(i) ≠
pid_0(i+1), then pid_k(i) ≠ pid_k(i+1), for every k, 1 ≤ k ≤ f(n).
Proof: The proof is by induction on the phases of the local computation, denoted k, 0 ≤ k ≤
f(n). In the base case, k = 0 and pid_0(i) ≠ pid_0(i+1), by the assumption.
For the induction step, assume the lemma holds for phase k, that is,
every consecutive pair of pid_k values are different. Since each iteration is the same as the
one-phase algorithm, Lemma 4.2 implies that pid_{k+1}(i) ≠ pid_{k+1}(i+1).
If pid_k(i) can be represented with x bits, then pid_{k+1}(i) has to be represented with
⌈log x⌉ + 1 bits; however, for any x > 3, ⌈log x⌉ + 1 < x.
This shows that after each iteration, the pid's length is strictly reduced.
After f(n) iterations the length is at most three, showing that every value of pid f(n)\Gamma1 is at
most three bits long. Thus, there are at most eight consecutive operations with ascending or
descending pid f(n) values.
After the range of pid's is reduced, an operation chooses a locking direction by comparing
its pid and the pid of its upstream neighbor, following Rule (*). Edge operations, without
enough upstream neighbors, decide left. At most eight consecutive operations decide left, and at
most eight consecutive operations decide right. This proves the following theorem:
Theorem 4.5 The length of a directed path is at most eight.
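The whole local computation of a non-edge operation (iterated range reduction followed by Rule (*)) can be sketched as follows (Python, illustrative; it assumes adjacent pids differ, as the alternation property guarantees):

import math

def reduce_pid(a, b, length):
    j = next(i for i in range(length) if ((a >> i) & 1) != ((b >> i) & 1))
    return (j << 1) | ((a >> j) & 1)

def decide_direction(pids, n):
    """pids[0] is the operation's own pid; pids[1:] are its upstream neighbors' pids."""
    length = max(1, math.ceil(math.log2(n)))            # current pid length in bits
    vals = list(pids)
    while length > 3 and len(vals) >= 2:                # one reduction phase per iteration
        vals = [reduce_pid(vals[j], vals[j + 1], length) for j in range(len(vals) - 1)]
        length = max(1, math.ceil(math.log2(length))) + 1
    if len(vals) < 2:
        return "left"                                   # edge operation: too few upstream pids
    return "left" if vals[0] < vals[1] else "right"     # Rule (*)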
4.2 General Topology
In order to apply the range reduction technique of the previous section in general topologies,
we "disentangle" an arbitrary combination of overlapping and contending operations into a
collection of monotone paths. To achieve this, an operation first checks whether its data set
may create a non-monotone path: If it does, then the operation stalls while helping other
operations; otherwise, it applies the algorithm for a monotone path.
To explain this idea further, we need to define monotone paths more precisely. Assume
memory words m_1, ..., m_l form an undirected path in some conflict graph. Memory word m_i,
1 < i < l, is a local minimum if m_i < m_{i-1} and m_i < m_{i+1}; it is a local maximum if
m_i > m_{i-1} and m_i > m_{i+1}. A local minimum is created when two operations have the same
low-address word (as in Figure 6); a local maximum is created when two operations have the
same high-address word. A path is monotone if it does not contain local minima or maxima.
The decision stage of each operation is preceded with a separate marking stage. In the
marking stage, operations check the memory access patterns before trying to lock their memory
words, to detect local minima or maxima and avoid non-monotone paths. Only one of the
operations forming a local minimum or a local maximum continues and the others stall.
The marking stage maintains a variant of the conflict graph in the shared memory. Nodes
are marked memory words; a word can be either marked low, if it is the low-address word of
some operation, or marked high, if it is the high-address word of some operation. A word can
be marked as both low and high, if it is the low-address word of one operation and the high-
address word of another operation. A marked memory word is in the data set of an operation
which is not stalled.
An operation starts by trying to mark its low-address and high-address words. If marking
succeeds, then the operation's data set is on a monotone path and the operation decides on
a locking direction in a manner similar to Section 4.1. If marking fails, then the operation's
data set creates a non-monotone path; the operation stalls while helping other operations.
A word has two special fields-for low marking and for high marking. An operation marks
a word by writing its id instead of a ? in the low/high field; marking fails if the relevant field
is not ?. If an operation succeeds in marking the low field of its low-address word, then it
tries to mark the high field of its high-address word.
An operation unmarks its data set after unlocking it.
A word cannot be marked as high twice, or as low twice, implying that if two overlapping
operations have the same high-address word or the same low-address word, only one
of them succeeds in marking the word while the other stalls. Consequently, there are no local
minima or maxima, avoiding non-monotone paths.
An operation can mark a memory word as low even if the word is already marked high by
another operation. In this respect, marking is different than locking since a word can not be
locked by two different operations, but it can be marked by two different operations.
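The marking bookkeeping can be illustrated with a small sketch (Python; plain assignments stand in for the LL/SC steps of the real algorithm, and all names are illustrative):

END = "end"
marks = {}                                    # addr -> {'low': ..., 'high': ...}

def fields(addr):
    return marks.setdefault(addr, {'low': None, 'high': None})

def mark_low(addr, op_id):
    f = fields(addr)
    if f['low'] is None:                      # fails on a second low mark or on an end mark
        f['low'] = op_id
        return True
    return False

def mark_high(addr, op_id):
    f = fields(addr)
    if f['high'] is None:
        f['high'] = op_id
        return True
    return False                              # an intersection: the operation stalls

def unmark_low(addr, op_id):
    f = fields(addr)
    if f['low'] == op_id:
        # leave the end mark behind if a downstream neighbor exists, cutting the path
        f['low'] = END if f['high'] is not None else None

A word may end up marked low by one operation and high by another, but never low twice or high twice, which is exactly what rules out local minima and maxima.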
Two problems arise due to the dynamic nature of the conflict graph:
1. If new operations join the end of a marked path after the locking stage starts, then an
edge operation may help upstream operations after it finds the end of the path.
This can increase the sensitivity during the locking stage.
2. An operation may unmark its data set and then another operation with the same data
set may take its place.
This can yield inconsistencies, if some downstream operations use the first operation's
pid for their local computations, while other downstream operations use the second
operation's pid.
Both problems are handled by the same mechanism:
An operation finding the end of a path, prunes the path by placing a special end symbol
in the low field of the last word of the path. No operation can later mark the last word as low,
and hence no new operation can be "appended" at the end of the path.
When an operation unmarks its data set, it replaces its id with end in the low field of its
low-address word, if this word is marked high (i.e., if it has a downstream neighbor). In this
way, the path is "cut" at the word where the data set was unmarked; new operations will not
be able to mark this word and confuse the downstream operations.
When the high field is unmarked and the low field is marked end, both fields are cleared
and set to ?.
4.2.1 The Pseudocode
Each memory word contains two fields for marking, low and high, which may contain an
operation id, end or ?; both are initially ?.
A binary intersection field is added to the record containing the operation's details; this
field is set when the operation is intersected, i.e., its high-address word is already marked high
by another operation and is part of another monotone path. The intersection field is cleared
at the beginning of the marking stage; if it is set during the operation, then it is not cleared
until the operation terminates.
Each operation holds a local array, id-array, in which the pids of its upstream operations
are collected for the local computation (as in the monotone path algorithm of Section 4.1). The
tmp variable contains the last value read by the low- and high-level functions. The functionality
of other local variables should be clear from the code.
The high-level procedures for the decision and the post-decision stages appear in Algorithm
3. Algorithms 4 and 5 detail the code for synchronizing access to the shared data
structures.
An operation starts by initializing the local variables (see the code) and clearing the
intersection field. Then, the operation tries to mark its low-address word, using first. If
marking fails in first, then the operation helps the operation whose id is marking the word as
low until the word is unmarked, and tries again; if marking succeeds, the operation continues.
To advance to the next memory word, the operation extracts the initiating processor's id
from the operation id in the current word, and reads the high-address word from its record in
the op-details array. This is the next word on the monotone path. The operation also stores
the initiating processor's id in id-array for the local computation.
An operation marks its high-address word using next. If both the high and low fields are
empty, then the operation tries to mark the high field of the word and put the end mark in the
low field; if this is successful, the operation has just marked the end of the path, and it returns
as left. If the high field is empty and the low field is not, then it tries to mark the high field of
the word; if successful, it continues to the upstream words. Otherwise, this is an intersection
with another path; the operation sets the intersection flag, unmarks its low-address word, helps
the operation which is written in the high field of the word and starts again. An intersected
operation first unmarks its low-address word so that operations helping it will not continue to
help its upstream operations.
An operation op i uses next to access upstream words. The parameters passed to next are the
address of the last word accessed by op i , the current address, and op j , the id of the operation
whose high-address word it is. Hence, op i helps op j to mark its high-address word.
Note that if op i finds that op j is intersected, then op i acts as if it discovers the end of the
path, since op j is going to unmark its low-address word.
The operation unmarks its low-address word with unmark-low and its high-address word
with unmark-high. Procedure unmark-low replaces its id with end in the low field of its
low-address word, if the high field of its low-address word is not ?, i.e., it has a downstream
neighbor; otherwise, it clears the low field.
Algorithm 3 The general algorithm: Decision and post-decision stages.
local id my-op-id // id of the operation being executed
local id last-op-id // id of the last operation read
local id id-array[f(n)+2] // for local computation of reduced id
local addr current, prev // current and previous addresses
local int index // index to id-array
local addr tmp // persistent, used to advance
procedure decision(m
op-details[op-id.pid].intersecting
low-address word
while advance to the next word
tmp.low // tmp is set in first or inside
current
if ( next(prev, current, op-id) ) then return left // an edge operation
return according to local computation on id-array // as in monotone path algorithm
procedure first(addr, op-id)
repeatedly try to mark the low-address word
if mark-low(addr, op-id) then return
else help(addr.low)
procedure next(prev, addr, op-id)
repeatedly try to mark upstream words
if mark-end(addr, op-id) then return true // an edge operation
if mark-high(addr, op-id) then return false // continue to next word on the path
if set-intersection(addr, op-id) then // intersected operation
initiating processor
unmark-word(prev) // unmark the low-address word
get the op-id of intersected operation
restart the operation // get a new timestamp and clear the intersection flag
else return true // not the initiating processor
if ( not validate(op-id) ) then return true // an edge operation
procedure post-decision(m
Algorithm 4 The general algorithm: Low-level procedures for the decision stage.
procedure mark-low(addr, op-id)
if ( tmp.low == op-id) then return true // marking is successful
else if ( tmp.low 6= ?) then return false // marked by another operation
procedure mark-end(addr, op-id)
if ( not check-intersection(op-id) ) then // not intersected
SC(addr,(op-id, end)) // mark as ending
if ( tmp.high == op-id ) then return true // marking is successful
else if ( tmp.high 6= ? ) then return false // marked by another operation
procedure mark-high(addr, op-id)
if ( not check-intersection(op-id) ) then // not intersected
if ( tmp.high == op-id ) then return true // marking successful
else if ( tmp.high 6= ? ) then return false // marked by another operation
procedure check-intersection(op-id)
SC(op-details[op-id.pid].intersection, tmp) // "touch" the intersection flag
else return false
procedure set-intersection(addr, op-id)
SC(op-details[op-id.pid].intersection, true)
return( op-details[op-id.pid].intersection
Algorithm 5 The general algorithm: Low-level procedures for the post-decision stage.
procedure unmark-low(addr, op-id)
if ( tmp.low == op-id) then
there are downstream operations
ending mark in low
else SC(addr,(?, ?))
else return
procedure unmark-high(addr, op-id)
if ( tmp.high == op-id) then
if ( tmp.low == end ) then // edge operation
ending mark
else SC(addr,(?, tmp.low)) // unmark high field
else return
4.2.2 Proof of Correctness
The proof of correctness concentrates on properties of the marking stage: We prove that the
data set of an operation is marked when the first executing operation enters the locking stage,
and unmarked when the first executing processor completes the operation.
An operation marks its low-address word in first, and tries to mark its high-address word
in the first call to next. A non-intersected operation returns from next only after it marks the
word passed as the parameter; an intersected operation restarts and does not return from next
at all. Therefore, the high-address word is marked when the first call to next returns. This
implies the next lemma:
Lemma 4.6 The data set of an operation op is marked when the first executing processor of
op enters the locking stage.
A word is unmarked only when the post-decision stage is reached, or when the initiating
processor finds that the operation is intersected (in next), and restarts the operation. This
implies the next lemma:
Lemma 4.7 The data set of an operation remains marked until the operation completes.
Next, we prove that the data set of the operation remains unmarked after the operation
completes. A problem may occur if some executing processors set the intersection flag, while
other executing processors mark the high-address word.
Figure 8: Illustration for the proof of Lemma 4.8, Case 1.
Lemma 4.8 The intersection flag of an operation is set if and only if its high-address word is
not marked.
Proof: Three procedures access the high field-mark-end, mark-high and unmark-high. Procedure
unmark-high does not mark an unmarked word. Therefore, only mark-end and mark-high
can mark a previously unmarked operation.
We only consider mark-high; the same proof applies to mark-end, which has the same synchronization
structure. Let m be the high-address word of some operation op; consider the
memory accesses in set-intersection and in mark-high, with the call to check-intersection expanded:
set-intersection: S1: LL(m.intersection), S2: read(m.high), S3: SC(m.intersection)
mark-high: H1: LL(m.high), ..., H4: read(m.intersection), H5: SC(m.high)
Case 1: Suppose that m is marked as high after m.intersection is set (that is, S3 precedes
H5). SC(m.high) in mark-high (H5) is reached only if read(m.intersection) in mark-high (H4)
returns ?, hence, H4 precedes SC(m.intersection) in set-intersection (S3). Since LL(m.high)
in mark-high (H1) returns ? it precedes read(m.high) in set-intersection (S2) which returns
a non-? value. (See Figure 8.) Therefore, there is an intervening write to m.high between
LL(m.high) and the matching SC(m.high) in mark-high, so the SC fails.
Case 2: Suppose that m.intersection is set after m is marked as high (that is, H5 precedes
S3). SC(m.intersection) in set-intersection (S3) succeeds only if SC(m.intersection) in mark-high
precedes LL(m.intersection) in set-intersection (S1). (See Figure 9.) Since LL(m.high) in
mark-high returns ? and read(m.high) in set-intersection (S2) returns a non-? value,
there is an intervening write to m.high between LL(m.high) and the matching SC(m.high) in
mark-high, so the SC fails.
Figure 9: Illustration for the proof of Lemma 4.8, Case 2.
Lemma 4.9 The data set of an operation op remains unmarked after op terminates.
Proof: If m is the low-address word of op then m is marked only by the initiating processor
of op, before any processor starts executing op. Thus, no executing processor of op marks m
after it is unmarked, which proves the lemma when m is the low-address word of op.
Assume m is the high-address word of op. If op terminates at the post-decision stage,
then op is invalidated (its ts field is reset) and its data set is unmarked by unmark-word. If
op terminates since it is intersected (in next), the intersection flag is set until the operation is
invalidated by the initiating processor. By Lemma 4.8, a memory word is not marked if the
intersection flag is set.
Therefore, we only have to prove that an unmarked high-address word is not marked again
when op is invalid. Only mark-end and mark-high mark the high-address word; as in the proof
of Lemma 4.8, we only consider mark-high.
Assume, by way of contradiction, that mark-high marks m again after it is unmarked,
and that op is invalid. Since mark-high validates the operation before marking, op is invalidated
between the validation and the SC(m.high) operation in mark-high. Moreover, the LL operation
in mark-high reads ? from m.low and m.high. By Lemma 4.6, m is marked when the first
executing processor of op reaches the locking stage and is unmarked only after the operation is
invalidated. Thus, m is marked with a write to m.high between LL(m.high) and its matching
SC(m.high) in mark-high, so the SC fails.
4.2.3 Analysis of the Algorithm
Lemma 4.10 Only monotone paths exist during the locking stage.
Proof: The data set of an operation is marked when the first executing process enters the
locking stage (Lemma 4.6) and remains marked until the first executing process completes the
post-decision stage (Lemma 4.7).
A memory word can not be marked low twice or marked high twice, by different operations,
by the code of mark-low, mark-high and mark-end. Thus, two operations with the same low-
address or high-address words cannot be in their locking stage together. That is, there are no
local minima or maxima and only monotone paths exist during the locking stage.
Lemma 4.11 Let op i be a downstream neighbor of op i+1 , and assume op i and op i+1 decide by
local computation. The last f(n) entries in id-array i
are the first f(n) entries in id-array i+1
Proof: By Lemma 4.10, the data sets of op_i and op_{i+1} are on a monotone path.
If op_i and op_{i+1} read different values from some memory word, m_j, then some
operation unmarked m_j between the reads from m_j.
Without loss of generality, let op_i be the operation that reads from m_j after it is unmarked.
We argue that op_i exits only as an edge operation, by induction on the distance between
m_i and m_j in the conflict graph. This contradicts the assumption that op_i and op_{i+1} decide
by local computation, and proves the lemma.
In the base case, if the distance is 1, then m_j is in the data set of op_i. If op_i
marks m_j as an ending word, then op_i
is an edge operation and the claim is proved. If another
operation op' marks m_j as an ending word, then since the low-address word is marked before the
high-address word, m_i is also marked by op'. Thus, op_i does not mark its data set; therefore,
op_i stalls and the claim follows.
For the induction step, assume the lemma holds when the distance between m i and m j is
assume that distance is l. If op i
reads end from m j
, then op i
is an edge operation
and the claim follows. Since m j is unmarked before op i
reads from it, some operation op writes
marked end before op i
it, then the
claim follows by the induction hypothesis. Otherwise, op_i finds that op is
invalid. Therefore, next returns true and op_i exits as an edge operation.
The decision algorithm for a monotone path and the general decision algorithm differ only
in the marking phase. Lemma 4.11 implies that the new pid computed for an operation op
by itself is equal to the new pid computed for op by its downstream neighbor. Since both
algorithms have the same local computation, Theorem 4.5 implies:
Lemma 4.12 The length of a directed path of non-edge operations is at most eight.
The end of the path is at most f(n)+1 operations from an edge operation. New operations
can not join the end of the path by marking the low field of the last word, since it contains
end. Therefore, an edge operation helps only operations with distance smaller than or equal
to f(n) + 1.
Lemma 4.13 An edge operation does not help upstream operations with distance larger than
f(n) + 1.
Theorem 4.14 The sensitivity of the decision stage and the locking stage is O(log* n).
Proof: The sensitivity of the decision stage is at most f(n) since an operation advances
at most f(n) words on the path in the conflict graph which contains its high-address word.
If there is a directed path of length nine in the locking stage in which operations lock from
right to left, then they all decide by local computation, which contradicts Lemma 4.12.
If there is a directed path of length 9 in the locking stage in which operations lock
from left to right, then there is an edge operation on the path with distance larger than f(n)+1
from the end of the path, by Lemma 4.12. This contradicts Lemma 4.13.
By Lemma 3.6, the sensitivity of the locking stage is at most f(n) + O(1).
The algorithm does not guarantee local contention, as defined in [1]: Two operations may
access the same entry in op-details for different operations of the same processor, although
they are far away in the conflict graph. This happens since the op-details array is indexed by
processors' ids; this can be easily fixed by indexing op-details with operations' ids (as was done
in [1]).
5 The Step Complexity of Implementing Binary LL/SC
In this section, we prove an Ω(log log* n) lower bound on the number of steps required for
implementing a binary SC operation using unary operations. The lower bound is proved by
showing a problem which can be solved in O(1) operations using binary LL/SC, but requires
Ω(log log* n) operations if only unary operations (of any type) are used.
The "separating problem" is a variant of the maximal independent set (MIS) problem which
is defined as follows: A set of n processors, p_1, ..., p_n, is organized in a virtual ring; processor
p_i is assigned an initialized memory word m_i, and gets as input the address of the memory
word of its clockwise neighbor, m_{next_i}. Every processor has to terminate either as a member
or as a non-member; it is required that: (a) no two consecutive processors are members, and
(b) each non-member processor has at least one neighbor that will halt as a member.
The problem can be trivially solved with binary synchronization operations, LL and SC2:
Processor p_i load-links m_i and then load-links m_{next_i}; if they are both ? then p_i tries to SC2
its pid, atomically, into m_i and m_{next_i}. If p_i succeeds, it exits as a member; otherwise, it
exits as a non-member.
Next, we show that any maximal independent set algorithm which uses only unary operations
has an execution in which some processor performs at least Ω(log log* n) operations. Linial
has proved that Ω(log* n) rounds are required to solve the MIS problem in the message-passing
model [20]. We modify this proof to the shared-memory model, but we get a smaller bound.
Linial uses the fact that in the message-passing model, by round t, a processor knows only
the addresses and the pid's of processors that are at distance at most t from it. This is not true in the
shared-memory model. Assume that the computation proceeds in rounds, and in each round
a processor performs a single memory operation. If each processor knows the addresses of
processors at distance d from it after t rounds, and during round t + 1 processor p_i accesses the memory
word of a processor at distance k ≤ d from it, then it knows the addresses of processors at distance
d + k after round t + 1. However, the next lemma proves that this is the best that may happen:
Lemma 5.1 At round t, each processor on the ring knows the pid's and the addresses only of
processors at distance at most 2^t from itself.
Proof: The lemma is proved by induction on t, the round number. The base case is t = 0.
In this round, each processor knows only what it receives as input, that is, the addresses of its
two words. That is, it knows the address of its clockwise neighbor.
For the induction step, we assume the lemma holds for round t, and prove the lemma for
round t + 1. By the induction assumption, after round t processor p_i knows the pid's and the
addresses of processors at distance at most 2^t from it. Thus, in round t + 1, a processor p_i can access
the memory word of a single processor at distance at most 2^t from it. That is, it can learn the pid's
and the addresses that this processor knows. Therefore, it can know the pid's and the addresses of
processors at distance at most 2^{t+1} from it.
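The way Lemma 5.1 enters the lower bound can be summarized in one line (a sketch of the counting step, assuming that the coloring argument below indeed requires a view of radius x = Ω(log* n)):

2^{t} \ge x \;\Longrightarrow\; t \ge \log_2 x, \qquad \text{so } x = \Omega(\log^* n) \text{ forces } t = \Omega(\log \log^* n).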
The rest of the proof closely follows Linial [20].
We first argue that an algorithm finding a maximal independent set in a ring can be
converted into a 3-coloring algorithm in one more operation. After a processor decides on its
membership, it checks the decision of its right neighbor: If both decide non-member, it picks
color 1; if it decides member and its neighbor decides non-member, it picks color 2; otherwise,
it picks color 3.
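In code, the conversion is a single conditional per processor (Python sketch; the membership values are the outputs of the MIS algorithm):

def three_color(member, right_member):
    if not member and not right_member:
        return 1
    if member and not right_member:
        return 2
    return 3        # remaining case: a non-member whose right neighbor is a member
                    # (two adjacent members cannot occur, by requirement (a))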
Let V be the set of all vectors (v_1, ..., v_{2x+1}), where the v_i are mutually distinct
processors' ids. A 3-coloring algorithm is a mapping c from V to {1, 2, 3}.
We construct a graph B_{x,n}, whose set of nodes is V. All edges of B_{x,n} are of the form:
((u, v_1, ..., v_{2x}), (v_1, ..., v_{2x}, w)). B_{x,n} has n(n-1)...(n-2x) nodes and is a regular graph.
The mapping c is a 3-coloring of B_{x,n}. To see this, suppose c maps two adjacent nodes
(u, v_1, ..., v_{2x}) and (v_1, ..., v_{2x}, w) to the same color. Then the 3-coloring algorithm for the
ring fails when the labeling happens to contain the segment (u, v_1, ..., v_{2x}, w).
By a result of Linial [20], the chromatic number of B_{x,n} is at least log^{(2x)} n, the 2x-times
iterated logarithm of n. Therefore, to color
B_{x,n} with at most three colors, we must have x = Ω(log* n), that is, t = Ω(log log* n).
This implies that Ω(log log* n) steps are needed in order to solve the MIS problem. Together
with the O(1) algorithm which solves the MIS problem using binary LL/SC, this proves the
following theorem.
Theorem 5.2 An implementation of binary LL/SC operations from unary operations must
have Ω(log log* n) step complexity.
6 Discussion
This paper defines the sensitivity of implementing binary operations from unary operations;
the sensitivity is the distance, in terms of intersecting data sets, between two concurrent
operations that guarantees they do not interfere with each other. Clearly, if the sensitivity of
an implementation is low, then more operations can execute concurrently with less interference.
In our context, we say that one operation "interferes" with another operation if one of them is
delayed because of the other. However, the notion of interference can be modified; for example,
one can add the requirement that the set of memory words accessed by an operation does not
change when it executes concurrently with another operation.
We present an algorithm for implementing a binary operation (of any type) from unary LL
and SC operations, with sensitivity O(log* n). The algorithm employs a symmetry breaking
algorithm based on "deterministic coin tossing" [9]. For practical purposes, a simple non-deterministic
symmetry breaking technique could be employed; however, care should be taken
to avoid deadlocks in this scheme.
Interestingly, our core algorithm-locking memory words in two directions-is similar to the
left-right dining philosophers algorithm (cf. [21, pp. 344-349]). In this problem, n philosophers
sit around a table and there is a fork between any pair of philosophers; from time to time, a
philosopher gets hungry and has to pick the two forks on both sides in order to eat. In the
left-right dining philosophers algorithm, a philosopher sitting in an odd-numbered place first
picks the left fork, while a philosopher sitting in an even-number place, first picks the right
fork. As in our implementation of a linked list, this guarantees short waiting chains when
many philosophers are hungry.
We also prove that any implementation of binary LL/SC from unary operations will have
to incur non-constant overhead in step complexity. This lower bound is not tight, since the
step complexity of the wait-free extension of our algorithm [1] is at least O(log n).
Acknowledgments
: The authors thank Shlomo Moran, Lihu Rappoport and Gadi Tauben-
feld for helpful comments on a previous version of the paper.
--R
Disentangling multi-object opera- tions
Performance issues in non-blocking synchronization on shared-memory multiprocessors
Universal constructions for multi-object operations
Primitives for asynchronous list compression.
The performance of spin lock alternatives for shared-memory multipro- cessors
Universal operations: Unary versus binary.
A method for implementing lock-free data structures
Localizing failures in distributed synchronization.
Deterministic coin tossing with applications to optimal parallel list ranking.
Universal operations: Unary versus binary.
Alpha Architecture Handbook.
Contention in shared memory systems.
The synergy between non-blocking synchronization and operating system structure
A methodology for implementing highly concurrent data objects.
Transactional memory: Architectural support for lock-free data structures
A correctness condition for concurrent objects.
Efficient wait-free implementation of a concurrent priority queue
Locality in distributed graph algorithms.
Distributed Algorithms.
The PowerPC Architecture: A Specification for a New Family of RISC Processors.
"Hot spot"
Software transactional memory.
Alpha AXP architecture.
Locking without blocking: Making lock based concurrent data structure algorithms nonblocking.
--TR
Deterministic coin tossing with applications to optimal parallel list ranking
Linearizability: a correctness condition for concurrent objects
Wait-free synchronization
Locality in distributed graph algorithms
Performance issues in non-blocking synchronization on shared-memory multiprocessors
Locking without blocking
Alpha AXP architecture
A methodology for implementing highly concurrent data objects
Transactional memory
A method for implementing lock-free shared-data structures
The PowerPC architecture
Primitives for asynchronous list compression
Disjoint-access-parallel implementations of strong shared memory primitives
The synergy between non-blocking synchronization and operating system structure
Localizing Failures in Distributed Synchronization
Universal operations
Disentangling multi-object operations (extended abstract)
Contention in shared memory algorithms
Universal Constructions for Large Objects
Distributed Algorithms
The Performance of Spin Lock Alternatives for Shared-Money Multiprocessors
Efficient Wait-Free Implementation of a Concurrent Priority Queue | universal operations;deterministic coin tossing;load-linked/store-conditional operations;wait-free algorithms;asynchronous shared-memory systems;contention-sensitive algorithms |
502111 | A fully sequential procedure for indifference-zone selection in simulation. | We present procedures for selecting the best or near-best of a finite number of simulated systems when best is defined by maximum or minimum expected performance. The procedures are appropriate when it is possible to repeatedly obtain small, incremental samples from each simulated system. The goal of such a sequential procedure is to eliminate, at an early stage of experimentation, those simulated systems that are apparently inferior, and thereby reduce the overall computational effort required to find the best. The procedures we present accommodate unequal variances across systems and the use of common random numbers. However, they are based on the assumption of normally distributed data, so we analyze the impact of batching (to achieve approximate normality or independence) on the performance of the procedures. Comparisons with some existing indifference-zone procedures are also provided. | Introduction
In a series of papers (Boesel and Nelson 1999, Goldsman and Nelson 1998ab, Nelson and
Banerjee 1999, Nelson and Goldsman 1998, Nelson, Swann, Goldsman and Song 1998, Miller,
Nelson and Reilly 1996, 1998ab), we have addressed the problem of selecting the best simulated
system when the number of systems is finite and no functional relationship among the
systems is assumed. We have focussed primarily on situations in which "best" is defined by
maximum or minimum expected performance, which is also the definition we adopt in the
present paper.
Our work grows out of the substantial literature on ranking, selection and multiple comparison
procedures in statistics (see, for instance, Bechhofer, Santner and Goldsman 1995,
Hochberg and Tamhane 1987, and Hsu 1996), particularly the "indifference zone" approach
in which the experimenter specifies a practically significant difference worth detecting. Our
approach has been to adapt, extend and invent procedures to account for situations and
opportunities that are common in simulation experiments, but perhaps less so in physical
experiments. These include:
• Unknown and unequal variances across different simulated systems.
• Dependence across systems' outputs due to the use of common random numbers.
• Dependence within a system's output when only a single replication is obtained from
each system in a "steady-state simulation."
• A very large number of alternatives that differ widely in performance.
• Alternatives that are available sequentially or in groups, rather than all at once, as
might occur in an exploratory study or within an optimization/search procedure.
Prior to the present paper, we have proposed procedures that kept the number of stages
small, say 1, 2 or 3, where a "stage" occurs whenever we initiate a simulation of a system
to obtain data. It makes sense to keep the number of stages small when they are implemented
manually by the experimenter, or when it is difficult to stop and restart simulations.
However, as simulation software makes better use of modern computing environments, the
programming difficulties in switching among alternatives to obtain increments of data are
diminishing (although there may still be substantial computing overhead incurred in making
the switch). The procedures presented in this paper can, if desired, take only a single basic
observation from each alternative still in play at each stage. For that reason they are said to
be "fully sequential." The motivation for adopting fully sequential procedures is to reduce
the overall simulation effort required to find the best system by eliminating clearly inferior
alternatives early in the experimentation.
For those situations in which there is substantial computing overhead when switching
among alternative systems, we also evaluate the benefits of taking batches of data-rather
than a single observation-from each system at each stage. These results have implications
for the steady-state simulation problem when the method of batch means is employed, or
when the simulation data are not approximately normally distributed.
Our work can be viewed as extending, in several directions, the results of Paulson (1964)
and Hartmann (1991), specifically in dealing with unequal variances across systems and dependence
across systems due to the use of common random numbers (CRN). See also Hartmann
(1988), Bechhofer, Dunnett, Goldsman and Hartmann (1990) and Jennison, Johnstone
and Turnbull (1982).
The paper is organized as follows. In Section 2 we provide an algorithm for our fully
sequential procedure and prove its validity. Section 3 provides guidance on how to choose
various design parameters of the procedure, including critical constants, batch size and
whether or not to use CRN. Some empirical results are provided in Section 4, followed
by conclusions in Section 5.
2 The Procedure
In this section we describe a fully sequential procedure that guarantees, with confidence level greater than or equal to 1 − α, that the system ultimately selected has the largest true mean when the true mean of the best is at least δ better than the second best. When there are other systems whose means are within δ of the true best, then the procedure guarantees to find one of these "good" systems with the same probability. The parameter δ, which is termed the indifference zone, is set by the experimenter to the smallest actual difference that it is important to detect. Differences of less than δ are considered practically insignificant.
The procedure is sequential, has the potential to eliminate alternatives from further
consideration at each stage, and terminates with only one system remaining in contention.
However, the experimenter may also choose to terminate the procedure when there are m > 1 systems still in contention, in which case the procedure guarantees that the subset of size m contains the best system (or one of the good systems) with confidence greater than or equal to 1 − α.
Throughout the paper we use the notation X_ij to indicate the jth independent observation from system i. We assume that the X_ij are normally distributed with means μ_i = E[X_ij] and variances σ²_i = Var[X_ij], both unknown. Notice that X_ij may be the mean of a batch of observations, provided the batch size remains fixed throughout the procedure (we analyze the effect of batch size later). We also let X̄_i(r) = (1/r) Σ_{j=1}^{r} X_ij denote the sample mean of the first r observations from system i. The procedure is valid with or without the use of common random numbers.
Fully Sequential, Indifference-Zone Procedure

Setup: Select confidence level 1 − α, indifference zone δ and first-stage sample size n₀ ≥ 2. Calculate η and c as described below.

Initialization: Let I = {1, 2, ..., k} be the set of systems still in contention, and let h² = 2cη(n₀ − 1). Obtain n₀ observations X_ij, j = 1, 2, ..., n₀, from each system i. For all i ≠ ℓ compute

S²_iℓ = (1/(n₀ − 1)) Σ_{j=1}^{n₀} ( X_ij − X_ℓj − [ X̄_i(n₀) − X̄_ℓ(n₀) ] )²,

the sample variance of the difference between systems i and ℓ. Let

N_iℓ = ⌊ h² S²_iℓ / δ² ⌋,

where ⌊·⌋ indicates truncation of any fractional part, and let N_i = max_{ℓ ≠ i} N_iℓ. Here max_i N_i + 1 is the maximum number of observations that will be taken from any system. If n₀ > max_i N_i, then stop and select the system with the largest X̄_i(n₀) as the best. Otherwise set r = n₀ and go to Screening.

Screening: Set I_old = I. Let

I = { i : i ∈ I_old and X̄_i(r) ≥ X̄_ℓ(r) − W_iℓ(r) for all ℓ ∈ I_old, ℓ ≠ i },

where

W_iℓ(r) = max{ 0, (δ/(2cr)) ( h² S²_iℓ / δ² − r ) }.

Notice that W_iℓ(r) decreases as the number of replications r increases.

Stopping Rule: If |I| = 1, then stop and select the system whose index is in I as the best. Otherwise, take one additional observation X_{i,r+1} from each system i ∈ I and set r = r + 1. If r = max_i N_i + 1, then stop and select the system whose index is in I and has the largest X̄_i(r) as the best. Otherwise go to Screening.

(Notice that the stopping rule can also be |I| = m if it is desired to find a subset of size m containing the best, rather than the single best.)
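To make the algorithm concrete, the following Python sketch implements the steps above. It is a minimal illustration under the formulas as reconstructed here, not the authors' code; the simulator interface sample(i, n) and the way η is supplied (computed as discussed below) are our assumptions.

import numpy as np

def kn_procedure(sample, k, delta, eta, c=1, n0=10):
    """Sketch of the fully sequential indifference-zone procedure.

    sample(i, n) must return n (possibly CRN-coupled) observations from system i;
    eta is the constant solving g(eta) = alpha/(k-1).  Illustrative only.
    """
    h2 = 2.0 * c * eta * (n0 - 1)                      # h^2 = 2 c eta (n0 - 1)
    X = [list(sample(i, n0)) for i in range(k)]        # first-stage data
    S2 = np.zeros((k, k))                              # variances of pairwise differences
    for i in range(k):
        for l in range(i + 1, k):
            d = np.array(X[i][:n0]) - np.array(X[l][:n0])
            S2[i, l] = S2[l, i] = np.var(d, ddof=1)
    N = {i: int(np.floor(h2 * S2[i].max() / delta**2)) for i in range(k)}
    I, r = set(range(k)), n0
    while len(I) > 1 and r <= max(N.values()):
        means = {i: np.mean(X[i][:r]) for i in I}
        survivors = set()
        for i in I:                                    # screening against all others in I_old
            W = {l: max(0.0, (delta / (2 * c * r)) * (h2 * S2[i, l] / delta**2 - r))
                 for l in I if l != i}
            if all(means[i] >= means[l] - W[l] for l in I if l != i):
                survivors.add(i)
        I = survivors
        if len(I) == 1:
            break
        for i in I:                                    # one more observation per survivor
            X[i].extend(sample(i, 1))
        r += 1
    means = {i: np.mean(X[i][:r]) for i in I}
    return max(I, key=means.get), r                    # selected system and terminal stage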
Constants: The constant c may be any positive integer; we evaluate different choices later in the paper and argue that c = 1 is typically the best choice. The constant η is the solution to the equation

g(η) ≡ Σ_{l=1}^{c} (−1)^{l+1} ( 1 − ½ I(l = c) ) ( 1 + 2η(2c − l)l/c )^{−(n₀−1)/2} = α/(k − 1),

where I is the indicator function. In the special case that c = 1 we have the closed-form solution

η = ½ [ ( 2α/(k − 1) )^{−2/(n₀−1)} − 1 ].
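As a sanity check on these constants, the snippet below solves the defining equation numerically and compares the result with the closed form at c = 1. It is a sketch based on the formulas as reconstructed above; the function names are ours.

import numpy as np
from scipy.optimize import brentq

def g(eta, c, n0):
    """g(eta) = sum_{l=1}^{c} (-1)^{l+1} (1 - 0.5*I(l=c)) (1 + 2*eta*(2c-l)*l/c)^(-(n0-1)/2)."""
    total = 0.0
    for l in range(1, c + 1):
        coef = (-1.0) ** (l + 1) * (1.0 - 0.5 * (l == c))
        total += coef * (1.0 + 2.0 * eta * (2 * c - l) * l / c) ** (-(n0 - 1) / 2.0)
    return total

def eta_constant(alpha, k, n0, c=1):
    """Solve g(eta) = alpha/(k-1) for eta > 0."""
    target = alpha / (k - 1)
    return brentq(lambda e: g(e, c, n0) - target, 1e-10, 1e6)

if __name__ == "__main__":
    alpha, k, n0 = 0.05, 10, 20
    eta_num = eta_constant(alpha, k, n0, c=1)
    eta_closed = 0.5 * ((2.0 * alpha / (k - 1)) ** (-2.0 / (n0 - 1)) - 1.0)
    print(eta_num, eta_closed)        # the two values should agree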
To prove the validity of the procedure we will need the following lemmas from Fabian
(1974) and Tamhane (1977):
Lemma 1 (Fabian 1974): Let Z₁, Z₂, ... be i.i.d. N(Δ, 1) random variables with Δ > 0, and let S(n) = Σ_{j=1}^{n} Z_j. Let L(n) = −a + λn and U(n) = a − λn for some a > 0 and λ ≥ 0, let R(n) denote the interval (L(n), U(n)), and let T = min{n : S(n) ∉ R(n)} be the first time the partial sum S(n) does not fall in the triangular region defined by R(n). Finally, let E be the event {S(T) ≤ L(T)}. If λ = Δ/(2c) for some positive integer c, then

Pr{E} ≤ Σ_{l=1}^{c} (−1)^{l+1} ( 1 − ½ I(l = c) ) exp{ −2aλ(2c − l)l }.
Remark: In our proof that the fully sequential procedure provides the stated correct selection
guarantee, the event E will correspond to an incorrect selection (incorrectly eliminating
the best system from consideration).
Lemma 2 (Tamhane 1977): Let V₁, V₂, ..., V_k be independent random variables, and let g_j(v₁, v₂, ..., v_k), j = 1, 2, ..., p, be nonnegative, real-valued functions, each one nondecreasing in each of its arguments. Then

E[ Π_{j=1}^{p} g_j(V₁, V₂, ..., V_k) ] ≥ Π_{j=1}^{p} E[ g_j(V₁, V₂, ..., V_k) ].
Without loss of generality, suppose that the true means of the systems are indexed so that μ_k ≥ μ_{k−1} ≥ ... ≥ μ_1, and let X_j = (X_{1j}, X_{2j}, ..., X_{kj}) be a vector of observations across all k systems.

Theorem 1: If X₁, X₂, ... are distributed i.i.d. multivariate normal with unknown mean vector μ and unknown, positive definite covariance matrix Σ, then with probability at least 1 − α the fully sequential indifference-zone procedure selects system k provided μ_k − μ_{k−1} ≥ δ.
Proof: We begin by considering the case of only 2 systems, denoted k and i, with μ_k ≥ μ_i + δ. Select a value of η such that g(η) = α/(k − 1). Let

T = min{ r : r ≥ n₀ and either the condition X̄_k(r) ≥ X̄_i(r) − W_ik(r) is violated or r = N_i + 1 }.

Notice that T is the stage at which the procedure terminates. Let ICS denote the event that an incorrect selection is made at time T. An incorrect selection can occur only if the partial sum S(r) = Σ_{j=1}^{r} (X_kj − X_ij) exits the continuation region R(r) = ( −(a − λr), a − λr ) through the lower boundary, where a = η(n₀ − 1)S²_ik/δ and λ = δ/(2c). Therefore

Pr{ICS} ≤ Pr{ S(T) ≤ −(a − λT) } ≤ Pr{ S(T) ≤ −(a − λT) | SC },

where σ²_ik = Var[X_kj − X_ij] and "SC" denotes the slippage configuration μ_k = μ_i + δ; the second inequality holds because reducing the drift of S(r) to its smallest allowed value δ can only increase the probability of an exit through the lower boundary.

Notice that under the SC, (X_kj − X_ij)/σ_ik are i.i.d. N(Δ, 1) with Δ = δ/σ_ik. In Lemma 1, let a = η(n₀ − 1)S²_ik/(δσ_ik) (the standardized intercept) and λ = Δ/(2c). Therefore, the lemma implies that, conditional on S²_ik,

Pr{ICS | S²_ik} ≤ Σ_{l=1}^{c} (−1)^{l+1} ( 1 − ½ I(l = c) ) exp{ −(η/c)(2c − l)l (n₀ − 1) S²_ik / σ²_ik }.   (4)

But observe that

2aλ(2c − l)l = (η/c)(2c − l)l × (n₀ − 1)S²_ik/σ²_ik,

and (n₀ − 1)S²_ik/σ²_ik has a chi-squared distribution with n₀ − 1 degrees of freedom. To evaluate the expectation, recall that E[exp{tχ²_ν}] = (1 − 2t)^{−ν/2} for t < 1/2, where χ²_ν denotes a chi-squared random variable with ν degrees of freedom. Thus, the expected value of (4) is

Σ_{l=1}^{c} (−1)^{l+1} ( 1 − ½ I(l = c) ) [ 1 + 2η(2c − l)l/c ]^{−(n₀−1)/2} = α/(k − 1),

where the equality follows from the way we choose η.

Thus, we have a bound on the probability of an incorrect selection when there are two systems. Now consider k ≥ 2 systems, and let ICS_i be the event that an incorrect selection is made when systems k and i are considered in isolation. Then

Pr{ICS} ≤ Σ_{i=1}^{k−1} Pr{ICS_i} ≤ (k − 1) · α/(k − 1) = α,

and the result is proven.
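The probability-of-correct-selection guarantee is easy to check empirically. The short simulation below runs the procedure many times on a slippage configuration and reports the observed fraction of correct selections; it reuses the hypothetical kn_procedure and eta_constant sketches given earlier, so it is an illustration rather than the experiment reported in Section 4.

import numpy as np

# assumes kn_procedure() and eta_constant() from the sketches above
k, n0, alpha, c = 5, 20, 0.05, 1
delta, sigma = 1.0, 2.0
mu = np.array([delta] + [0.0] * (k - 1))          # system 0 is best by exactly delta
eta = eta_constant(alpha, k, n0, c=c)
rng = np.random.default_rng(1)

def sample(i, n):                                  # independent systems (no CRN)
    return rng.normal(mu[i], sigma, size=n)

reps, hits = 1000, 0
for _ in range(reps):
    best, _ = kn_procedure(sample, k, delta, eta, c=c, n0=n0)
    hits += (best == 0)
print("observed PCS:", hits / reps, " nominal:", 1 - alpha)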
Remark: This procedure is valid with or without the use of common random numbers, since the effect of CRN is to change (ideally reduce) the value of σ²_ik, which is not important in the proof. Notice that reducing σ²_ik will tend to reduce S²_ik, which narrows (by decreasing a) and shortens (by decreasing N_i) the continuation region R(n). Thus, CRN should allow alternatives to be eliminated earlier in the sampling process.
Corollary 1: If μ_k − μ_{k−1} < δ, then with probability at least 1 − α the fully sequential indifference-zone procedure selects one of the systems whose mean is within δ of μ_k.

Proof: If μ_k − μ_1 < δ, then the result is trivially true, since any selected system constitutes a correct selection. Otherwise, suppose there exists t > 1 such that μ_k − μ_i ≥ δ for i = 1, 2, ..., t − 1 and μ_k − μ_i < δ for i = t, ..., k − 1. Then

Pr{select a "good" system} ≥ Pr{systems 1, 2, ..., t − 1 are eliminated, or system k is selected from among the systems still in contention} ≥ 1 − α,

by applying the argument in the proof of Theorem 1 to the pairs (k, i), i = 1, 2, ..., t − 1.
If we know that the systems will be simulated independently (no CRN), then it is possible to reduce the value of η somewhat using an approach similar to Hartmann (1991); the smaller η is, the more quickly the procedure terminates. In this case we set g(η) = 1 − (1 − α)^{1/(k−1)} rather than α/(k − 1), but otherwise leave the procedure unchanged. It is not difficult to show that 1 − (1 − α)^{1/(k−1)} ≥ α/(k − 1) for k ≥ 2, and that, as a consequence of Theorem 2.3 of Fabian (1974), g^{−1}(β) is a nonincreasing function of β.
Theorem 2: Under the same assumptions as Theorem 1, except that Σ is a diagonal matrix, the fully sequential indifference-zone procedure, with η chosen as above, selects system k with probability at least 1 − α provided μ_k − μ_{k−1} ≥ δ.

Proof: Let CS denote the event that a correct selection is made, and CS_i the event that a correct selection is made if system i faces system k in isolation. Then

Pr{CS} ≥ Pr{ ∩_{i=1}^{k−1} CS_i },

because the intersection event requires system k to eliminate each inferior system i individually, whereas in reality some system ℓ ≠ k could eliminate i. Thus,

Pr{CS} ≥ E[ Pr{ ∩_{i=1}^{k−1} CS_i | X_{kj}, j ≥ 1; S²_ik, i ≠ k } ] = E[ Π_{i=1}^{k−1} Pr{ CS_i | X_{kj}, j ≥ 1; S²_ik } ],   (5)

where the last equality follows because the events are conditionally independent. Clearly, (5) does not increase if we assume the slippage configuration, so we do so from here on. Now notice that Pr{CS_i | X_{kj}, j ≥ 1; S²_ik} is nondecreasing in the X_{kj} and in S²_ik. Therefore, by Lemma 2,

E[ Π_{i=1}^{k−1} Pr{ CS_i | X_{kj}; S²_ik } ] ≥ Π_{i=1}^{k−1} Pr{CS_i} ≥ [ (1 − α)^{1/(k−1)} ]^{k−1} = 1 − α,

where the last inequality follows from the proof of Theorem 1.
Remark: A corollary analogous to Corollary 1 can easily be proven in this case as well.
Key to the development of the fully sequential procedure is Fabian's (1974) result that
allows us to control the chance that we incorrectly eliminate the best system, system k,
when the partial sum process
wanders below 0. Fabian's analysis is based
on linking this partial sum process with a corresponding Brownian motion process. We are
aware of at least one other large-deviation type result that could be used for this purpose
(Robbins (1970)), and in fact is used by Swanepoel and Geertsema (1976) to derive a fully
sequential procedure. However, it is easy to show that Robbins' result leads to a continuation
region that is, in expected value, much larger than Fabian's region, implying a less efficient
procedure.
3 Design of the Procedure
In this section we examine factors that the experimenter can control in customizing the fully
sequential procedure for their problem. Specifically, we look at the choice of c, whether or
not to use CRN, and the effect of batch size. We conclude that c = 1 is typically a good
choice, CRN should almost always be employed, but the best batch size depends on the
relative cost of stages of sampling versus individual observations.
3.1 Choice of c
Fabian's result defines a continuation region for the partial sum process S(r) = Σ_{j=1}^{r} (X_kj − X_ij). Provided c < ∞, this region is a triangle, and as long as the partial sum stays within this
triangle sampling continues. As c increases the triangle becomes longer, but narrower, and
in the limit becomes an open rectangle. Figure 1 shows the continuation region for our
procedure.
The type of region that is best for a particular problem will depend on characteristics of
the problem: If there is one dominant system and the others are grossly inferior, then having
the region as narrow as possible is advantageous since the inferior systems will be eliminated
quickly. However, if there are a number of very close competitors so that sampling is likely to
continue to the end stage, then a short, fat region is desirable. Of course, the experimenter
is unlikely to know such things in advance.
To compare various values of c, we propose looking at the area of the continuation region
they imply. The smallest area results from the best combination of small base-implying
that clearly inferior systems can be eliminated early in the experiment-and short height-implying reasonable termination of the procedure if it goes to the last possible stage.

[Figure 1: Continuation region for the fully sequential, indifference-zone procedure when c < ∞.]

Using area as the metric immediately rules out c = ∞, since the area is infinite. For c < ∞ the area of the continuation region is (ignoring rounding)

a²/λ = 2cη²(n₀ − 1)² S⁴_ik / δ³,

or simply cη² in units of 2(n₀ − 1)² S⁴_ik/δ³. Thus, for fixed n₀, the key quantity is cη². Below we provide numerical results that strongly suggest that c = 1 minimizes cη² for all k.
Table 1 lists the value of cη² for a range of values of c (completely analogous results are obtained for other parameter settings). In all of these cases c = 1 gives the smallest area. Therefore, when the experimenter has no idea if there are a few dominant systems or a number of close competitors, the choice of c = 1 appears to be a good compromise solution. We set c = 1 for the remainder of the paper.
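Because no closed form for η is available when c > 1, the area comparison is easiest to reproduce numerically. The short sketch below tabulates cη² for a few values of c, using the hypothetical eta_constant helper from Section 2; in the settings we tried, c = 1 gave the smallest value, consistent with the discussion above.

# assumes the hypothetical eta_constant() sketch from Section 2
alpha, k, n0 = 0.05, 10, 20
for c in (1, 2, 3, 4, 5):
    eta = eta_constant(alpha, k, n0, c=c)
    print(c, eta, c * eta**2)   # c*eta^2 is the area in units of 2(n0-1)^2 S^4 / delta^3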
3.2 Common Random Numbers
As described in Section 2, the choice to use or not to use CRN alters the fully sequential
procedure only through the parameter j, which is smaller if we simulate the systems independently
(no CRN). A smaller value of j tends to make the procedure more efficient.
However, if we use CRN then we expect the value of S 2
i' to be smaller, which also tends to
make the procedure more efficient. In this section we show that even a small decrease in S 2
due to the use of CRN is enough to offset the increase in j that we incur.
Recall that the parameter η is the solution of the equation g(η) = α/(k − 1) if we use CRN, while g(η) = 1 − (1 − α)^{1/(k−1)} if we know the systems are simulated independently. Let η_C be the solution when CRN is employed, and let η_I be the solution if the systems are simulated independently.

For c = 1 it is easy to show that g(η) is nonincreasing in η > 0; in fact, g(η) is decreasing. Further, α/(k − 1) is less than or equal to 1 − (1 − α)^{1/(k−1)}, with equality holding only when k = 2. These two facts imply that η_C ≥ η_I. Below we derive a bound on η_C/η_I for practically useful values of k, n₀ and α. Ignoring rounding,

η_C / η_I = [ (2α/(k − 1))^{−2/(n₀−1)} − 1 ] / [ ( 2[1 − (1 − α)^{1/(k−1)}] )^{−2/(n₀−1)} − 1 ] ≡ Ψ.   (6)

The ratio Ψ is a function of n₀, α and k, and we are interested in finding an upper bound over 2 ≤ k ≤ 100 and practically relevant values of n₀ and α, the range of parameters we consider to be of practical importance.
To accomplish this, we evaluated ∂Ψ/∂n₀ for all k in the range of interest and at a narrow grid of α values (including the standard 0.10, 0.05 and 0.01 values). For this range, ∂Ψ/∂n₀ is always less than zero; therefore, Ψ is a decreasing function of n₀ ≥ 2. This implies that we need to consider only the smallest value of n₀ to find an upper bound on (6). After setting n₀ to the smallest value of interest to us, we observed that (6) is an increasing function of α by evaluating ∂Ψ/∂α for each k in the range of interest. Thus, the largest α, which is 0.1, should be chosen to find an upper bound (7) that depends only on k.

[Table 1: Area cη² in units of 2(n₀ − 1)² S⁴_ik/δ³; table entries not recovered.]
Now we have only one variable remaining, k, and by evaluating (7) for all k in the range of interest we find that the largest value is 1.01845. Therefore, we conclude that for the practical range of interest η_C/η_I ≤ 1.02. That is, η_C is at most 1.02 times η_I. To relate this ratio to the potential benefits of using CRN, we consider two performance measures: the expected maximum number of replications and the expected area of the continuation region.
Let N_C and N_I be the maximum number of replications, and let A_C and A_I be the area of the continuation region, with CRN and without CRN, respectively. To simplify the analysis, assume the variances across systems are all equal to σ², and that the correlation induced between systems by CRN is ρ > 0. Then, ignoring rounding,

E[N_C] / E[N_I] = [ 2cη_C(n₀ − 1) · 2σ²(1 − ρ)/δ² ] / [ 2cη_I(n₀ − 1) · 2σ²/δ² ] = (1 − ρ) η_C/η_I.   (8)

Equation (8) shows that CRN will reduce the expected maximum number of replications if (1 − ρ)η_C/η_I < 1, that is, if ρ > 1 − η_I/η_C, implying that ρ ≥ 0.02 is always sufficient.
Recall that the area of the continuation region for a pair of systems i, j is given by 2cη²(n₀ − 1)² S⁴_ij/δ³. Under the same assumptions we can show that

E[A_C] / E[A_I] = (η_C/η_I)² (1 − ρ)².

This again implies that ρ ≥ 0.02 is always sufficient for CRN to reduce the expected area of the continuation region. Therefore, for the range of parameters 2 ≤ k ≤ 100, we claim that achieving a positive correlation of at least ρ = 0.02 is sufficient to make the use of CRN superior to simulating the systems independently.
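The ratio η_C/η_I can be scanned numerically with a few lines of code (again reusing the hypothetical g() helper above). The choice n₀ = 10 below is an illustrative stand-in for the smallest first-stage sample size of practical interest; over this grid the ratio stays within about two percent of 1, in line with the bound above.

from scipy.optimize import brentq

# assumes g() from the eta_constant sketch; c = 1 throughout
def eta_for_target(target, n0):
    return brentq(lambda e: g(e, 1, n0) - target, 1e-12, 1e8)

worst, n0 = 0.0, 10                       # n0 = 10 is an assumed illustrative minimum
for k in range(2, 101):
    for alpha in (0.10, 0.05, 0.01):
        eta_C = eta_for_target(alpha / (k - 1), n0)
        eta_I = eta_for_target(1.0 - (1.0 - alpha) ** (1.0 / (k - 1)), n0)
        worst = max(worst, eta_C / eta_I)
print("largest eta_C / eta_I over this grid:", worst)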
3.3 The Effect of Batch Size
There are several reasons why an experimenter might want X ij , the jth observation from
system i, to be the mean of a batch of more basic observations:
• When the computing overhead for switching among simulated systems is high, it may be computationally efficient to obtain more than one observation from each system each time it is simulated.
• Even if the basic observations deviate significantly from the assumed normal distribution, batch means of these observations will be more nearly normal.
• If the simulation of system i involves only a single, incrementally extended, replication of a steady-state simulation, then the basic observations may deviate significantly from the assumed independence, while batch means of a sufficiently large number of basic observations may be nearly independent.
In this section we investigate the effect of batch size on the fully sequential procedure. To
facilitate the analysis we assume that the total number of basic observations obtained in the
first-stage of sampling, denoted, n raw
0 , is fixed, and that all systems use a common batch size
b. However, the procedure itself does not depend on a common batch size across systems,
only that the batch size remains fixed within each system. Throughout our analysis we use the c = 1 setting in the fully sequential procedure.
Let X_ij[b] denote the mean of a batch of b basic observations, so that X_ij[1] is a basic observation. Let S²_iℓ[b] be the sample variance of the difference between the batch means from systems i and ℓ, and let n₀ = n₀^raw/b denote the number of batch means X_ij[b] required in the first stage of sampling. For the purpose of analysis we assume that n₀ is always an integer and that each basic observation X_ij from system i is simulated independently and follows a normal distribution with mean μ_i and common unknown variance σ² (recall that common variance is not an assumption of our procedure). Then X_ij[b] is normally distributed with mean μ_i and variance σ²/b, and E[S²_iℓ[b]] = 2σ²/b.

Under these conditions, if we ignore rounding,

E[N_i[b]] = E[ max_{ℓ ≠ i} ⌊ h²(n₀) S²_iℓ[b] / δ² ⌋ ],   (11)

where we write h²(n₀) = 2cη(n₀)(n₀ − 1) to emphasize the dependence on the number of first-stage batch means. As mentioned in Section 2, (n₀ − 1)S²_iℓ[b]/(2σ²/b) has a chi-squared distribution with n₀ − 1 degrees of freedom, so the expected maximum number of stages involves the maximum of k − 1 identically distributed, but dependent, chi-squared variables. However, simulation analysis of several cases revealed that the effect of this dependence on the expected value is weak, so for the purpose of analysis we use the approximation

E[N_i[b]] ≈ ( h²(n₀)/δ² ) · ( 2σ²/(b(n₀ − 1)) ) · ∫₀^∞ x (k − 1) f(x) F(x)^{k−2} dx,   (12)

where f and F are the density and cdf, respectively, of the chi-square distribution with n₀ − 1 degrees of freedom. In other words, we treat the S²_iℓ[b], ℓ ≠ i, as if they are independent.
Expression (12) is a function of k and n₀, while η is a function of k, n₀ and α. If we assume that n₀^raw is given, then for any fixed k and 1 − α the expected maximum number of stages (11) depends only on the initial number of batches, n₀, provided we express it in units of (σ/δ)². Unfortunately, there is no closed-form expression for (12), but we can evaluate it numerically.

[Figure 2: E[N_i] as a function of the number of batches n₀ when 1 − α = 0.95 and the number of basic observations n₀^raw is fixed. For each value of k the result is normalized by dividing by a reference value.]
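The following sketch evaluates the expected maximum of k − 1 independent chi-square variates by numerical integration, which is the core quantity in approximation (12) as reconstructed here, and scales it into an approximate E[N_i]. The helper names and the exact scaling are our assumptions.

import numpy as np
from scipy import stats
from scipy.integrate import quad

def expected_max_chi2(k, n0):
    """E[max of (k-1) independent chi-square(n0-1) variables] via numerical integration."""
    df = n0 - 1
    f = lambda x: x * (k - 1) * stats.chi2.pdf(x, df) * stats.chi2.cdf(x, df) ** (k - 2)
    val, _ = quad(f, 0, np.inf)
    return val

def approx_E_stages(k, n0, alpha, delta, sigma2, b, c=1):
    """Approximate E[N_i] for batch size b and n0 = n0_raw / b first-stage batches."""
    eta = eta_constant(alpha, k, n0, c=c)          # hypothetical helper from Section 2
    h2 = 2.0 * c * eta * (n0 - 1)
    return h2 * (2.0 * sigma2 / b) / ((n0 - 1) * delta**2) * expected_max_chi2(k, n0)

n0_raw, k, alpha, delta, sigma2 = 24, 10, 0.05, 1.0, 1.0
for b in (1, 2, 3):
    print(b, approx_E_stages(k, n0_raw // b, alpha, delta, sigma2, b))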
Figure 2 shows E[N_i] as a function of the number of batches n₀ for different values of k. To make the plot easier to view, the E[N_i] for each value of k is divided by a reference value. This figure shows that the expected maximum number of stages decreases, then increases, as a function of the number of batches in the first stage when n₀^raw is held fixed.

[Figure 3: Percentage decrease in E[bN_i] as a function of the number of batches n₀ when 1 − α = 0.95 and the number of basic observations n₀^raw is fixed.]
The savings in the beginning are caused by increasing the degrees of freedom. However,
after n 0 passes some point there is no further benefit from increased degrees of freedom, so
the effect of increasing n 0 (dividing the output into smaller batches, eventually leading to a
batch size of 1) is simply to increase the number of stages. The expected maximum number
of stages, E[N i ], is important when the computing overhead for switching among systems is
substantial.
Figure 3 shows the percentage decrease in E[bN_i], the expected maximum number of basic (unbatched) observations, as the number of batches (but not the number of basic first-stage observations n₀^raw) is increased. The expected maximum number of basic observations is important when the cost of obtaining basic observations dominates the cost of multiple stages of sampling. As the figure shows, this quantity is minimized by using a batch size of b = 1 (that is, no batching), but the figure also shows that once we obtain, say, 15 to 20 batches, there is little potential reduction in E[bN_i] from increasing the number of batches (decreasing the batch size) further.
For a fixed number of basic observations, the choice of number of batches n 0 should be
made considering the two criteria, E[N i ] and E[bN i ]. If the cost of switching among systems
dominates, then a small number (5 to 10) batches will tend to minimize the number of stages.
When the cost of obtaining each basic observation dominates, as it often will, then from 15
to 20 batch means are desirable at the first stage; of course, if neither nonnormality nor
dependence is a problem then a batch size of 1 will be best in this case.
4 Experiments
In this section we summarize the results of experiments performed to compare the following
procedures:
1. Rinott's (1978) procedure (RP), a two-stage indifference-zone selection procedure that
makes no attempt to eliminate systems prior to the second (and last) stage of sampling.
2. A two-stage screen-and-select procedure (2SP) proposed by Nelson, Swann, Goldsman
and Song (1998) that uses subset selection (at confidence level 1 \Gamma ff=2) to eliminate systems
after the first stage of sampling, and then applies Rinott's second-stage sampling
rule (at confidence level 1 \Gamma ff=2) to the survivors.
3. The fully sequential procedure (FSP) proposed in Section 2, both with and without
CRN (recall that the two versions differ only in the value of j used).
The systems were represented by various configurations of k normal distributions; in all
cases system 1 was the true best (had the largest true mean). We evaluated each procedure
on different variations of the systems, examining factors including the number of systems,
batch size, b; the correlation between systems, ae; the true means, and the
true variances, oe 2
k . The configurations, the experiment design, and the results are
described below.
4.1 Configurations and Experiment Design
To allow for several different batch sizes, we chose the first-stage sample size to be n₀^raw = 24, making batch sizes of b = 1, 2 and 3 possible. Thus, n₀ (the number of first-stage batch means) was 24, 12 or 8, respectively. The number of systems in each experiment was varied over several values of k. The indifference zone, δ, was set to δ = σ₁/√(n₀^raw), where σ²₁ is the variance of an observation from the best system. Thus, δ is the standard deviation of the first-stage sample mean of the best system.
Two configurations of the true means were used: the slippage configuration (SC), in which μ₁ was set to δ, while μ₂ = μ₃ = ... = μ_k = 0. This is a difficult configuration for procedures that try to eliminate systems because all of the inferior systems are close to the best. To investigate the effectiveness of the procedures in eliminating non-competitive systems, monotone decreasing means (MDM) were also used. In the MDM configuration, the means of all systems were spaced evenly apart, with the spacing between adjacent means set (effectively) to 2δ, δ or δ/3.
For each configuration of the means we examined the effect of both equal and unequal
variances. In the equal-variance configuration σ_i was set to 1. In the unequal-variance configuration the variance of the best system was set both higher and lower than the variances of the other systems. In the MDM configurations, experiments were run with the variance directly proportional to the mean of each system, and inversely proportional to the mean of each system; specifically, σ²_i was chosen to grow as μ_i decreases to examine the effect of increasing variance as the mean decreases, and to shrink as μ_i decreases to examine the effect of decreasing variances as
the mean decreases. In addition, some experiments were run with means in the SC, but with
the variances of all systems either monotonically decreasing or monotonically increasing as
in the MDM configuration.
When CRN was employed we assumed that the correlation between all pairs of systems
was ρ, and values of ρ = 0.02, 0.25, 0.5, 0.75 were tested. Recall that ρ = 0.02 is the lower bound on correlation that we determined is necessary to ensure that the FSP with CRN is
at least as efficient as the FSP without CRN.
Thus, we had a total of six configurations: SC with equal variances, MDM with equal variances, MDM with increasing variances, MDM with decreasing variances, SC with increasing variances and SC with decreasing variances. For each configuration, 500 macroreplications (complete repetitions) of the entire experiment were performed. In all experiments, the nominal probability of correct selection was set at 1 − α = 0.95. To compare the performance of the procedures we recorded the total number of basic (unbatched) observations required by each procedure, and the total number of stages (when data are normally distributed all of the procedures achieve the nominal probability of a correct selection).
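To make the experiment design concrete, the sketch below generates one macroreplication's worth of output for the SC and MDM configurations, with CRN mimicked by a compound-symmetric correlation ρ across systems. It follows the description above where the text is explicit (n₀^raw = 24, δ equal to the standard deviation of a first-stage sample mean of the best system); the remaining parameter choices and function names are illustrative assumptions.

import numpy as np

def make_config(k, config="SC", spacing=1.0, sigma2=None, n0_raw=24):
    """Means and variances for the slippage (SC) or monotone-decreasing-means (MDM) setup."""
    sigma2 = np.ones(k) if sigma2 is None else np.asarray(sigma2, dtype=float)
    delta = np.sqrt(sigma2[0] / n0_raw)        # std. dev. of the best system's first-stage mean
    if config == "SC":
        mu = np.r_[delta, np.zeros(k - 1)]     # system 1 best by exactly delta
    else:                                      # MDM: evenly spaced means below the best
        mu = delta - np.arange(k) * spacing * delta
    return mu, sigma2, delta

def draw_outputs(mu, sigma2, n, rho=0.0, rng=None):
    """n output vectors across the k systems; rho > 0 mimics common random numbers."""
    rng = rng or np.random.default_rng()
    k, sd = len(mu), np.sqrt(sigma2)
    corr = np.full((k, k), rho) + (1.0 - rho) * np.eye(k)
    return rng.multivariate_normal(mu, corr * np.outer(sd, sd), size=n)   # shape (n, k)

mu, sigma2, delta = make_config(k=10, config="MDM", spacing=1.0)
X = draw_outputs(mu, sigma2, n=24, rho=0.25, rng=np.random.default_rng(7))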
4.2 Summary of Results
The experiments showed that the FSP is superior to the other procedures across all of the
configurations we examined. Under difficult configurations, such as the SC with increasing
variances, the FSP's superiority relative to RP and 2SP was more noticeable as the number
of systems increased.
As we saw in the Figure 3, the expected maximum number of basic observations that
the FSP might take from system i, E[bN i ], increases as batch size increases (number of
batches decreases); this was born out in the experiments as the total actual number of
observations taken also increased as batch size b increased. However, the total number of
basic observations increased more slowly for the FSP than for RP or 2SP as batch size
increased. The number of stages behaved as anticipated from our analysis of the expected
maximum number of stages: first decreasing, then increasing as the number of batches
increases.
Finally, and not unexpectedly, in the MDM configuration wider spacing between the true
means made both 2SP and FSP work better (eliminate more systems earlier) than they did
otherwise.
4.3 Some Specific Results
Instead of presenting comprehensive results from such a large simulation study, we present
selected results that emphasize the key conclusions.
4.3.1 Effect of Number of Systems
In our experiments the FSP outperformed all of the other procedures under every config-
uration; see Table 2 and Table 3 for an illustration. Reductions of more than 50% in the
number of basic observations, as compared to RP and 2SP, were obtained in most cases.

[Table 2: Total number of basic (unbatched) observations for a given number of systems and spacing between the means, as a function of batch size b and induced correlation ρ, under the MDM and SC configurations with increasing and decreasing variances; table entries not recovered.]

[Table 3: Total number of basic (unbatched) observations for a different number of systems, as a function of batch size b and induced correlation ρ, under the same configurations; table entries not recovered.]

As
the number of systems increased under difficult configurations-such as MDM with increasing
variances or SC with increasing variances-the benefit of the FSP relative to RP and
2SP is even greater.
4.3.2 Effect of Batch Size
Results in Section 3.3 suggest that the total number of basic observations should be an
increasing function of the batch size b (a decreasing function of the number of batches n raw
while the number of stages should decrease, then increase in b. Tables 2-3 show empirical
results for the total number of basic observations, while Table 4 shows the total number of
stages, for different values of b. As expected, the total number of basic observations for the
FSP (as well as for RP and 2SP) is always increasing in b, with the incremental increase
becoming larger as b increases. On the other hand, the number of stages is usually minimized
at an intermediate batch size, as shown in Table 4. The number of stages is always 1 or 2 for RP and 2SP.
The total number of basic observations taken by RP and 2SP is more sensitive to the
batch size than FSP is; see Table 3, for example. Under MDM with increasing variances,
the total number of basic observations taken by 2SP when b = 3 was more than four times larger than when b = 1. However, for the FSP the number of basic observations was only about 1.5 times larger when moving from b = 1 to b = 3. This effect becomes even more pronounced as the number of systems becomes larger.
4.3.3 Effect of Correlation
Results in Section 3.2 suggest that positive correlation larger than 0:02 is sufficient for the
FSP with CRN to outperform the FSP assuming independence. As shown in the empirical
results in Table 5, the FSP under independence is essentially equivalent to the FSP under CRN with ρ = 0.02 in terms of the number of basic observations. Of course, a larger positive
correlation makes the FSP even more efficient, and this holds across all of the configurations that were used in our experiments.

[Table 4: Total number of stages for a given number of systems and spacing of the means, as a function of batch size b and induced correlation ρ; table entries largely not recovered.]

[Table 5: Total number of basic (unbatched) observations for the FSP for a given number of systems, spacing of the means and batch size, as a function of the correlation ρ; table entries not recovered.]

[Table 6: Total number of basic (unbatched) observations for the FSP for a given number of systems and batch size when the systems are simulated independently, as a function of the spacing of the means; table entries not recovered.]
4.3.4 Effect of Spacing
In our experiments the spacing between means was controlled by a spacing parameter, with smaller parameter values giving larger spacing. Larger spacing makes it easier for any procedure that screens or eliminates systems to remove inferior systems. Table 6 shows that, in most cases, the total number of basic observations for the FSP decreases as the spacing increases. The exception is the
SC with increasing variances where the FSP actually does worse with wider spacing of the
means (this happened for all values of k and b, and for 2SP as well as FSP). A similar pattern
emerged for the total number of stages.
To explain the counterintuitive results for the SC with increasing variances, recall that
in this configuration all inferior systems have the same true mean, but the variances are
assigned as in the MDM configuration with increasing variances; that is, the variance of the
ith system is set to the value σ²_i it would take in the MDM configuration. Therefore, larger spacing of the means implies variances that increase much
faster. Thus, in this example the effect of increasing the variances of the inferior systems is
greater than the effect of spacing the means farther apart. This is consistent with what we
have seen in other studies: inferior systems with large variances provide difficult cases for
elimination procedures.
5 Conclusions
In this paper we presented a fully sequential, indifference-zone selection procedure that allows
for unequal variances, batching and common random numbers. As we discussed in Section 4,
the procedure is uniformly superior to two existing procedures across all the scenarios we
examined, and it is significantly more efficient when the number of systems is large or the
correlation induced via CRN is large. One advantage of the FSP is that it is easy to account
for the effect of CRN, which is not true of 2SP, for instance (see Nelson, Swann, Goldsman
and Song 1998 for a discussion of this point).
The results in this paper suggest several possibilities for improving the FSP. One is to
search for a tighter continuation region than the triangular one suggested by Fabian's lemma.
A tighter region would seem to be possible since our estimates of the true probability of
correct selection for the FSP (not reported here) show that it is typically greater than the
nominal level 1 − α.
Although we did consider the effect of batching, our results are most relevant for the
situation in which we batch to reduce the number of stages or to improve the approximation
of normality, rather than to mitigate the dependence in a single replication of a steady-state
simulation. The effect of such dependence on the performance of the FSP should be
investigated before we would recommend routine use in steady-state simulation experiments
that employ a single replication from each system.
References
A comparison of the performances of procedures for selecting the normal population having the largest mean when the populations have a common unknown variance.
Design and Analysis for Statistical Selection
Using ranking and selection to clean up after a simulation search.
Note on Anderson's sequential procedures with triangular boundary.
Comparing systems via simulation.
Statistical screening
An improvement on Paulson's sequential ranking procedure.
An improvement on Paulson's procedure for selecting the population with the largest mean from k normal populations with a common unknown variance.
Multiple Comparison Procedures.
Multiple comparisons: Theory and methods.
Asymptotically optimal procedures for sequential adaptive selection of the best of several normal means.
Getting more from the data in a multinomial selection problem.
Efficient multinomial selection in simulation.
Comparing simulated systems based on the probability of being the best.
Comparisons with a standard in simulation experiments.
Simple procedures for selecting the best simulated system when the number of alternatives is large.
A sequential procedure for selecting the population with the largest mean from k normal populations.
Communications in Statistics
Statistical methods related to the law of the iterated logarithm.
Sequential procedures with elimination for selecting the best of k normal populations.
Multiple comparisons in model I one-way anova with unequal variances
Keywords: variance reduction; multiple comparisons; ranking and selection; output analysis
Data mining criteria for tree-based regression and classification

Abstract: This paper is concerned with the construction of regression and classification trees that are more adapted to data mining applications than conventional trees. To this end, we propose new splitting criteria for growing trees. Conventional splitting criteria attempt to perform well on both sides of a split by attempting a compromise in the quality of fit between the left and the right side. By contrast, we adopt a data mining point of view by proposing criteria that search for interesting subsets of the data, as opposed to modeling all of the data equally well. The new criteria do not split based on a compromise between the left and the right bucket; they effectively pick the more interesting bucket and ignore the other. As expected, the result is often a simpler characterization of interesting subsets of the data. Less expected is that the new criteria often yield whole trees that provide more interpretable data descriptions. Surprisingly, it is a "flaw" that works to their advantage: The new criteria have an increased tendency to accept splits near the boundaries of the predictor ranges. This so-called "end-cut problem" leads to the repeated peeling of small layers of data and results in very unbalanced but highly expressive and interpretable trees.

1 Introduction
Tree methods can be applied to two kinds of predictive problems. Regression trees
are used to predict a continuous response, while classification trees are used to
predict a class label.
The goal of tree methods is to partition data into small buckets such that a
response value (regression tree) or a class label (classification tree) can be well
predicted in each subset. Figure 1 shows partitions in a regression tree in the
left panel and a classification tree in the right panel. In the left panel, data are
partitioned into two subsets in order to explain the response (vertical axis) as best
as possible with means in the subsets. In the right panel, data are partitioned into
two subsets in order to explain the class labels "1","2","3" as best as possible.
Andreas Buja is Technology Consultant, AT&T Labs - Research, 180 Park Ave, P.O. Box 971,
Florham Park, NJ 07932-0971.
andreas@research.att.com, http://www.research.att.com/~andreas/
Yung-Seop Lee is graduate student, Department of Statistics, Rutgers University, Hill Center
for the Mathematical Sciences - Busch Campus, Piscataway, NJ 08855.
[Figure 1: Partitions in a regression tree (left panel) and a classification tree (right panel).]
1.1 Structure of trees
Data are repeatedly partitioned into even smaller subsets by means of binary splits.
Thus a tree can be described by a diagram such as Figure 2 (a). Each node represents
a subset of data, each pair of daughter nodes represents a binary split of the subset
corresponding to the parent node.

[Figure 2: Two-predictor example: (a) tree diagram, (b) induced partition of the predictor space.]
Standard trees partition the data with binary splits along predictor variables; that is, the two daughter subsets of a parent subset are obtained by splitting a fixed variable at a fixed threshold value: for example, x^(1) < t_1 goes to the left daughter and x^(1) ≥ t_1 goes to the right daughter, and so on. Figure 2 (a) illustrates a tree with repeated splits, at x^(1) with respect to t_1, x^(2) with respect to t_2, and x^(1) again with respect to t_3. The geometry of the resulting partitions is illustrated in Figure 2 (b).
1.2 Tree construction: greedy optimization of splits
One repeatedly searches for the best possible split of a subset by searching all possible
threshold values for all variables.
Optimization is according to an impurity measure, which will be discussed in
detail in Sections 3 and 4. For regression trees, impurity can be measured by variants
of RSS, while for classification trees, impurity can be measured for example by
misclassification rate, entropy, or the Gini index.
One stops splitting a subset when no significant gain in impurity can be obtained.
The subset then forms a terminal node or "leaf" in the tree.
Since CART (Breiman et al., 1984), the sizing of trees is somewhat more complex:
One tends to grow first an overly large tree, and then to prune it back in a second
stage. The reason for this is the greediness of tree construction which does not look
ahead more than one step, and may hence miss out on successful splits further down
the line. Overgrowing and pruning trees may therefore find more optimal trees than
growing alone.
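As a concrete illustration of the greedy strategy, the following sketch grows a tree recursively by exhaustive search over variables and thresholds, with a pluggable impurity function so that the conventional and the data mining criteria discussed later can be swapped in. This is our own minimal sketch under simple stopping rules, not code from any particular tree package.

import numpy as np

def best_split(X, y, split_impurity):
    """Exhaustive search over all variables and thresholds; returns (impurity, variable, threshold)."""
    best = (np.inf, None, None)
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j])[1:]:                 # candidate thresholds
            left, right = y[X[:, j] < t], y[X[:, j] >= t]
            if len(left) == 0 or len(right) == 0:
                continue
            imp = split_impurity(left, right)
            if imp < best[0]:
                best = (imp, j, t)
    return best

def grow_tree(X, y, split_impurity, min_size=10, depth=0, max_depth=5):
    """Recursive greedy growth; returns a nested dict describing the tree."""
    if len(y) < 2 * min_size or depth >= max_depth:
        return {"leaf": True, "mean": float(np.mean(y)), "n": len(y)}
    imp, j, t = best_split(X, y, split_impurity)
    if j is None:
        return {"leaf": True, "mean": float(np.mean(y)), "n": len(y)}
    L, R = X[:, j] < t, X[:, j] >= t
    return {"leaf": False, "var": j, "threshold": float(t),
            "left": grow_tree(X[L], y[L], split_impurity, min_size, depth + 1, max_depth),
            "right": grow_tree(X[R], y[R], split_impurity, min_size, depth + 1, max_depth)}

# conventional CART-style criterion: size-weighted average of bucket variances
cart_impurity = lambda yl, yr: (len(yl) * np.var(yl) + len(yr) * np.var(yr)) / (len(yl) + len(yr))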
1.3 Splitting criteria for data mining
A tree growing strategy is specified by defining a measure of impurity for a split.
This is done by defining a measure of impurity for the left and the right bucket of
a split, and by combining these two measures into an overall impurity measure for
the split.
In conventional splitting criteria, the left and the right are combined by a
weighted sum of the two sides. In effect, one compromises between left and right
buckets by taking their sizes into account.
In our new data mining criteria, however, we are not interested in modeling all
of the data equally well; rather, we are content as long as we find subsets of the
data that are interesting. We therefore propose combining the impurities of the left
and right buckets of split in such a way that a split occurs when one bucket alone is
of low impurity, regardless of the other bucket. That is, low impurity of one bucket
alone causes a low value for the impurity measure of the split.
These data mining criteria for splits need to be developed for regression and
classification trees separately. Section 3 deals with regression, and Section 4 with
classification.
2 Data for Trees
2.1 When To Grow Trees, and When Not
Recently tree methods have been applied to many areas in data mining, sometimes
even with data that are not suitable for tree methods. In general, growing trees
is most useful when dependence on predictors is heterogeneous, complex and interactions
are present. If the dependency is heterogeneous or complex, multiple local
structures can be handled by tree methods, while other models can only handle
a single prominent structure. The strength of trees for detecting interactions was
effectively advertized by the names of the first tree methods: AID, for Automatic Interaction Detection (Morgan & Sonquist, 1963), and CHAID, for Chi-square-based AID (Hartigan, 1975).
Prediction accuracy of trees will suffer in the following situations:
ffl Regression trees may fail when a dose response is present. The term "dose re-
sponse", borrowed from biostatistics, refers to gradual monotone dependence;
the opposite is dependence with threshold effects.
ffl Classification trees may fail when the optimal decision boundaries are not
aligned in some way with the variable axes, and hence splits on variables are
not able to make full use of the information contained in the predictors.
Interpretability of trees, on the other hand, may suffer in the presence of highly
correlated predictors: Correlated predictors can substitute for each other and appear
in more or less random alternation, when in fact one of the predictors would suffice.
The effect is added complexity for the interpreting data analyst and, even worse, the
possibility of misleading conclusions when separate and distinct effects are attributed
to the predictors, even though they share the "effects" due to their correlations.
2.2 An Example of When Not to Grow Trees
The waveform data (Breiman et al., 1984, p. 49 and 134) is an example of unsuitable
data for tree growing. This data was generated artificially with 21-dimensional
predictors. It has three classes whose distribution in predictor space is analytically
described as follows:
where the numbers u and ffl m are independently distributed according to the uniform
distribution on [0; 1], and the Gaussian with zero mean and unit variance, respec-
tively. The three 21-dimensional vectors h are irrelevant, except
for the fact that they form an obliquely placed triangle with sides of length roughly
11.5, 11.5 and 16.5. The shape of the resulting pointcloud is that of three sausages
with their ends linked together. This is reflected in the projection onto the first two
principal components, shown in figure 3 .
The reason why tree methods are not successful in this data is that the optimal
classification boundaries are not aligned with any of the coordinate axes, as the
authors mention on p. 55. From the geometry of the data it is immediate that
linear discriminant analysis outperforms trees. Thus, before applying tree methods
blindly, one should perform a preliminary exploratory analysis to determine what
type of classification procedure will make best use of the available information in
the data.
[Figure 3: Distribution of Class of Waveform Data]
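For readers who want to reproduce this example, the sketch below generates waveform-style data from three shifted triangular base waveforms of height 6 on 21 coordinates. The exact waveform shapes and the class-to-pair assignment follow the common description of this benchmark and should be checked against Breiman et al. (1984) before serious use.

import numpy as np

def waveform_data(n, rng=None):
    """Generate n samples of 21-dimensional waveform-style data with labels in {1, 2, 3}."""
    rng = rng or np.random.default_rng()
    m = np.arange(1, 22)
    h1 = np.maximum(6 - np.abs(m - 11), 0)       # triangular base waveforms
    h2 = np.maximum(6 - np.abs(m - 15), 0)
    h3 = np.maximum(6 - np.abs(m - 7), 0)
    pairs = {1: (h1, h2), 2: (h1, h3), 3: (h2, h3)}   # assumed class-to-pair assignment
    labels = rng.integers(1, 4, size=n)
    u = rng.uniform(0, 1, size=n)
    eps = rng.normal(0, 1, size=(n, 21))
    X = np.empty((n, 21))
    for idx, cls in enumerate(labels):
        a, b = pairs[cls]
        X[idx] = u[idx] * a + (1 - u[idx]) * b + eps[idx]
    return X, labels

X, y = waveform_data(300, rng=np.random.default_rng(0))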
2.3 Uses of Trees
There are two principles which must be balanced in using tree growing algorithms: 1) accuracy for prediction, and 2) interpretation.
In prediction problems, one wants to grow trees which produce accurate classifiers
based on training samples and which generalize well to test samples.
Often, one also wants to grow interpretable trees, that is, trees whose nodes
represent potentially useful and meaningful subsets. For interpretation, the top of
a tree is usually the most valuable part because the top nodes can be described
using few conditions: In general, each step down a tree adds a new inequality to
the description of a node (for exceptions, see below). It is also helpful, although not
essential for interpretation, that the top nodes of a tree tend to be statistically less
variable than lower level nodes.
2.4 Simplicity and Interpretability of Trees
In the present work, we are concerned with the construction of methods that favor
interpretable trees. That is, we attempt to find methods that search for meaningful
nodes as close to the top of the tree as possible. By emphasizing the interpretability
of nodes near the top, we de-emphasize the precise calibration of the tree depth
for prediction: Stopping criteria for growing and pruning trees are of low priority
for our purposes because the interpreting data analyst can always ignore the lower
nodes of a tree that was grown too deeply. This is harmless as long as the analyst
does not interpret statistical flukes in the lower nodes. By de-emphasizing the lower
parts of trees, we may sacrifice some global accuracy for prediction.
A remark about simplicity and interpretability of trees is in order: Although
the literature has a bias in favor of balanced trees, balance and interpretability are
very different concepts: There exists a type of extremely unbalanced trees that are
highly interpretable, namely, those that use the same variable repeatedly down a
cascade of splits. See Figure 4 for an example. The simplicity of this tree stems
from the fact that all nodes can be described by one or two conditions, regardless
of tree depth.
Tree fragments similar to this example appear often in trees fitted to data that
exhibit dose response. In the example of Figure 4, it is apparent from the mean
values in the nodes that the response shows monotone increasing dependence on
the predictor x. If this kind of tree structure is found, a data analyst may want to
consider linear or additive models that describe the monotone dependence in a more
direct way. On the other hand, dose response may be present only locally, in which
case a tree may still be a more successful description of the data than a linear or
an additive model. Below we will illustrate with the Boston Housing data how our
new data mining criteria can uncover local dose response.
Our new criteria for splitting sometimes generate trees that are less balanced and
yet more interpretable than conventional balanced trees. This is not a contradiction,
as the preceding considerations show.
x < 7x < 4
Figure
4: An artificial example of a simple and interpretable yet unbalanced tree
3 Regression Trees
3.1 Conventional Criteria for Regression Trees
As mentioned in Section 1.1, trees are grown by recursively splitting the data based
on minimization of a splitting criterion.
Conventional splitting criteria are compromises of the impurities of left and right
buckets. The impurity measure of choice for buckets is simply the variance. The
compromise for the splitting criterion is a weighted average of the variances or a
function thereof in the left and right buckets.
We need some notation: the means and variances in the left and right buckets are denoted by

μ̂_L = (1/N_L) Σ_{n∈L} y_n,   σ̂²_L = (1/N_L) Σ_{n∈L} (y_n − μ̂_L)²,
μ̂_R = (1/N_R) Σ_{n∈R} y_n,   σ̂²_R = (1/N_R) Σ_{n∈R} (y_n − μ̂_R)².

The compromise for the split can be derived with maximum likelihood estimation in a combined Gaussian model for the two buckets: i.i.d. N(μ_L, σ²_L) in the left bucket and i.i.d. N(μ_R, σ²_R) in the right bucket. In CART, equal variances are assumed, but there is no necessity to this assumption. We give the criteria for both equal and non-equal variances. After minimizing the negative log-likelihood of these models, the following splitting criteria result:

equal variance model (CART):  crit_LR = (1/(N_L + N_R)) [ N_L σ̂²_L + N_R σ̂²_R ],
non-equal variance model:     crit_LR = (1/(N_L + N_R)) [ N_L log σ̂²_L + N_R log σ̂²_R ].
This makes precise the sense in which conventional criteria compromise between the
left and right bucket.
Minimization of the negative log-likelihood is straightforward, but here it is anyway: in the equal variance case, we have

min_{μ_L, μ_R, σ} [ −log likelihood(L, R) ] = ((N_L + N_R)/2) [ 1 + log(2π) + log σ̂²_LR ],

where σ̂²_LR = (N_L σ̂²_L + N_R σ̂²_R)/(N_L + N_R) is the pooled variance estimate. Thus, minimizing σ̂²_LR yields the same splits as minimizing the negative log-likelihood. For the non-equal variance case, we get

min_{μ_L, σ_L, μ_R, σ_R} [ −log likelihood(L, R) ] = (N_L/2) [ 1 + log(2π) + log σ̂²_L ] + (N_R/2) [ 1 + log(2π) + log σ̂²_R ],

where the constants can be dropped because they do not affect the minimization over all splits.
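In code, the two conventional criteria amount to the following short functions, written directly from the formulas as reconstructed above; the small floor on the variances, to avoid log 0 on constant buckets, is an implementation detail of ours. Either function can be passed as split_impurity to the greedy search sketched in Section 1.2.

import numpy as np

def crit_equal_var(y_left, y_right):
    """CART-style criterion: size-weighted average of bucket variances."""
    nL, nR = len(y_left), len(y_right)
    return (nL * np.var(y_left) + nR * np.var(y_right)) / (nL + nR)

def crit_unequal_var(y_left, y_right, floor=1e-12):
    """Unequal-variance criterion: size-weighted average of log bucket variances."""
    nL, nR = len(y_left), len(y_right)
    vL = max(np.var(y_left), floor)
    vR = max(np.var(y_right), floor)
    return (nL * np.log(vL) + nR * np.log(vR)) / (nL + nR)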
3.2 Data Mining Criteria for Regression Trees
Under some circumstances, conventional criteria for tree growing are unable to immediately
identify single interesting buckets, where "interesting" could mean buckets
with high mean value of the response, or high purity in terms of small variance of
the response. The kind of situations we have in mind are depicted in Figure 5. The
left hand plot shows a response that has a subset with small variance on the right,
while the right hand plot has a subset with extreme mean on the right. It is plain
that the CART criterion will not deal properly with the left hand situation as it
assumes equal variances. In the right hand situation CART will find a split in the
middle at around x = 300 whereas we may have an interest in finding the small
bucket with extremely high mean on the right.
Our approach is to identify pure buckets (σ̂² small) or extreme buckets (μ̂ extreme) as quickly as possible. For example, we will ignore the left bucket if the right side is interesting, that is, if σ̂²_R is very small or if μ̂_R is very high (or low). Thus we are not compromising
anymore between left and right buckets. In order to generate this type of tree, we
use two new criteria for splitting.
[Figure 5: Examples of Pure or Extreme Buckets. Left: around x = 200, we find a pure bucket with small σ̂². Right: around x = 600, we find an extreme bucket with large mean.]

• New Criterion 1: One-sided purity
With this criterion we are searching for one pure bucket as early as possible. To
this end, rather than using a weighted average of two variances, the criterion
is:

crit = min( σ̂²_L, σ̂²_R ).
By minimizing this criterion over all possible splits, we are finding a split whose
left or right side is the single bucket with smallest variance (purity). Note that
for the subsequent split, both the high-purity bucket AND the ignored bucket
are candidates for further splitting. Thus, ignored buckets get a chance to have
further high-purity buckets split off later on. Typically, a high-purity bucket
is unlikely to be split again. As a result, the trees grown with this criterion
tend to be purposely unbalanced.
• New Criterion 2: One-sided extremes (high or low value response)
This criterion is searching for a high mean on the response variable as early as possible. (A dual criterion would be searching for low means.) The criterion is
a more radical departure from conventional approaches because the notion of
"purity" has never been questioned so far. Means have never been thought of
as splitting criteria, although they are often of greater interest than variances.
From our point of view, minimizing a variance-based criterion is a circuitous
route when searching for interesting means. The mean criterion we propose
maximizes the larger of the means of the left and right bucket:
crit
Implicitly the bucket with the smaller mean is ignored. By maximizing this
criterion over all possible splits, we are finding a split whose left or right side
is the single bucket with the highest mean.
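The two data mining criteria translate into equally short functions. Note that the mean criterion is maximized rather than minimized, so we return its negative in order to reuse a minimizing split search such as the sketch in Section 1.2; the negation is our implementation choice. The dual criterion for low-value responses simply minimizes min(μ̂_L, μ̂_R) directly.

import numpy as np

def crit_one_sided_purity(y_left, y_right):
    """New Criterion 1: impurity of the purer bucket only."""
    return min(np.var(y_left), np.var(y_right))

def crit_one_sided_extreme(y_left, y_right):
    """New Criterion 2: (negated) larger bucket mean, so that minimizing finds high-mean buckets."""
    return -max(np.mean(y_left), np.mean(y_right))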
At this point, a natural criticism of the above criteria may arise: their potentially
excessive greediness, that is, trees built with these criteria may miss the most
important splitting variables and they may instead capture spurious groups on the
periphery of the variable ranges, thus exacerbating the so-called "end cut problem."
This criticism has a grain of truth, but it paints the situation bleaker than it is.
It is not true that important splitting variables are generally missed; it depends
on the data at hand whether the conventional or the newly proposed criteria are
more successful. The criterion that searches for one-sided extremes, for example,
can be successful because extreme response values are often found on the periphery
of the variable ranges, if for no other reason than monotonicity of many response
surfaces. The "end cut problem," that is the preference of cuts near the extremes
of the variable ranges, is shared by most tree-building strategies [see Breiman et al.,
1984, p.313], so this is nothing peculiar to the new criteria.
A second criticism is the lack of balance in the trees constructed with the above
criteria. Superficially, one might expect balanced trees to be more interpretable than
unbalanced ones, defeating the rationale for the criteria. This concern is unfounded,
though, as we will show with real data examples below. In those cases where the
above criteria succeed, they produce more interpretable trees due to the simple
rationale that underlies them.
Finally, recall that the present goal is not to devise methods that produce superior
fit, but methods that enhance interpretability. Therefore, we are not concerned
with the usual problems of tuning tree-depth for fit with stopping and pruning
strategies. In the course of interpretation of a tree, a user simply ignores lower level
nodes when they are no longer interpretable.
4 Classification Trees
4.1 Conventional Criteria for Classification Trees
We consider here only the two-class situation. The class labels are denoted 0 and 1. Given a split into left and right buckets, let $p^0_L, p^1_L$ and $p^0_R, p^1_R$ be the probabilities of 0 and 1 on the left and on the right, respectively. Here are some of the conventional loss or impurity notions for buckets:
• Misclassification rate in the left bucket: $\min(p^0_L, p^1_L)$ (Breiman et al., 1984, p.98). Implicitly one assigns a class to the bucket by majority vote and estimates the misclassification rate by the proportion of the other class.
• Entropy (information): $-p^0_L \log p^0_L - p^1_L \log p^1_L$. If the probabilities are estimated by relative frequencies (denoted the same by abuse of notation), the entropy can also be interpreted as $\min[-\log(\mathrm{likelihood})/N_L]$ of the multinomial model in the left bucket, where $N_L$ is the size of the left bucket (Breiman et al., 1984, p.103).
• The Gini index: $p^0_L\, p^1_L$. In terms of estimates, this is essentially the Mean Square Error of fitting a mean to $y_n \in \{0,1\}$, $n \in L$, as a short calculation shows: $p^0_L\, p^1_L = N^0_L N^1_L / N_L^2 = \frac{1}{N_L}\sum_{n \in L}(y_n - \bar y_L)^2$, where $N^0_L$ is the number of 0's and $N^1_L$ is the number of 1's in the left bucket.
Figure 6: Impurity functions for buckets, from left to right: misclassification rate, entropy, and the Gini index (as a function of $p^0_L$, for example).
The above impurity criteria for buckets are conventionally blended into impurity
criteria for splits by forming weighted sums from the left and right buckets, thus
compromising between left and right. Denoting with $p_L$ and $p_R$ the marginal probabilities of the left and the right bucket, the compromise takes the following form for misclassification rate:
$p_L \min(p^0_L, p^1_L) + p_R \min(p^0_R, p^1_R),$
for entropy:
$p_L\,(-p^0_L \log p^0_L - p^1_L \log p^1_L) + p_R\,(-p^0_R \log p^0_R - p^1_R \log p^1_R),$
and for the Gini index:
$p_L\, p^0_L p^1_L + p_R\, p^0_R p^1_R.$
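As an illustration of how these blended impurities are computed in practice, the following Python sketch (our own; helper names are hypothetical) evaluates the misclassification rate, entropy, and Gini index of a candidate split from the 0/1 labels in its two buckets:

import numpy as np

def blended_impurities(y_left, y_right):
    """Conventional split impurities for a two-class problem (labels 0/1),
    blended as size-weighted averages of the left and right buckets."""
    def bucket_probs(y):
        p1 = np.asarray(y).mean()        # relative frequency of class 1
        return np.array([1.0 - p1, p1])
    nL, nR = len(y_left), len(y_right)
    pL, pR = nL / (nL + nR), nR / (nL + nR)   # marginal bucket probabilities
    qL, qR = bucket_probs(y_left), bucket_probs(y_right)
    def entropy(p):
        p = p[p > 0]
        return -(p * np.log(p)).sum()
    misclass = pL * qL.min() + pR * qR.min()
    entro = pL * entropy(qL) + pR * entropy(qR)
    gini = pL * np.prod(qL) + pR * np.prod(qR)
    return misclass, entro, gini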
The impurity functions for buckets are depicted in Figure 6. They have several
desirable properties: In a given node, they take the maximum value when the classes
are in equal proportions, and the minimum value when all bucket members are in
the same class. Also, these functions are symmetric, and hence the two classes are
treated equally (assuming equal priors).
Among the three impurity functions, misclassification rate is the most problematic
because it may lead to many indistinguishable splits, some of which may be
intuitively more preferable than others. The problem is illustrated, for example, in
Breiman et al. (1984, p.96). The general reason is linearity of misclassification rate
on either half of the unit interval (see the left plot of Figure 6). Linearity implies that data can be shifted (within limits) between left and right buckets without changing the combined misclassification rate of a split. The following consideration gives an idea of the large number of potential splits that are equivalent under misclassification rate: consider the misclassification count $N_L \min(p^0_L, p^1_L) + N_R \min(p^0_R, p^1_R)$ (instead of rate) on training data, and estimate the probabilities by relative frequencies.
Denote by $N^l_L$ and $N^l_R$ the counts of the minority class - the "losers" - on the left and right, and correspondingly by $N^w_L$ and $N^w_R$ the counts of the majority class - the "winners" - on the left and right:
$N^l_L \le N^w_L, \qquad N^l_R \le N^w_R, \qquad (1)$
where $l \ne w \in \{0,1\}$. Note that
$M = N^l_L + N^l_R \qquad (2)$
is the misclassification count. Because $N_L = N^l_L + N^w_L$ and $N_R = N^l_R + N^w_R$ are the counts of the left and right bucket, respectively, we have
$N = N^l_L + N^w_L + N^l_R + N^w_R \qquad (3)$
for the count of the parent bucket.
For a fixed value of the misclassification count M, there exist many combinations $(N^l_L, N^w_L, N^l_R, N^w_R)$ satisfying the above conditions (1), (2) and (3). In general, there exist $(M+1)(N-2M+1)$ such combinations for fixed N and M (assuming $2M \le N$); for $N = 100$ and $M = 20$ this amounts to 1281 possibilities. For fixed N, the maximum number of combinations is attained for a misclassification count near $M = N/4$. Figure 7 shows the number of combinations for $N = 100$ as a function of M.
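The count can also be checked by brute force. The following sketch (our own code) enumerates all combinations satisfying conditions (1)-(3) as reconstructed above; note that the exact conditions, and hence the exact count, follow our reconstruction.

from itertools import product

def count_equivalent_splits(N, M):
    """Count combinations (NlL, NwL, NlR, NwR) of per-bucket loser/winner counts
    such that losers never outnumber winners in a bucket, the losers sum to the
    misclassification count M, and all counts sum to N."""
    count = 0
    for NlL, NwL in product(range(N + 1), repeat=2):
        NlR = M - NlL
        NwR = (N - M) - NwL
        if NlR < 0 or NwR < 0:
            continue
        if NlL <= NwL and NlR <= NwR:
            count += 1
    return count

# Under these reconstructed conditions, count_equivalent_splits(100, 20) gives 1281,
# matching the number quoted in the text.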
Figure 7: Number of combinations of misclassification counts for fixed N = 100 (parent bucket size = 100), as a function of M.
These considerations suggest that misclassification rate, when used as a splitting
criterion, can suffer from considerable non-uniqueness of minimizing splits. When
considering two examples of equivalent splits, however, one observes quickly that
one of the two splits is usually preferable for reasons other than misclassification
rate. For example, of two equivalent splits with the same misclassification count, one that spreads the losers over both buckets and one that concentrates all losers in the left bucket, the latter is clearly preferable because it provides one large bucket (R) that is completely pure.
The root of the problem is the piecewise linearity of the misclassification rate, such as $\min(p^0_R, p^1_R)$ in the right bucket. We therefore need an impurity function
that accelerates toward zero, that is, decreases faster than linearly, as the proportion
of the dominant class in a bucket moves closer to 1. This is the rationale for using
concave impurity functions such as entropy and the Gini index. CART (Breiman
et al., 1984 and Salford Systems, 1995) uses the Gini index, while C4.5 (Quinlan,
1993) and S-Plus (Venables and Ripley, 1994) use entropy.
In the two-class case, there does not seem to exist a clear difference in performance
between entropy and the Gini index. In the multiclass case, however, recent
work by Breiman (1996) has brought to light a difference between the Gini and entropy
criterion. The Gini index tends to assign the majority class to one pure bucket
(if it exists) and the rest of the classes to the other bucket, that is, it tends to form
unbalanced, well distinguishable buckets. Entropy, on the other hand, tries to balance
the size of the two buckets. According to Breiman, this results in deficiencies
of entropy that are not shared by the Gini index.
4.2 Data Mining Criteria for Classification Trees
As mentioned in Section 3.2, our approach tries to identify pure or extreme buckets
as quickly as possible. While the criteria for regression trees are based on variances
or means, the ones for classification trees are only based on the probabilities of
class 0 or 1. Our goal can therefore be restated as searching for buckets where one
of the class probabilities ($p^0$ or $p^1$) is close to 1, either in the left or the right bucket, but not necessarily both.
Another approach is to select one of the two classes, 1, say, and look for buckets
that are very purely class 1. For example, in a medical context, one might want to
quickly find buckets that show high rates of mortality, or high rates of treatment
effect.
As before in Section 3.2, we use two criteria for splitting, corresponding to the
two approaches just described:
• New Criterion 1: One-sided purity
This criterion searches for a pure bucket, regardless of its class, as early as possible:
$\mathrm{crit} = \min(p^0_L\, p^1_L,\; p^0_R\, p^1_R),$
which is equivalent to
$\mathrm{crit} = \min\big(\min(p^0_L, p^1_L),\; \min(p^0_R, p^1_R)\big) \qquad (a)$
because $\min(p^0_L, p^1_L)$ and $p^0_L\, p^1_L$ (for example) are monotone transformations of each other. The criteria are also equivalent to maximizing
$\mathrm{crit} = \max(p^0_L, p^1_L, p^0_R, p^1_R) \qquad (b)$
because if one of $p^0_L, p^1_L$ is maximum, the other is minimum. This latter criterion expresses the idea of pure buckets more directly.
• New Criterion 2: One-sided extremes
Having chosen class 1 as the class of interest, the criterion that searches for a pure class 1 bucket among L and R is
$\mathrm{crit} = \max(p^1_L,\; p^1_R),$
which is identical to
$\mathrm{crit} = \min(p^0_L,\; p^0_R)$
because of $p^0_L = 1 - p^1_L$, for example.
Note that these criteria are direct analogs of the new data mining criteria for regression, as shown in the following table:

                      Regression Trees                        Classification Trees
One-sided purity      $\min(\bar\sigma^2_L, \bar\sigma^2_R)$  $\min(p^0_L p^1_L,\; p^0_R p^1_R)$
One-sided extreme     $\max(\bar\mu_L, \bar\mu_R)$            $\max(p^1_L,\; p^1_R)$
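In code, the two classification criteria are as simple as their regression counterparts; a minimal sketch (our own naming) for a two-class split is:

def one_sided_classification_criteria(y_left, y_right, target_class=1):
    """One-sided purity (to be minimized) and one-sided extremes for the chosen
    target class (to be maximized), for 0/1 labels in the two buckets."""
    def p(y, c):
        return sum(1 for v in y if v == c) / len(y)
    purity = min(p(y_left, 0) * p(y_left, 1), p(y_right, 0) * p(y_right, 1))
    extreme = max(p(y_left, target_class), p(y_right, target_class))
    return purity, extreme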
5 The End-Cut Problem
High variability in small buckets can lead to chance capitalization (Breiman et al., 1984, p.313 ff), that is, optimization of splits can take advantage of randomly occurring
purity of small buckets. As an implication, splitting methods can lead to extremely
unbalanced splits. This problem is even greater for our data mining criteria because
they look at buckets individually without combining them in a size-weighted average
as in CART. In the usual CART criterion, small buckets with higher variability are
downweighted according to their size.
As an illustration of the end-cut problem, we simulated a simple case of unstructured
regression data, which is the same case that was theoretically examined in
Breiman et al. (1984). Theoretically, all cuts have equal merit, but empirically the
end-cut preference for finite samples emerges for all known figures of merit. Simulation
is necessary because the theoretical considerations for the CART criterion do not carry over to the new data mining criteria.
We thus generated a set of 200 observations from a Gaussian distribution with
zero mean and unit variance. We computed, for each split with buckets of size $\ge 5$, the CART criterion ($RSS_L + RSS_R$) and the new one-sided purity criterion $\min(\bar\sigma_L, \bar\sigma_R)$. We then computed the optimal split locations for both
criteria. This scheme was repeated 10,000 times, and optimality of each split location
was tallied in a histogram. The first histogram in Figure 8 shows the frequency with
which each split location was minimum under the CART criterion. The second
histogram shows the same for the one-sided purity criterion.
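This simulation is straightforward to reproduce in outline. The sketch below (our own code, with a reduced number of repetitions) tallies the optimal split locations under both criteria for pure-noise data; histograms of the returned locations correspond to the panels of Figure 8.

import numpy as np

def end_cut_simulation(n=200, min_bucket=5, reps=1000, seed=0):
    """For unstructured data (N(0,1) responses, split on index order), record
    where the CART criterion (RSS_L + RSS_R) and the one-sided purity
    criterion min(sd_L, sd_R) place their optimal split."""
    rng = np.random.default_rng(seed)
    splits = np.arange(min_bucket, n - min_bucket + 1)
    cart_best, purity_best = [], []
    for _ in range(reps):
        y = rng.standard_normal(n)
        cart = [np.var(y[:i]) * i + np.var(y[i:]) * (n - i) for i in splits]
        purity = [min(np.std(y[:i]), np.std(y[i:])) for i in splits]
        cart_best.append(int(splits[int(np.argmin(cart))]))
        purity_best.append(int(splits[int(np.argmin(purity))]))
    return np.array(cart_best), np.array(purity_best)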
The two figures show the extent to which the criteria prefer cut locations closer
to either extreme. Clearly this effect is more pronounced for the one-sided purity
criterion. While both criteria require measures to counteract this effect, the one-sided purity criterion requires them more.
Figure 8: Illustration of the end-cut problem: split location of the minimum of total RSS (CART criterion), and split location of the minimum of the one-sided purity criterion (data mining criterion 1).
An approach to solving the end-cut problem is to penalize small buckets by augmenting the criterion with penalty terms so as to make small buckets less viable. Penalization is best understood if the criterion can be interpreted as the negative log-likelihood of a suitable model. In this case, the literature offers a large number of additive penalty terms such as $C_P$ (Mallows' $C_P$ statistic), AIC (Akaike's information criterion), BIC (Schwarz' Bayesian information criterion), and MDL (the minimum description length criterion), among others. In the present paper, we work with the AIC and the BIC criteria, if for no other reason than their popularity. The AIC penalty adds the (effective) number of estimated parameters to the negative log-likelihood, whereas the BIC adds the number of estimated parameters multiplied by $\log(N)/2$.
Applying these penalties to an individual bucket, we obtain for the constant
Gaussian and multinomial models that underlie regression and classification trees:
Model                         -log Likelihood                 AIC                                    BIC
Regression: Gaussian          $\frac{N}{2}\log\bar\sigma^2$   $\frac{N}{2}\log\bar\sigma^2 + 2$      $\frac{N}{2}\log\bar\sigma^2 + \log N$
Classification: Multinomial   $N\cdot\mathrm{Entropy}$        $N\cdot\mathrm{Entropy} + 1$           $N\cdot\mathrm{Entropy} + \frac{1}{2}\log N$
At this stage, we have to raise an important point about our intended use of
the AIC and BIC penalties: Conventionally, these penalties are used in a model
selection context, where one applies multiple models to a fixed dataset. In our
unconventional situation, however, we apply one fixed model to multiple datasets,
namely, variable-sized (NR , NL ) buckets of data that are part of a larger dataset.
The ensuing problem is that -log Likelihood is not comparable across different
bucket sizes: -log Likelihood is an unbiased estimate of bucket size times the divergence
of the model with respect to the actual distribution. Therefore, comparability
across bucket sizes is gained if one uses the average -log Likelihood, which is an
unbiased estimate of the divergence across bucket sizes.
$\mathrm{E}\Big[\tfrac{1}{N}\sum_{n=1}^{N} -\log P(x_n)\Big] = \int \big(-\log P(x)\big)\, dQ(x),$
where Q denotes the actual distribution of the data.
[In the rest of this section, N denotes a generic sample size or bucket size in order
to avoid the subscripts of NL and NR .]
As a consequence, a penalized average -log Likelihood has a penalty term that
is also divided by the bucket size:
Model                         ave -log Likelihood              AIC                                            BIC
Regression: Gaussian          $\frac{1}{2}\log\bar\sigma^2$    $\frac{1}{2}\log\bar\sigma^2 + \frac{2}{N}$    $\frac{1}{2}\log\bar\sigma^2 + \frac{\log N}{N}$
Classification: Multinomial   $\mathrm{Entropy}$               $\mathrm{Entropy} + \frac{1}{N}$               $\mathrm{Entropy} + \frac{\log N}{2N}$

The penalty terms ($\frac{2}{N}$, $\frac{\log N}{N}$ and $\frac{\log N}{2N}$) are monotone decreasing for $N \ge 3$ and converge to zero as $N \to \infty$. These behaviors are obvious requirements for additive penalties.
The penalized value associated with the BIC is bigger than that associated with the AIC except for small buckets, as illustrated in Figure 9.
Unfortunately, even though this approach produces intuitively pleasing penalties,
their performance in our experiments has been somewhat disappointing. We expect,
however, that the approach will perform better if some recent results by Jianming
Ye (1998) are taken into account. In light of Ye's insights, it is plausible that
in the penalties the number of parameters of the model should be replaced with
Ye's ``generalized degrees of freedom'' (gdf) which take into consideration the fact
that extensive searching implicitly consumes degrees of freedom. Gdfs tend to be
considerably higher than the conventional dfs.
In the examples of the following sections, we counteract the end-cut problem by
imposing a minimum bucket size of roughly 5% of the overall sample size.
Figure 9: Penalized value as a function of bucket size N for the AIC penalty and the BIC penalty; the BIC-penalized value is the larger of the two except at small N.

6 An Example of Regression Trees: Boston Housing Data

Following Breiman et al. (1984), we demonstrate the application of the new data mining criteria on the Boston Housing data. These well-known data were originally
created by Harrison and Rubinfeld (1978), and they were also exploited by Belsley,
Kuh and Welsch (1980) and Quinlan (1993). Harrison and Rubinfeld's main interest
in the data was to investigate how air pollution concentration (NOX) affects the
value of single family homes in the suburbs of Boston. Although NOX turned out
to be a minor factor if any, the data have been frequently used to demonstrate new
regression methods.
The Boston Housing data are available from the UC Irvine Machine Learning
Repository (http://www.ics.uci.edu/~mlearn/MLRepository.html).
The data contain the median housing values as a response, and 13 predictor
variables for 506 census tracts in the Boston area. The predictors are displayed in
Table
1.
6.1 Comparison of Criteria for Regression Trees
We constructed several trees based on both CART and the new data mining cri-
teria. To facilitate comparisons, all trees were generated with an equal number of terminal nodes. A minimum bucket size of 23 was chosen, which is about 5%
of the overall sample size (506). No pruning was applied because our interest was
in interpretation as opposed to prediction. The results are displayed in Figures 10 through 15 and summarized in Table 2. In the figures, the mean response (m) and the size (sz) are given for each node.
Variable description
CRIM crime rate
ZN proportion of residential land zoned for lots over 25,000 sq. ft
INDUS proportion of non-retail business acres
CHAS Charles River dummy variable (=1 if tract bounds river; 0 otherwise)
NOX nitric oxides concentration, pphm
RM average number of rooms per dwelling
AGE proportion of owner-occupied units built prior to 1940
DIS weighted distances to five Boston employment centers
RAD index of accessibility to radial highways
TAX full-value property tax rate per $10,000
PTRATIO pupil teacher ratio
B 1000(Bk - 0.63)^2, where Bk is the proportion of blacks by town
LSTAT percent lower status population
response median value of owner occupied homes in $1000's
Table
1: Predictor Variables for the Boston Housing Data.
After perusing the six trees and their summaries in sequence, it is worthwhile
to return to the CART tree of Figure 10 at the beginning, and apply the lessons of
the extreme means trees of the last two Figures, 14 and 15. We recognize from the
CART tree that RM is the dominant variable, in particular for RM > 7, indicating a monotone dependence on RM for large values of RM. For RM < 7, LSTAT takes over. As we learned from the high means tree, for RM < 7 there exists a monotone decreasing dependence on LSTAT. The CART tree tries to tell the same story, but
because of its prejudice in favor of balanced splits, it is incapable of successively
layering the data according to ascending values of LSTAT: The split on LSTAT at
the second level divides into buckets of size 51% and 34%, of which only the left
bucket further divides on LSTAT. By comparison, the high means criterion creates
at level 3 a split on LSTAT with buckets of sizes 9% and 77%, clearly indicating
that the left bucket is only the first of a half dozen "tree rings" in ascending order
of LSTAT and descending order of housing price.
In summary, it appears to us that the least interpretable trees are the first two,
corresponding to the CART criterion and the separate-variances criterion, although, at 0.8, their goodness of fit is the highest among the six trees. Greater interpretability is gained with the one-sided purity criterion, partly due to the fact that it successively peels off many small buckets, resulting in a less balanced yet more telling tree.
Figure 10: The Boston Housing Data, Tree 1, CART Criterion.
Figure 11: The Boston Housing Data, Tree 2, Separate Variance Criterion.
Figure 12: The Boston Housing Data, Tree 3, One-Sided Purity Criterion.
Figure 13: The Boston Housing Data, Tree 4, Penalized One-Sided Purity Criterion.
Figure 14: The Boston Housing Data, Tree 5, High Means Criterion.
Figure 15: The Boston Housing Data, Tree 6, Low Means Criterion.
Tree 1 $RSS_L + RSS_R$ CART: pooled variance model
Somewhat balanced tree of depth 6. The major variables are
RM (3x) and LSTAT (6x). Minor variables appearing once each
are NOX, CRIM, B, PTRATIO, DIS, INDUS, with splits mostly
in the expected directions. Means in the terminal nodes vary from
45.10 to 10.20.
Tree 2 $N_L \log\bar\sigma_L + N_R \log\bar\sigma_R$ separate variance model
More balanced tree than the CART tree, of depth 5. The major variables are again RM (4x) and LSTAT (3x). Minor variables are ...; ... doesn't appear. The splits are in the expected directions. Means in the terminal nodes vary from 45.10 to 9.94.
Tree 3 $\min(\bar\sigma^2_L, \bar\sigma^2_R)$ Data Mining criterion 1: raw one-sided purity
Unbalanced tree of depth 9. PTRATIO (1x) appears at the top
but cannot be judged to be of top importance because it splits off a
small bucket of only size 9%. Apparently a cluster of school districts
has significantly worse pupil-to-teacher ratios than the ma-
jority. Crime-infested neighborhoods are peeled off next in a small
bucket of size 5% (CRIM, 1x). NOX surprisingly makes 3 appearances, which would have made Harrison and Rubinfeld happy. In
the third split from the top, NOX breaks off 12% of highly polluted
areas with a low bucket mean of 16, as compared to 25 for the rest.
The most powerful variable is LSTAT (3x), which next creates a powerful split into buckets of size 41% and 34%, with means of ... and 19, respectively. Noticeable is the ambiguous role of DIS (3x),
which correlates negatively with housing values for low values of LSTAT (< 10.15, "high-status"), but positively for high values of LSTAT ("low-status") and high values of NOX (> 0.52, "polluted"). RM (2x) plays a role only in "high-status" neighborhoods. "Crime-infested" neighborhoods are peeled off early on as singular areas with
extremely low housing values (CRIM, 1x). The splits on AGE (1x)
and ZN (1x) are irrelevant due to low mean-differences.
Tree 4 $\min(\frac{1}{2}\log\bar\sigma^2_L + \frac{2}{N_L},\; \frac{1}{2}\log\bar\sigma^2_R + \frac{2}{N_R})$ Data Mining criterion 2: penalized one-sided purity (AIC)
Qualitatively, this tree is surprisingly similar to the previous one,
with some differences in the lower levels of the tree. Penalization
does not seem to affect the splits till further down in the tree.
Tree 5 $\max(\bar\mu_L, \bar\mu_R)$ Data Mining criterion 3: one-sided extremes, high mean
The search of high response values creates a very unbalanced tree.
There are no single powerful splits, only peeling splits with small
buckets on one side. The repeated appearance of just two variables,
RM (2x, levels 1 and 3) and LSTAT (8x), however, tells a powerful
story: For highest housing prices (bucket mean 45), average size of
homes as measured by RM (> 7.59) is the only variable that matters. For RM < 7.08, a persistent monotone decreasing dependence
on LSTAT takes over, down to a median housing value of about 17.
This simple interplay between RM and LSTAT lends striking interpretability
to the tree and tells a simple but convincing story. At
the bottom, crime (CRIM, 2x) and pollution (NOX, 1x) show some
remaining smaller effects in the expected directions. Smaller effects
of DIS and TAX are also seen half-way down the tree.
Tree 6 $\min(\bar\mu_L, \bar\mu_R)$ Data Mining criterion 4: one-sided extremes, low mean
This tree tells a similar story as the previous one, but greater precision
is achieved for low housing values, because this is where the
criterion looks first. Again the tree is very unbalanced. The first
peeling split is on CRIM (1x) which sets aside a 5% bucket of crime-
infested neighborhoods with lowest housing values around 10. The
second lowest mean bucket consists of 5% census tracts with low B (< 100), corresponding to 63% ± 32% African-American population, due to the quadratic transformation. Thereafter, monotone
decreasing dependence on LSTAT takes over in the form of six peeling
splits, followed by monotone increasing dependence on RM in
the form of five peeling splits. These two successive dose response
effects are essentially the same as in the previous tree, which found
them in reverse order due to peeling from high to low housing values.
Table 2: The results of the regression trees for the Boston Housing Data.
The greatest interpretability is achieved for the one-sided extreme means criteria, partly due to their extreme imbalance.
Interpretability, as we found it in the above examples, has two sides to it:
• Buckets with extremely high or low means near the top of the trees: Such buckets
are desirable for interpretation because they describe extreme response behavior
in terms of simple clauses. Finding such buckets is exactly the purpose
of the extreme means criteria.
In the Boston Housing Data, the high-mean criterion, for example, immediately
finds a bucket of well-off areas with very large homes: RM > 7.59. The low-mean criterion, by comparison, immediately finds a high-crime bucket: CRIM > 15.79. While CART also finds the areas with large homes at the
second level, it does not find the high crime bucket at all.
• Dose response effects or monotone dependencies: The iterative peeling behavior
of the data mining criteria allows detection of gradual increases or decreases
in the response as a function of individual predictors. For iterative peeling on
a predictor to become apparent, it is essential that the peeling layers form a
series of small dangling terminal buckets and hence form highly unbalanced
trees. CART, by comparison, is handicapped in this regard because it favors balanced splits more than the data mining criteria do.
The last point is ironic because it implies that the greater end-cut problem of the
data mining criteria compared to CART works in our favor. Conversely, CART's
end-cut problem is not sufficiently strong to allow it to clearly detect monotone
dependencies with highly unbalanced trees.
Once monotone dependencies are detected, it is plausible to switch from tree
modeling to additive or even linear modeling that include suitable interaction terms.
Interaction terms may be necessary to localize monotone dependence. For example, the above tree generated with the low means criterion might suggest a linear model in CRIM, B, LSTAT, and RM, with interaction terms that localize the monotone dependencies.
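As a purely illustrative sketch (the paper's own model specification is not reproduced here; the coefficients and the thresholds 7 and 10 below are placeholders), such a model could take a form like:

\[
\widehat{\mathrm{MEDV}} = \beta_0 + \beta_1\,\mathrm{CRIM} + \beta_2\,\mathrm{B}
  + \beta_3\,\mathrm{LSTAT}\cdot 1\{\mathrm{RM} < 7\}
  + \beta_4\,\mathrm{RM}\cdot 1\{\mathrm{LSTAT} < 10\}.
\]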
6.2 Graphical Diagnostics for Regression Trees
Although tree-based methods are in some sense more flexible than many conventional
parametric methods, it is still necessary to guard against artifacts. The best
techniques for diagnosing artifacts and misfit are graphical. In contrast to linear
regression, basic diagnostics for regression trees can be straightforward because
they do not require additional computations. In a first cut, it may be sufficient to
graphically render the effect of each split in turn. This may be achieved by plotting
the response variable against the predictor of the split for the points in the parent
bucket. One should then graphically differentiate the points in the two child buckets
by plotting them with different glyphs or colors.
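Such a plot is easy to produce; the following sketch (our own, using matplotlib; names are hypothetical and x, y are numpy arrays) renders one split by plotting the response against the splitting predictor and marking the two child buckets with different glyphs:

import matplotlib.pyplot as plt

def plot_split_diagnostic(x, y, threshold, xlabel="predictor", ylabel="response"):
    """Plot the response against the splitting predictor for a parent bucket,
    differentiating the two child buckets by glyph."""
    left = x < threshold
    fig, ax = plt.subplots()
    ax.plot(x[left], y[left], "o", fillstyle="none", label="left child")
    ax.plot(x[~left], y[~left], "x", label="right child")
    ax.axvline(threshold, linestyle="--", linewidth=1)   # mark the split point
    ax.set_xlabel(xlabel)
    ax.set_ylabel(ylabel)
    ax.legend()
    return ax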
We created a series of such diagnostic plots for the tree generated with the low
mean criterion. Figure 16 shows how buckets with ascending housing values are
being split off:
• high crime areas,
• strongly African-American neighborhoods,
• a segment with decreasing fraction of lower status people,
• communities with unfavorable pupil-teacher ratios in their schools,
• another segment with decreasing fraction of lower status people, and finally
• the segment in which increasing size of the homes matters.
These plots confirm that most of these splits are plausible: High crime is the factor
that depresses housing values the most; there does exist a cluster of neighborhoods
whose pupil-teacher ratio is clearly worse and well separated from the majority.
Finally, the monotone dependencies are clearly visible for decreasing percentage of
lower status people and increasing number of rooms. Also visible are some "outliers,"
namely, the desirable areas of Beacon Hill and Back Bay near Boston downtown,
with top housing values yet limited size of homes.
Figure 16: Graphical view of the major splits applied to the Boston Housing data, using the low means criterion.
7 An Example of Classification Trees: Pima Indians
Diabetes
We demonstrate the application of the new data mining criteria for classification
trees with the Pima Indians Diabetes data (Pima data, for short). These data
were originally owned by the "National Institute of Diabetes and Digestive and
Kidney Diseases," but they are now available from the UC Irvine Machine Learning
Repository (http://www.ics.uci.edu/~mlearn/MLRepository.html).
The class labels of the Pima data are 1 for diabetes and 0 otherwise. There
are 8 predictor variables for 768 patients, all females, at least 21 years of age, and
of Pima Indian heritage near Phoenix, AZ. Among the 768 patients, 268 tested
positive for diabetes (class 1). For details about the data, see the documentation at
the UC Irvine Repository. The predictor variables and their definitions are shown
in
Table
3.
Variable description
PRGN number of times pregnant
PLASMA plasma glucose concentration at two hours in an oral
glucose tolerance test
BP diastolic blood pressure (mm Hg)
THICK Triceps skin fold thickness (mm)
INSULIN two hour serum insulin (µU/ml)
BODY body mass index (weight in kg / (height in m)^2)
PEDIGREE diabetes pedigree function
AGE age (years)
RESPONSE class variable (=1 if diabetes; 0 otherwise)
Table
3: Predictor Variables for the Pima Indians Diabetes Data.
Using the Pima data, we constructed four trees based on both CART and our
new data mining criteria. A minimum bucket size of 35 was imposed, which is
about 5% of the overall sample size (768), as for the regression trees of the Boston
Housing data. Since we are concerned with interpretability, we again did not use
pruning. The resulting trees are shown in Figures 20 through 23. For each node, the
proportion (p) of each class and the size (sz) are given. Tables 4 and 5 summarize the trees.
From the trees and the summary, it becomes clear that PLASMA is the most
powerful predictor, followed by BODY. In particular the third tree is almost completely
dominated by these two variables. Their interleaved appearance down this
tree suggests a combined monotone dependence which should be studied more care-
fully.

Figure 17: The distribution of the two classes of the Pima Diabetes Data, BODY against PLASMA (class 0: no diabetes; class 1: diabetes).
Figure
17 shows the distribution of the two classes in the PLASMA-BODY
plane. We switched to another fitting method that is more natural for describing
monotone dependencies, namely, nearest-neighbor fitting. For every point we estimated
the conditional class 1 probability $p^1$(PLASMA, BODY) by the fraction of
class 1 samples among the 20 nearest neighbors in terms of Euclidean distance in
PLASMA-BODY coordinates after standardizing both variables to unit variance.
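The nearest-neighbor estimate just described can be sketched as follows (our own code; whether a point is counted among its own neighbors is an implementation detail not specified in the text):

import numpy as np

def knn_class1_probability(plasma, body, labels, k=20):
    """Estimate p1(PLASMA, BODY) at every data point as the fraction of class-1
    samples among the k nearest neighbors, using Euclidean distance after
    standardizing both coordinates to unit variance."""
    plasma = np.asarray(plasma, dtype=float)
    body = np.asarray(body, dtype=float)
    labels = np.asarray(labels)
    X = np.column_stack([plasma / plasma.std(), body / body.std()])
    p1 = np.empty(len(X))
    for i, xi in enumerate(X):
        d = np.sqrt(((X - xi) ** 2).sum(axis=1))
        nearest = np.argsort(d)[:k]          # here the point itself is included
        p1[i] = labels[nearest].mean()
    return p1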
Figure 18 shows a sequence of plots of the data in the (PLASMA, BODY) plane and a rendition of $p^1$(PLASMA, BODY) in terms of highlighted slices
for an increasing sequence of six values of c. The plots make it clear that the response
behavior is quite complex: The first plot shows a slice that is best described as a
cut on a low value of BODY. The following four slices veer 90° clockwise, and the last slice is best described as a cut on a high value of PLASMA. There is one more feature worthy of note, though: in the third plot we notice a hole in the center of the
highlighted slice. This hole is being filled in by a blob of data in the fourth plot.
From this feature we infer the existence of a mild hump in the $p^1$-surface in the center of the data. In summary, the function $p^1$(PLASMA, BODY) has the shape of a clockwise ascending spiral ruled surface with a hump in the middle.
Obviously, trees are quite suboptimal for fitting a response with these charac-
teristics. Figure 19 shows how the third tree in Figure 22 tries to approximate this
surface with a step function on axes-aligned rectangular tiles.
Figure 18: The Pima Indian Diabetes Data, BODY against PLASMA. The highlights represent slices with near-constant $p^1$ (within $\epsilon$). The values of $p^1$ in the slices increase left to right and top to bottom. Open squares: no diabetes (class 0).
Figure
19: The Pima Diabetes Data, BODY against PLASMA. The plain is tiled
according to the buckets of the tree in Figure 22. Open squares: no diabetes (class 0),
Figure 20: The Pima Indian Diabetes Data, Tree 1, CART Criterion.
Figure 21: The Pima Indian Diabetes Data, Tree 2, One-Sided Purity Criterion.
Figure 22: The Pima Indian Diabetes Data, Tree 3, One-Sided Extremes Criterion (high class 0).
Figure 23: The Pima Indian Diabetes Data, Tree 4, One-Sided Extremes Criterion (high class 1).
Tree 1 CART criterion (Gini index)
Typical balanced tree of depth 6. The strongest variable is PLASMA
(5x), which creates a very successful split at the top. BODY (3x)
is the next important variable, but much less so, followed by PEDIGREE
(3x) and AGE (2x). The class ratios in the terminal buckets
range from 1.00:0.00 on the left to 0.16:0.84 on the right. All splits
are in the right direction. Overall, the tree is plausible but does not
have a simple interpretation.
Tree 2 $\min(p^0_L p^1_L,\; p^0_R p^1_R)$ Data Mining criterion 1: one-sided purity
Extremely unbalanced tree of depth 12. In spite of the depth of the
tree, its overall structure is simple: As the tree moves to the right,
layers high in class 0 (no diabetes) are being shaved off, and, con-
versely, as the tree steps left, a layers high in class 1 (diabetes)
is shaved off. The top of the tree is dominated by BODY and
while AGE and PEDIGREE play a role in the lower parts
of the tree, where the large rest bucket gets harder and harder to
classify.
Tree 3 $\max(p^0_L,\; p^0_R)$ Data Mining criterion 2: one-sided extremes, high class 0
Extremely unbalanced tree with simple structure: Because the criterion
searches for layers high in class 0 (no diabetes), the tree keeps
stepping to the right. In order to describe conditions under which
class 0 is prevalent, it appears that only BODY and PLASMA mat-
ter. The tree shows a sequence of interleaved splits on these two
variables, indicating a combined monotone dependence on them. See
below for an investigation of this behavior. For interpretability, this
tree is the most successful one.
Table
4: The Result of Classification Trees in the Pima Indians Diabetes Data.
Tree 4 $\max(p^1_L,\; p^1_R)$ Data Mining criterion 2: one-sided extremes, high class 1
Another extremely unbalanced tree with simple structure: The criterion
searches for layers high in class 1 (diabetes), which causes the
tree to step to the left. In order to describe conditions under which
class 1 is prevalent, PLASMA (6x) matters by far the most, followed
by PRGN (2x).
Table
5: The Result of Classification Trees in Pima Indians Diabetes Data.
Summary
The following are a few messages from our investigations and experiments:
• If trees are grown for interpretation, global measures of goodness of fit are not always desirable.
• Hyper-greedy data mining criteria can give very different insights.
• Highly unbalanced trees can reveal monotone dependence and dose-response effects. The end-cut problem turns into a virtue.
• To really understand data & algorithms, extensive visualization is necessary.
The following are a few topics that would merit further research:
• Assess when the end-cut "problem" hurts and when it helps.
• Extend the new 2-class criteria to the multiclass problem.
• Develop more sophisticated rules for stopping and pruning.
• Increase accuracy with a limited 2-step look-ahead procedure for the new criteria, adopting a suggestion of Breiman's (1996).
--R
"A New Look at the Statistical Model
Regression Diagnostics
"Technical Note: Some Properties of Splitting Criteria,"
Classification and Regression Trees
"Tree-Based Models,"
"A Statistical Perspective on Knowledge Discovery in Databases,"
"From Data Mining to Knowledge Discovery: An Overview,"
"Model Selection and the Principle of Minimum Description Length,"
"Hedonic Prices and the Demand for Clean Air,"
Clustering Algorithms
"Some Comments on C p ,"
UCI repository of machine learning data bases (http://www.
"Problems in the Analysis of Survey Data, and a Proposal,"
"Estimating the Dimension of a Model,"
CART: A Supplementary Module for SYS- TAT
"XGobi: Interactive Data Visualization in the X Window System,"
Modern Applied Statistics with S-Plus
"On Measuring and Correcting the Effects of Data Mining and Model Selection,"
--TR
C4.5: programs for machine learning
Technical note
A simple, fast, and effective rule learner
--CTR
Xiaoming Huo , Seoung Bum Kim , Kwok-Leung Tsui , Shuchun Wang, FBP: A Frontier-Based Tree-Pruning Algorithm, INFORMS Journal on Computing, v.18 n.4, p.494-505, January 2006
Soon Tee Teoh , Kwan-Liu Ma, PaintingClass: interactive construction, visualization and exploration of decision trees, Proceedings of the ninth ACM SIGKDD international conference on Knowledge discovery and data mining, August 24-27, 2003, Washington, D.C.
Owen Carmichael , Martial Hebert, Shape-Based Recognition of Wiry Objects, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.26 n.12, p.1537-1552, December 2004
Vasilis Aggelis , Panagiotis Anagnostou, e-banking prediction using data mining methods, Proceedings of the 4th WSEAS International Conference on Artificial Intelligence, Knowledge Engineering Data Bases, p.1-6, February 13-15, 2005, Salzburg, Austria | CART;boston housing data;splitting criteria;Pima Indians Diabetes data |
502523 | Probabilistic modeling of transaction data with applications to profiling, visualization, and prediction. | Transaction data is ubiquitous in data mining applications. Examples include market basket data in retail commerce, telephone call records in telecommunications, and Web logs of individual page-requests at Web sites. Profiling consists of using historical transaction data on individuals to construct a model of each individual's behavior. Simple profiling techniques such as histograms do not generalize well from sparse transaction data. In this paper we investigate the application of probabilistic mixture models to automatically generate profiles from large volumes of transaction data. In effect, the mixture model represents each individual's behavior as a linear combination of "basis transactions." We evaluate several variations of the model on a large retail transaction data set and show that the proposed model provides improved predictive power over simpler histogram-based techniques, as well as being relatively scalable, interpretable, and flexible. In addition we point to applications in outlier detection, customer ranking, interactive visualization, and so forth. The paper concludes by comparing and relating the proposed framework to other transaction-data modeling techniques such as association rules. | INTRODUCTION
Large transaction data sets are common in data mining ap-
plications. Typically these data sets involve records of transactions
by multiple individuals, where a transaction consists
of selecting or visiting among a set of items, e.g., a market
basket of items purchased or a list of which Web pages an
individual visited during a specic session.
Figure
1: Examples of transactions for several in-
dividuals. The rows correspond to market baskets
and the columns correspond to particular categories
of items. The darker the pixel, the more items were
purchased (white means zero). The solid horizontal
gray lines do not correspond to transaction data but
are introduced in the plot to indicate the boundaries
between transactions of different individuals.
We are interested in the problem of making inferences about
individual behavior given transaction data from a large set
of individuals over a period of time. In particular we focus
on techniques for automatically inferring profiles for individuals from the transaction data. In this paper a profile is considered to be a description or a model of an individual's transaction behavior, specifically, the likelihood that the individual will purchase (or visit) a particular item. Finding profiles is a fundamental problem of increasing interest in
data mining, across a range of transaction-related applica-
tions: retail cross-selling, Web personalization, forecasting,
and so forth.
Figure 1 shows a set of transactions for 5 different individuals
where rows correspond to market baskets (transactions)
and columns correspond to categories of items (store departments
in this example). The data set from which these
examples are taken involves over 200,000 transactions from
50,000 customers over a two-year period in a set of retail
stores. The heterogeneity of purchasing behavior is clear
even from this simple plot. Different customers purchase different numbers of items, in different departments, and in different amounts. Our goal in this paper is to investigate
parsimonious and accurate models for each individual's purchasing
behavior, i.e., individual profiles.
The paper begins by defining the general problem of profiling and the spectrum between sparse individual-specific information and broadly supported global patterns. We then define some general notation for the problem and introduce a mixture model framework for modeling transaction "behavior" at the individual level. We describe the model and illustrate with some examples. We then conduct a number of experiments on a real-world transaction data set and demonstrate that the proposed approach is feasible on real-world data and provides performance that is interpretable, accurate, and scalable. We briefly sketch how the model can
support several data mining tasks, such as exploratory data
analysis, ranking of customers, novelty detection, forecast-
ing, and so forth and conclude with a discussion of related
work.
2. THE PROFILING PROBLEM
Profiling is essentially the problem of converting transaction data (such as that in Figure 1) into a model for each
individual that can be used to predict their future behav-
ior. Clearly human behavior is highly unpredictable and,
thus, uncertainty abounds. Nonetheless, there are likely to
be regularities in the data that can be leveraged, and that,
on average, can lead to a systematic method for making
predictions.
A facet of many transaction data sets is the fact that the
data are quite sparse. For example, for many transaction
data sets (including the particular data set corresponding to Figure 1), a histogram of "number of transactions" peaks
at 1 (i.e., more customers have a single transaction than
any other number of transactions) and then decreases exponentially
quickly. Thus, for many customers there are very
few transactions on which to base a profile, while for others
there are large numbers of transactions.
Assume for example that we model each individual via a simple
multinomial probability model to indicate which items
are chosen, namely, a vector of probabilities $p_1, \ldots, p_C$, one for each of the C categories of items, with $\sum_{c=1}^{C} p_c = 1$. Further assume that this is combined with a histogram that models how many items in total are purchased per visit, and a "rate parameter" that governs how often the individual
conducts transactions per unit time on average (e.g.,
per month).
The crux of the profiling problem is to find a middle-ground between two profiling extremes. At one modeling extreme,
we could construct a general model, of the general form described
above, for the whole population and assume that
individuals are homogeneous enough in behavior that such
a single model is adequate. This is unlikely to be very
accurate given the heterogeneity of human behavior (e.g.,
see Figure 1). At the other extreme we could construct a
unique model for each individual based only on past transaction
data from that individual, e.g., the multinomial, the
histogram, and the rate parameter are all estimated from
raw counts for that individual. This will certainly provide
individual-specific profile models. However, it suffers from at least two significant problems. Firstly, for individuals
with very small amounts of data (such as those with only
one item in one transaction) the profiles will be extremely
sparse and contain very little information. Secondly, even
for individuals with significant amounts of data, the raw counts do not contain any notion of generalization: unless you have purchased a specific item in the past the profile
probability for that item is zero, i.e., the model predicts
that you will never purchase it.
These limitations are well-known and have motivated the
development of various techniques for borrowing strength,
i.e., making inferences about a specific individual by combining their individual data with what we know about the population as a whole. Collaborative filtering can be
thought of as a non-parametric nearest-neighbor technique
in this context. Association rule algorithms also try to address
the problem of identifying rules of generalization that
allow identification and prediction of novel items that have
not been seen in an individual's history before (e.g., Brijs et
al., 2000; Lawrence et al., 2001).
However, while these methods can support specific inferences
about other items that an individual is likely to pur-
chase, they do not provide an explicit model for an indi-
vidual's behavior. Thus, it is di-cult to combine these
approaches with other traditional forecasting and prediction
techniques, such as, for example, seasonal modeling
techniques that utilize information about annual seasonal
shopping patterns. Similarly, it is not clear how covariate
information (if available: such as an individual's income,
sex, educational background, etc.) could be integrated in
a systematic and coherent manner into an association rule
framework (for example).
In this paper we take a model-based approach to the profiling problem, allowing all information about an individual to be integrated within a single framework. Specifically we propose a flexible probabilistic mixture model for the transactions, fit this model in an efficient manner, and from the mixture model infer a probabilistic profile for each individual. We compare our approach with baseline models based
ual. We compare our approach with baseline models based
on raw or adjusted histogram techniques and illustrate how
the mixture model allows for more accurate generalization
from limited amounts of data per individual.
3. NOTATION
We have an observed data set $D = \{D_1, \ldots, D_N\}$, where $D_i$ is the observed data on the ith customer, $1 \le i \le N$. Each individual data set $D_i$ consists of one or more transactions for that customer, i.e., $D_i = \{y_{i1}, \ldots, y_{ij}, \ldots, y_{i n_i}\}$, where $y_{ij}$ is the jth transaction for customer i and $n_i$ is the total number of transactions observed for customer i.
An individual transaction y ij consists of a description of the
set of products that were purchased at the same time by the
same customer. For the purposes of the experiments described
below, each individual transaction $y_{ij}$ is represented as a vector of counts $y_{ij} = (n_{ij1}, \ldots, n_{ijC})$, where each component $n_{ijc}$ indicates how many items of type c are in transaction ij, $1 \le c \le C$. One can straightforwardly generalize
this representation to include (for example) the price
for each product, but here we focus just on the number of
items (the counts). Equally well the components n ijc could
indicate the time since the last page request from individual i when they request Web page c during session j. For the
purposes of this paper we will ignore any information about
the time or sequential order in which items are purchased
or in which pages are visited within a particular transaction
y, but it should be clear from the discussion below that it
is straightforward to generalize the approach if sequential
order or timing information is available.
We are assuming above that each transaction is "keyed" by a unique identifier for each individual. Examples of such keys can include driver's license numbers or credit card numbers for retail purchasing or login or cookie identifiers for Web visits. There are practical problems associated with such identification, such as data entry errors, missing identifiers, fraudulent or deliberately disguised IDs, multiple individuals using a single ID, ambiguity in identification on the Web, and so forth. Nonetheless, in an increasing number of transaction data applications reliable identification is possible, via methodologies such as frequent shopper cards, "opt-in" Web services, and so forth. In the rest of the paper we will assume that this identification problem is not an issue (i.e., either the identification process is inherently reliable, or there are relatively accurate techniques to discern identity). In fact in the real-world transaction data set that we use to illustrate our techniques, ambiguity in the identification process is not a problem.
4. MIXTURE BASIS MODELS FOR TRANSACTIONS
We propose a simple generative mixture model for the trans-
actions, namely that each transaction y ij is generated by one
of K components in a K-component mixture model. Thus,
the kth mixture component, $1 \le k \le K$, is a specific model for generating the counts and we can think of each of the K models as "basis functions" describing prototype trans-
actions. For example, one might have a mixture component
that acts as a prototype for suit-buying behavior, where the
expected counts for items such as suits, ties, shirts, etc.,
given this component, would be relatively higher than for the other items.

Figure 2: An example of 6 "basis" mixture components fit to the transaction data of Figure 1.
There are several modeling choices for the component transaction models for generating item counts. In this paper we
choose a particularly simple memoryless multinomial model
that operates as follows. Conditioned on n ij , the total number
of items in the basket, each of the individual items is
selected in a memoryless fashion by n ij draws from a multinomial
distribution on the C possible
items. Other models are possible: for example, one could
model the data as coming from C conditionally independent
random variables, each taking non-negative integer values.
This in general involves more parameters than the multinomial
model, and allows (for example) the modeling of
the purchase of exactly one suit and one pair of shoes in
a manner that the multinomial multiple trials model cannot
achieve. In this paper, however, we only investigate the
multinomial model since it is the simplest to begin with.
We can also model the distribution on the typical number
of items purchased within a given component, e.g., as a
Poisson model, which is entirely reasonable (a gift-buying
component model might have a much higher mean number
of items than a suit-buying model). These extensions are
straightforward and not discussed further in this paper.
Figure 2 shows an example of K = 6 components that have been learned from the transaction data of Figure 1 (more details on learning will be discussed below). Each window shows a particular set of multinomial
probabilities that models a specic type of transaction.
The components show a striking bimodal pattern in that the
multinomial models appear to involve departments that are
either above or below department 25, but there is very little
probability mass that crosses over. In fact the models are
capturing the fact that departments numbered lower than 25 correspond to men's clothing and those above 25 correspond to women's clothing. We can see further evidence of this bimodality in the data itself in Figure 1, noting that some individuals do in fact cross over and purchase items
from "both sides" depending on the transaction.

Figure 3: Histograms indicating which products a particular individual purchased, from both the training data and the test data.
4.1 Individual-Specific Weights
We further assume that for each individual i there exists a set of K weights, and in the general case these weights are individual-specific, denoted by $\alpha_{i1}, \ldots, \alpha_{iK}$, where $\alpha_{ik}$ represents the probability that when individual i enters the store their transactions will be generated by component k. Or, in other words, the $\alpha_{ik}$'s govern individual i's propensity to engage in "shopping behavior k" (again, there are numerous possible generalizations, such as making the $\alpha_{ik}$'s have dependence over time, that we will not discuss here). The $\alpha_{ik}$'s are in effect the profile coefficients for individual i, relative to the K component models.
This idea of individual-specific weights (or profiles) is a key component of our proposed approach. The mixture component models $P_k$ are fixed and shared across all individuals, providing a mechanism for borrowing of strength across individual data. In contrast, the individual weights are in principle allowed to freely vary for each individual within a K-dimensional simplex. In effect the K weights can be thought of as basis coefficients that represent the location of
individual i within the space spanned by the K basis functions
(the component Pk multinomials). This approach is
quite similar in spirit to the recent probabilistic PCA work
of Hofmann (1999) on mixture models for text documents,
where he proposes a general mixture model framework that
represents documents as existing within a K-dimensional
simplex of multinomial component models.
Given the assumptions stated so far, the probability of a
particular transaction y ij , assuming that it was generated
by component k, can now be written as
$P_k(y_{ij}) = \prod_{c=1}^{C} \theta_{kc}^{\,n_{ijc}}, \qquad (1)$
where $\theta_{kc}$ is the probability that the cth item is purchased given component k and $n_{ijc}$ is the number of items of category c purchased by individual i during transaction ij.

Figure 4: Inferred "effective" profiles from global weights, smoothed histograms, and individual-specific weights for the individual whose data was shown in Figure 3.
When the component that generated transaction y ij is not
known, we then have a mixture model, where the weights
are specific to individual i:
$P(y_{ij}) = \sum_{k=1}^{K} \alpha_{ik} \prod_{c=1}^{C} \theta_{kc}^{\,n_{ijc}}.$
An important point in this context is that this probability
model is not a multinomial model, i.e., the mixture has richer
probabilistic semantics than a simple multinomial.
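A direct transcription of this individual-level mixture into code might look as follows (a sketch with hypothetical names; theta denotes the component multinomials and alpha_i the individual's weights, following the notation reconstructed above; for clarity the computation is not done in log space):

import numpy as np

def transaction_probability(counts, alpha_i, theta):
    """Probability of one transaction (a length-C vector of item counts) under
    the individual-level mixture: a weighted sum over the K component
    multinomials. theta has shape (K, C); alpha_i has length K.
    The multinomial coefficient is omitted, as in equation (1)."""
    counts = np.asarray(counts)
    component_probs = np.prod(theta ** counts, axis=1)   # shape (K,)
    return float(np.dot(alpha_i, component_probs))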
As an example of the application of these ideas, in Figure 3 the training data and test data for a particular individual are displayed. Note that there is some predictability from training to test data, although the test data contains (for example) a purchase in department 14 (which was not seen in the training data). Figure 4 plots effective profiles¹ for this particular individual as estimated by three different schemes in our modeling approach: (1) global weights that result in everyone being assigned the same "generic" profile, (2) a smoothed histogram (maximum a posteriori or MAP) technique that smooths each individual's training histogram with a population-based histogram, and (3) individual weights that are "tuned" to the individual's specific
behavior. (Details on each of these methods are provided
later in the paper).
¹We call these "effective profiles" since the predictive model under the mixture assumption is not a multinomial that can be plotted as a bar chart; however, we can approximate it and we are plotting one such approximation here.

One can see in Figure 4 that the global weight profile reflects broad population-based purchasing patterns and is
not representative of this individual. The smoothed histogram
is somewhat better, but the smoothing parameter
has "blurred" the individual's focus on departments below 25. The individual-weight profile appears to be a better representation of this individual's behavior and indeed it does provide the best predictive score on the test data in Figure 3. Note that the individual-weights profile in Figure 4 "borrows strength" from the purchases of other similar customers, i.e., it allows for small but non-zero probabilities of the individual making purchases in departments (such as 6 through ...) where he or she has not purchased there in the past. This particular individual's weights, the $\alpha_{ik}$'s, are (0.00, 0.47, 0.38, 0.00, 0.00, 0.15), corresponding to the 6 component models shown in Figure 2. The most weight is placed
given the individual's training data.
4.2 Learning the Model Parameters
The unknown parameters in our model consist of both the parameters of the K multinomials, \theta_{kc}, 1 \le k \le K, 1 \le c \le C, and the individual-specific profile weights \alpha_{ik}, 1 \le i \le N, 1 \le k \le K. In learning these parameters from the data we have two main choices: either we treat all of these parameters as unknown and use EM to estimate them, or we can first learn the K multinomials using EM, and then determine weights for the N individuals relative to these previously-learned basis functions. The disadvantage of the first approach is that it clearly may overfit the data, since unless we have large numbers of transactions and items per customer, we will end up with as many parameters as observations in the model. In the Appendix we outline three different techniques for estimating individual-specific profile weights (including "full EM"), and later in the experimental results section we investigate the relative performance of each. We also include a generic baseline version of the model in our experiments for comparison, namely Global weights, where each individual's weights \alpha_i are set to a common set of weights \alpha, where \alpha is the set of K weights returned by EM from learning a mixture of K multinomials. Intuitively we expect that these global weights will not be tuned particularly well to any individual's behavior, and thus should not perform as well as individual weights. Nonetheless, the parsimony of the global weight model may work in its favor when making out-of-sample predictions, so it is not guaranteed that individual-weight models will beat it, particularly given the relative sparsity of data per individual.
4.3 Individual Histogram-based Models
We also include for baseline comparison a very simple smoothed histogram model, defined as a convex combination of the maximum likelihood (relative frequency or histogram) estimate of each individual's multinomial, estimated directly from that individual's past purchases, and a population multinomial estimated from the pooled population data. The idea is to smooth each individual's histogram to avoid having zero probability for items that were not purchased in the past. This loosely corresponds to a maximum a posteriori (MAP) strategy. We tuned the relative weighting for each term to maximize the overall logp score on the test data set, which is in effect "cheating" for this method. A limitation of this approach is that the same smoothing is being applied to each individual, irrespective of how much data we have available for them, and this will probably limit the effectiveness of the model. A more sophisticated baseline histogram model can be obtained using a fully hierarchical Bayes approach. In fact all of the methods described in this paper can be formulated within a general hierarchical Bayesian framework; due to space limitations we omit the details of this formulation here.
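The baseline is simple enough to state in a few lines. The following sketch (ours, not the authors' implementation; the mixing weight lam stands for the tuned smoothing parameter mentioned above) builds the smoothed histogram for one individual:

    import numpy as np

    def smoothed_histogram(individual_counts, population_counts, lam=0.5):
        """Convex combination of an individual's relative-frequency histogram
        and the pooled population histogram (the MAP-style baseline)."""
        p_ind = individual_counts / max(individual_counts.sum(), 1)
        p_pop = population_counts / population_counts.sum()
        return lam * p_ind + (1.0 - lam) * p_pop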
5. EXPERIMENTAL RESULTS
5.1 Retail Transaction Data
To evaluate our mixture models we used a real-world transaction data set consisting of approximately 200,000 separate transactions from approximately 50,000 different individuals. Individuals were identified and matched to a transaction (a basket of items) at the point-of-sale using a card system. The data consists of all identifiable transactions collected at a number of retail stores (all part of the same chain). We analyze the transactions here at the store department level (50 departments, or categories of items). The names of individual departments have been replaced by numbers in this paper due to the proprietary nature of the data.
5.2 Experimental Setup
We separate our data into two time periods (all transactions are time-stamped), with approximately 70% of the data being in the first time period (the training data) and the remainder in the test period. We train our mixture and weight models on the first period and evaluate our models in terms of their ability to predict transactions that occur in the subsequent out-of-sample test period. In the results reported here we limit attention to individuals who have at least 10 transactions over the entire time-span of the data, a somewhat arbitrary choice, but intended to focus on individuals for whom we have some hope of being able to extract some predictive power.
The training data contains data on 4339 individuals, 58,866
transactions, and 164,000 items purchased. The test data
consists of 4040 individuals, 25,292 transactions, and 69,103
items purchased. Not all individuals in the test data set
appear in the training data set (and vice-versa): individuals
in the test data set with no training data are assigned a
global population model for scoring purposes.
To evaluate the predictive power of each model, we calculate the log-probability ("logp scores") of the transactions as predicted by each model. Higher logp scores mean that the model assigned higher probability to events that actually occurred. The log probability of a specific transaction y_{ij} from individual i (i.e., a set of counts of items purchased in one basket by individual i), under mixture model M, is defined as

\log P(y_{ij} \mid M) = \log \sum_{k=1}^{K} \alpha_{ik} \prod_{c=1}^{C} \theta_{kc}^{n_{ijc}}

Note that the mean negative logp score over a set of transactions, divided by the total number of items, can be interpreted as a predictive entropy term in bits. The lower this entropy term, the less uncertainty in our predictions (bounded below by zero of course, corresponding to zero uncertainty).
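For concreteness, a small sketch (ours, with an assumed data layout) of this evaluation: the mean negative log probability per item, converted to bits:

    import numpy as np

    def predictive_entropy(test_transactions, alpha, theta):
        """test_transactions: list of (individual index i, count vector) pairs;
        alpha: N x K individual weights; theta: K x C component multinomials."""
        log_theta = np.log(theta)
        total_logp, total_items = 0.0, 0
        for i, counts in test_transactions:
            log_comp = counts @ log_theta.T                 # per-component log prob
            m = log_comp.max()
            total_logp += m + np.log(np.sum(alpha[i] * np.exp(log_comp - m)))
            total_items += int(counts.sum())
        return -total_logp / (total_items * np.log(2))      # bits per item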
Figure 5: Predictive entropy (negative log-likelihood per item) on out-of-sample transactions for the three different individual weight techniques, as a function of K, the number of mixture components (individuals with 10 or more transactions, time split).
5.3 Performance of Different Individual Profile
Weight Models
Figure 5 shows the predictive entropy scores for each of the three different individual weighting techniques (described in the Appendix), as a function of K, the number of components in the mixture models. From the plot, we see that in general mixtures (K > 1) perform better than single multinomial models, with on the order of a 20% reduction in predictive entropy. Furthermore, the simplest weight method ("wts1") is more accurate than the other two methods, and the worst-performing of the weight methods is the method that allows the individual profile weights to be learned as parameters directly by EM. While this method gave the highest likelihood on in-sample data (plots not shown) it clearly overfitted, and led to worse performance out-of-sample. The wts1 method performed best in a variety of other experiments we conducted, and so, for the remainder of this paper, we will focus on this particular weighting technique and refer to the results obtained with this method simply as "Individual weights."
5.4 Comparison of Individual and Global Weight
Models
Figure 6 compares the out-of-sample predictive entropy scores as a function of the number of mixture components K for the Individual weights, the Global weights (where all individuals are assigned the same marginal mixture weights), and the MAP histogram baseline method (for reference). The MAP method is the solid line: it does better than the default K = 1 models (leftmost points in the plot) because it is somewhat tuned to individual behavior, but the mixture models quickly overtake it as K increases. The performance of both Individual and Global weight mixtures steadily improves as K increases up to a point, and then somewhat flattens out, providing about a 15% reduction in predictive uncertainty over the simple MAP approach. The
Figure 6: Plot of the negative log probability scores per item (predictive entropy) on out-of-sample transactions, for Global and Individual weights as a function of the number of mixture components K. Also shown for reference is the score for the non-mixture MAP histogram model.
Individual weights are systematically better than the Global weights, with a roughly 3% improvement in predictive accuracy. Figure 7 shows a more detailed comparison of the difference between Individual and Global weight models. It contains a scatter plot of the out-of-sample total logp scores for specific individuals, for a fixed value of K. The plot shows that the Global weight model is systematically worse than the Individual weights model (i.e., most points are above the bisecting line). For individuals with the lowest likelihood (lower left of the plot) the Individual weight model is consistently better: typically the individuals with lower total likelihood are those with more transactions and items, so the Individual profile weights model is systematically better on individuals for whom we have more data (i.e., who shop more).
Figure 7 contains quite a bit of overplotting in the top left corner. Figure 8 shows an enlargement of a part of this region of the plot. At this level of detail we can now clearly see that at relatively low likelihood values the Individual profile models are again systematically better, i.e., for most individuals we get better predictions with the Individual weights than with the Global weights. Figure 9 shows a similar focused plot where now we are comparing the scores from Individual weights (y-axis again) with those from the MAP method (x-axis). The story is again quite similar, with the Individual weights systematically providing better predictions. We note, however, that in all of the scatter plots there are a relatively small number of individuals who get better predictions from the smoother models. We conjecture that this may be due to lack of sufficient regularization in the Individual profile method, e.g., these may be individuals who buy a product they have never purchased before, and the Individual weights model has in effect
Figure 7: Scatter plot of the log probability scores for each individual on out-of-sample transactions, plotting individual weights (y-axis) versus global weights (x-axis).

Figure 8: A close-up of a portion of the plot in Figure 7 (logP, individual weights, versus logP, global weights).

Figure 9: Scatter plot of the log probability scores for each individual on out-of-sample transactions, plotting log probability scores for individual weights versus log probability scores for the MAP model.
overcommitted to the historical data, whereas the other models hedge their bets by placing probability mass more smoothly over all 50 departments.
5.5 Scalability Experiments
We conducted some simple experiments to determine how
the methodology scales up computationally as a function of
model complexity. We recorded CPU time for the EM algorithm
(for both Global and Individual weights) as a function
of the number of components K in the mixture model.
The experiments were carried out on a Pentium III Xeon,
500Mhz with 512MB RAM (no paging).
For a given number of iterations, the EM algorithm for mixtures of multinomials is linear in both the number of components K and the total number of items n. We were interested to see if increasing the model complexity might cause the algorithm to take more iterations to converge, thus causing computation time to increase at a rate faster than linearly with K. Figure 10 shows that this does not happen in practice. It is clear that the time taken to train the models scales roughly linearly with model complexity. Note also that there is effectively no difference in computation time between the Global and Individual weight methods, i.e., the extra computation to compute the Individual weights is negligible.
6. APPLICATIONS: RANKING, OUTLIER
DETECTION, AND VISUALIZATION
There are several direct applications of the model-based approach that we only briefly sketch here due to space limitations. In particular, we can use the scores to rank the most predictable customers. Customers with relatively high logp scores per item are the most predictable, and this
Figure 10: Plot of the CPU time to fit the global and individual weight mixture models, as a function of model complexity (number of components K), with a linear fit.
Figure 11: An example of a customer that is automatically detected as having unusual purchasing patterns: population patterns (top), training purchases for customer 2084 (middle), test period purchases (bottom).
information may be useful for marketing purposes. The profiles
themselves can be used as the basis for accurate personaliza-
tion. Forecasts of future purchasing behavior can be made
on a per-customer basis.
We can also use the logp scores to identify interesting and
unusual purchasing behavior. Individuals with low per item
logp score tend to have very unusual purchases. For exam-
ple, one of the lowest ranked customers in terms of this score
(in the test period) is customer 2084. He or she made several
purchases in the test period in department 45: this is
interesting since there were almost zero purchases by any individual
in this department in the training data (Figure 11).
This may well indicate some unusual behavior with this individual
(for example the data may be unreliable and the test
period data may not really belong to the same customer).
Clustering and segmentation of the transaction data may
also be performed in the lower dimensional weight-space,
which may lead to more stable estimation than performing
clustering directly in the original "item-space".
We have also developed an interactive model-based transaction data visualization and exploration tool that uses the mixture models described in this paper as a basic framework for exploring and predicting individual patterns in transaction data. The tool allows a user to visualize the raw transaction data and to interactively explore various aspects of both an individual's past behavior and predicted future behavior. The user can then analyze the data using a number of different models, including the mixture models described in this paper. The resulting components can be displayed and compared, and simple operations such as sorting and ranking of the multinomial probabilities are possible. Finally, profiles in the form of expected relative purchasing behavior for individual users can be generated and visualized. The tool also allows for interactive simulation of a user profile. This allows a data analyst to add hypothetical items to a user's transaction record (e.g., adding several simulated purchases in the shoe department). The tool then updates a user's profile in real-time to show how this affects the user's probability of purchasing other items. This type of model-based interactive exploration of large transaction data sets can be viewed as a first step in allowing a data analyst to gain insight and understanding from a large transaction data set, particularly since such data sets are quite difficult to capture and visualize using conventional multivariate graphical methods.
7. RELATED WORK
The idea of using mixture models as a flexible framework for modeling discrete and categorical data has been known for many years in the statistical literature, particularly in the social sciences under the rubric of latent class analysis (Lazarsfeld and Henry, 1968; Bartholomew and Knott, 1999). Typically these methods are applied to relatively small low-dimensional data sets. More recently there has been a resurgence of interest in mixtures of multinomials and mixtures of conditionally independent Bernoulli models for modeling high-dimensional document-term data in text analysis (e.g., McCallum, 1999; Hofmann, 1999).
In the marketing literature there have also been numerous
relatively sophisticated applications of mixture models to
retail data (see Wedel and Kamakura, 1998, for a review).
Typically, however, the focus here is on the problem of brand
choice, where one develops individual and population-level
models for consumer behavior in terms of choosing between
a relatively small number of brands (e.g., 10) for a specific product (e.g., coffee).
The work of Breese, Heckerman and Kadie (1998) and Heckerman
et al. (2000) on probabilistic model-based collaborative filtering is also similar in spirit to the approach described in this paper, except that we focus on explicitly treating the problem of individual profiles (i.e., we have explicit
models for each individual in our framework).
Our work can be viewed as being an extension of this broad
family of probabilistic modeling ideas to the specic case of
transaction data, where we deal directly with the problem
of making inferences about specic individuals and handling
multiple transactions per individual.
Other approaches have also been proposed in the data mining literature for clustering and exploratory analysis of transaction data, but typically in a non-probabilistic framework (e.g., Strehl and Ghosh, 2000). Transaction data has always received considerable attention from data mining researchers, going back to the original work of Agrawal, Imielinski, and Swami (1993) on association rules. Association rules present a very different approach to transaction data analysis, searching for patterns that indicate broad correlations (associations) between particular sets of items. Our work here complements that of association rules in that we develop an explicit probabilistic model for the full joint distribution, rather than sets of "disconnected" joint and conditional probabilities (one way to think of association rules).
Indeed, for forecasting and prediction it can be argued that
the model-based approach (such as we propose here) is a
more systematic framework: we can in principle integrate
time-dependent factors (e.g., seasonality, non-stationarity),
covariate measurements on customers (e.g., knowledge of
the customer's age, educational-level) and other such infor-
mation, all in a relatively systematic fashion. We note also
that association rule algorithms depend fairly critically on
the data being relatively sparse. In contrast, the model-based
approach proposed here should be relatively robust
with respect to the degree of sparseness of the data.
On the other hand, it should be pointed out that in this paper
we have only demonstrated the utility of the approach on
a relatively low-dimensional problem (i.e., 50 departments).
As we descend the product hierarchy from departments, to
classes of items, all the way down to specic products (the
so-called "SKU" level) there are 50,000 different items in the
retail transaction database used in this paper. It remains to
be seen whether the type of probabilistic model proposed
in this paper can computationally be scaled to this level of
granularity. We believe that the mixture models proposed
here can indeed be extended to model the full product tree,
all the way down to the leaves. The sparsity of the data,
and the hierarchical nature of the problem, tends to suggest
that hierarchical Bayesian approaches will play a natural
role here. We leave further discussion of this topic to future
work.
8. CONCLUSIONS
The research described in this paper can be viewed as a first
step in the direction of probabilistic modeling of transaction
data. Among the numerous extensions and generalizations
to explore are:
- The integration of temporal aspects of behavior, building from simple stationary Poisson models with individual-specific rates, extending up to seasonal and non-stationary effects. Mathematically these temporal aspects can be included in the model rather easily. For example, traditionally in modeling consumer behavior, to a first approximation, one models the temporal rate process (how often an individual shops) independently from the choice model (what an individual purchases), e.g., see Wedel and Kamakura (1998).
- "Factor-style" mixture models that allow a single transaction to be generated by multiple components, e.g., a customer may buy shirts/ties and camping/outdoor clothes in the same transaction.
- Modeling of product and category hierarchies, from the department level down to the SKU level.
To briefly summarize, we have proposed a general probabilistic framework for modeling transaction data and illustrated the feasibility, utility, and accuracy of the approach on a real-world transaction data set. The experimental results indicate that the proposed probabilistic mixture model framework can be a potentially powerful tool for exploration, visualization, profiling, and prediction of transaction data.
A. PARAMETER ESTIMATION
A.1 Learning of Individual Weights
Consider the log likelihood function for a set of data points D where mixture weights are individual-specific, letting \Theta denote all the mixture model parameters and \theta_k denote the mixture component (multinomial) parameters:

\log P(D \mid \Theta) = \sum_{i=1}^{N} \sum_{j=1}^{J_i} \log \sum_{k=1}^{K} \alpha_{ik} P(y_{ij} \mid \theta_k)

where J_i is the number of transactions for individual i. This is very similar to the standard mixture model but uses individual-specific weights \alpha_{ik}.

The learning problem is to optimize this log-likelihood with respect to the parameters, which consist of the mixture component parameters \theta_k and the individualized weights \alpha_{ik}, subject to a set of N constraints \sum_{k=1}^{K} \alpha_{ik} = 1. In addition, we can define a Global weights model that has an additional constraint that all the individual-specific weights are equal to a global set of weights \alpha: \alpha_{ik} = \alpha_k for all i.

This particular log-likelihood can be optimized using the Expectation-Maximization (EM) algorithm, where the Q function becomes:

Q = \sum_{i=1}^{N} \sum_{j=1}^{J_i} \sum_{k=1}^{K} P_{ijk} \left[ \log \alpha_{ik} + \log P(y_{ij} \mid \theta_k) \right]

In this equation P_{ijk} represents the class-posterior of transaction y_{ij} evaluated using the "old" set of parameters, such that \sum_k P_{ijk} = 1. The Q function is very similar to the Q function of the standard mixture model; the only difference is that individualized weights are used in place of global weights and additional constraints exist on each set of individualized weights.

Optimizing the Q function with respect to the mixture component parameters \theta_k does not depend on the weights \alpha_{ik} (it does depend on the "old" weights through P_{ijk}, though). Therefore this optimization is unchanged. Optimization with respect to \alpha_{ik} with a set of Lagrange multipliers leads to the following intuitive update equation:

\alpha_{ik} = \frac{1}{J_i} \sum_{j=1}^{J_i} P_{ijk}

which reduces to the standard equation:

\alpha_{k} = \frac{\sum_{i=1}^{N} \sum_{j=1}^{J_i} P_{ijk}}{\sum_{i=1}^{N} J_i}

when global weights are used.
Since the models with individualized weights have a very large number of parameters and may overfit the data, we consider several different methodologies for calculating weights:
1. Global weights: this is the standard mixture model above.
2. wts1: in this model the weights are constrained to be equal during EM, and a global set of weights together with mixture component parameters are learned. After the components are learned, the individual-specific weights are learned by a single EM iteration using the Q function with individual-specific weights.
3. wts2: this model is similar to the wts1 model, but instead of a single iteration it performs a second EM algorithm on the individualized weights after the mixture component parameters are learned. Effectively, this model consists of two consecutive EM algorithms: one to learn mixture component parameters and another to learn individualized weights.
4. wts3: in this model a full EM algorithm is used, where both the mixture component parameters and individualized weights are updated at each iteration.
Each of the methods as described represents a valid EM algorithm, since the update equations are always derived from
the appropriate Q function. Therefore, the log-likelihood is
guaranteed to increase at each iteration and consequently,
the algorithms are guaranteed to converge (within the standard
EM limitations).
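As an illustration of the update equations above, the following sketch (our own; the data layout, variable names, and small smoothing constants are assumptions, and initialization and convergence checks are omitted) implements one EM iteration for the individual-weights model, with an option that reduces to the global-weights update:

    import numpy as np

    def em_step(data, alpha, theta, global_weights=False):
        """One EM iteration for a mixture of multinomials with
        individual-specific weights.

        data  : data[i] is a list of length-C count vectors for individual i
        alpha : N x K float array of weights alpha_ik
        theta : K x C array of component multinomials theta_kc
        """
        K, C = theta.shape
        log_theta = np.log(theta)
        theta_acc = np.zeros((K, C))
        new_alpha = np.zeros_like(alpha)
        resp_sum = np.zeros(K)
        n_trans = 0
        for i, transactions in enumerate(data):
            for counts in transactions:
                # E-step: class posterior P_ijk, computed in log space
                log_p = np.log(np.maximum(alpha[i], 1e-12)) + counts @ log_theta.T
                p = np.exp(log_p - log_p.max())
                p /= p.sum()
                # accumulate sufficient statistics for the M-step
                theta_acc += np.outer(p, counts)
                new_alpha[i] += p
                resp_sum += p
                n_trans += 1
            if transactions:
                new_alpha[i] /= len(transactions)   # alpha_ik = (1/J_i) sum_j P_ijk
        theta_new = theta_acc + 1e-12               # tiny smoothing to avoid zeros
        theta_new /= theta_new.sum(axis=1, keepdims=True)
        if global_weights:
            new_alpha[:] = resp_sum / n_trans       # alpha_k = average over all ij
        return new_alpha, theta_new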
A.2 General Parameters for Running the EM
Algorithm
The EM algorithm was always started from 10 random initial starting points for each run, and the highest likelihood solution was then chosen. The parameters were initialized by sampling from a Dirichlet distribution using single-cluster parameters as a hyperprior and an equivalent sample size of 100. The EM iterations were halted whenever the relative change in log likelihood was less than 0.01% or after 100 iterations. In practice, for the results in this paper, the algorithm typically converged to the 0.01% criterion within 20 to 40 iterations and never exceeded 70 iterations.
A.3 Acknowledgements
The research described in this paper was supported in part
by NSF CAREER award IRI-9703120. The work of IC was
supported by a Microsoft Graduate Research Fellowship.
A.4
--R
Mining association rules between sets of items in large databases
A data mining framework for optimal product selection in retail supermarket data: the generalized PROFSET model
Latent Variable Models and Factor Analysis
Dependency networks for inference
Journal of Machine Learning Research
Personalization of supermarket product recommendations
Latent Structure Analysis
Market Segmentation: Conceptual and Methodological Foundations
--TR
Mining association rules between sets of items in large databases
Probabilistic latent semantic indexing
A data mining framework for optimal product selection in retail supermarket data
Personalization of Supermarket Product Recommendations
Dependency networks for inference, collaborative filtering, and data visualization
--CTR
Chidanand Apte , Bing Liu , Edwin P. D. Pednault , Padhraic Smyth, Business applications of data mining, Communications of the ACM, v.45 n.8, August 2002
Ella Bingham , Heikki Mannila , Jouni K. Seppänen, Topics in 0--1 data, Proceedings of the eighth ACM SIGKDD international conference on Knowledge discovery and data mining, July 23-26, 2002, Edmonton, Alberta, Canada
Chun-Nan Hsu , Hao-Hsiang Chung , Han-Shen Huang, Mining Skewed and Sparse Transaction Data for Personalized Shopping Recommendation, Machine Learning, v.57 n.1-2, p.35-59, October-November 2004 | EM algorithm;mixture models;profiles;transaction data |
502525 | Mining the network value of customers. | One of the major applications of data mining is in helping companies determine which potential customers to market to. If the expected profit from a customer is greater than the cost of marketing to her, the marketing action for that customer is executed. So far, work in this area has considered only the intrinsic value of the customer (i.e, the expected profit from sales to her). We propose to model also the customer's network value: the expected profit from sales to other customers she may influence to buy, the customers those may influence, and so on recursively. Instead of viewing a market as a set of independent entities, we view it as a social network and model it as a Markov random field. We show the advantages of this approach using a social network mined from a collaborative filtering database. Marketing that exploits the network value of customers---also known as viral marketing---can be extremely effective, but is still a black art. Our work can be viewed as a step towards providing a more solid foundation for it, taking advantage of the availability of large relevant databases. | INTRODUCTION
Direct marketing is one of the major applications of KDD.
In contrast to mass marketing, where a product is promoted
indiscriminately to all potential customers, direct marketing
attempts to first select the customers likely to be profitable,
and market only to those [19]. Data mining plays a key role
in this process, by allowing the construction of models that
predict a customer's response given her past buying behavior
and any available demographic information [29]. When successful, this approach can significantly increase profits [34].
One basic limitation of it is that it treats each customer
as making a buying decision independently of all other cus-
tomers. In reality, a person's decision to buy a product is
often strongly influenced by her friends, acquaintances, business partners, etc. Marketing based on such word-of-mouth networks can be much more cost-effective than the more conventional variety, because it leverages the customers themselves to carry out most of the promotional effort. A classic
example of this is the Hotmail free email service, which grew
from zero to 12 million users in 18 months on a minuscule
advertising budget, thanks to the inclusion of a promotional
message with the service's URL in every email sent using
it [23]. Competitors using conventional marketing fared far
less well. This type of marketing, dubbed viral marketing
because of its similarity to the spread of an epidemic, is now
used by a growing number of companies, particularly in the
Internet sector. More generally, network effects (known in
the economics literature as network externalities) are of critical
importance in many industries, including notably those
associated with information goods (e.g., software, media,
telecommunications, etc.) [38]. A technically inferior product
can often prevail in the marketplace if it better leverages
the network of users (for example, VHS prevailed over Beta
in the VCR market).
Ignoring network effects when deciding which customers
to market to can lead to severely suboptimal decisions. In
addition to the intrinsic value that derives from the purchases
she will make, a customer effectively has a network value that derives from her influence on other customers. A
customer whose intrinsic value is lower than the cost of marketing
may in fact be worth marketing to when her network
value is considered. Conversely, marketing to a profitable customer may be redundant if network effects already make
her very likely to buy. However, quantifying the network
value of a customer is at first sight an extremely difficult undertaking, and to our knowledge has never been attempted.
A customer's network value depends not only on herself,
but potentially on the configuration and state of the entire network. As a result, marketing in the presence of strong network effects is often a hit-and-miss affair. Many startup
companies invest heavily in customer acquisition, on the basis
that this is necessary to \seed" the network, only to face
bankruptcy when the desired network effects fail to materialize. On the other hand, some companies (like Hotmail and
the ICQ instant messenger service) are much more successful
than expected. A sounder basis for action in network-driven
markets would thus have the potential to greatly reduce the
risk of companies operating in them.
We believe that, for many of these markets, the growth
of the Internet has led to the availability of a wealth of
data from which the necessary network information can be
mined. In this paper we propose a general framework for
doing this, and for using the results to optimize the choice
of which customers to market to, as well as estimating what
customer acquisition cost is justified for each. Our solution
is based on modeling social networks as Markov random fields, where each customer's probability of buying is a function of both the intrinsic desirability of the product for the customer and the influence of other customers. We then focus on collaborative filtering databases as an instance of a data source for mining networks of influence from. We apply
our framework to the domain of marketing motion pictures,
using the publicly-available EachMovie database of 2.8 million
movie ratings, and demonstrate its advantages relative
to traditional direct marketing. The paper concludes with a
discussion of related work and a summary of contributions
and future research directions.
2. MODELING MARKETS AS SOCIAL NETWORKS
Consider a set of n potential customers, and let X i be a
Boolean variable that takes the value 1 if customer i buys the
product being marketed, and 0 otherwise. In what follows
we will often slightly abuse language by taking X_i to "be" the ith customer. Let the neighbors of X_i be the customers which directly influence X_i: N_i \subseteq \{X_1, \ldots, X_n\} \setminus \{X_i\}. In other words, X_i is independent of the remaining customers given N_i. Let X^k (X^u) denote the customers whose value (i.e., whether they have bought the product) is known (unknown), and let N_i^u = N_i \cap X^u be the unknown neighbors of X_i. Assume the product is described by a set of attributes Y = \{Y_1, \ldots, Y_m\}. Let M_i be a variable representing the marketing action that is taken for customer i. For example, M_i could be a Boolean variable, with M_i = 1 if the customer is offered a given discount, and M_i = 0 otherwise. Alternately, M_i could be a continuous variable indicating the size of the discount offered, or a nominal variable indicating which of several possible actions is taken. Let M = \{M_1, \ldots, M_n\}.
Then, for all X_i \in X^u,

P(X_i \mid X^k, Y, M) = \sum_{C(N_i^u)} P(X_i, N_i^u \mid X^k, Y, M) = \sum_{C(N_i^u)} P(X_i \mid N_i^u, X^k, Y, M) \, P(N_i^u \mid X^k, Y, M) \qquad (1)

where C(N_i^u) is the set of all possible configurations of the unknown neighbors of X_i (i.e., the set of all possible 2^{|N_i^u|} assignments of 0 and 1 to them). Following Pelkowitz [33], we approximate P(N_i^u \mid X^k, Y, M) by its maximum entropy estimate given the marginals P(X_j \mid X^k, Y, M), X_j \in N_i^u. This yields

P(X_i \mid X^k, Y, M) = \sum_{C(N_i^u)} P(X_i \mid N_i, Y, M) \prod_{X_j \in N_i^u} P(X_j = x_j \mid X^k, Y, M) \qquad (2)

where x_j is the value of X_j in the corresponding configuration. (The same result can be obtained by assuming that the X_j are independent given X^k, Y and M.)
The set of variables X^u, with joint probability conditioned on X^k, Y and M described by Equation 2, is an instance of a Markov random field [2, 25, 7]. Because Equation 2 expresses the probabilities P(X_i \mid X^k, Y, M) as a function of themselves, it can be applied iteratively to find them, starting from a suitable initial assignment. This procedure is known as relaxation labeling, and is guaranteed to converge to locally consistent values as long as the initial assignment is sufficiently close to them [33]. A natural choice for initialization is to use the network-less probabilities P_0(X_i \mid Y, M). Notice that the number of terms in Equation 2 is exponential in the number of unknown neighbors of X_i. If this number is small (e.g., 5), this should not be a problem; otherwise, an approximate solution is necessary. A standard method for this purpose is Gibbs sampling [16]. An alternative based on an efficient k-shortest-path algorithm is proposed in Chakrabarti et al. [6].
Given N_i and Y, X_i should be independent of the marketing actions for other customers. Assuming a naive Bayes model for X_i as a function of N_i, Y and M_i,

P(X_i \mid N_i, Y, M_i) = \frac{1}{Z} \, P(X_i \mid N_i) \, P(M_i \mid X_i) \prod_{k=1}^{m} P(Y_k \mid X_i) \qquad (3)

where Z is a normalization constant. The corresponding network-less probabilities are

P_0(X_i \mid Y, M_i) = \frac{1}{Z_0} \, P(X_i) \, P(M_i \mid X_i) \prod_{k=1}^{m} P(Y_k \mid X_i)

Given Equation 3, in order to compute Equation 2 we need to know only the following probabilities, since all terms reduce to them: P(X_i), P(X_i \mid N_i), P(M_i \mid X_i), and P(Y_k \mid X_i), for all i and k. With the exception
of P(X_i \mid N_i), all of these are easily obtained in one pass through the data by counting (assuming the Y_k are discrete or have been pre-discretized; otherwise a univariate model can be fit for each numeric Y_k). The form of P(X_i \mid N_i) depends on the mechanism by which customers influence each other, and will vary from application to application. In the next section we focus on the particular case where X is the set of users of a collaborative filtering system.
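To make the computation concrete, the following rough sketch (our own illustration, not the paper's implementation; all names are hypothetical, and the local model of Equation 3 is passed in as a callback) evaluates Equation 2 for one customer by enumerating the configurations of the unknown neighbors and weighting each by the product of those neighbors' current marginals:

    from itertools import product

    def equation2_marginal(i, neighbors, known, marginals, y, m_i, local_prob):
        """P(X_i = 1 | X^k, Y, M) under the maximum-entropy approximation.

        neighbors[i] : list of i's neighbors
        known        : dict j -> 0/1 for customers whose value is known (X^k)
        marginals    : dict j -> current estimate of P(X_j = 1 | X^k, Y, M)
        local_prob   : callable returning P(X_i = 1 | N_i = vals, Y, M_i),
                       i.e., the naive Bayes local model of Equation 3
        """
        unknown = [j for j in neighbors[i] if j not in known]
        total = 0.0
        for config in product((0, 1), repeat=len(unknown)):
            # probability of this configuration of the unknown neighbors
            weight = 1.0
            for j, v in zip(unknown, config):
                weight *= marginals[j] if v == 1 else 1.0 - marginals[j]
            # assemble the full neighbor assignment (known values + hypothesis)
            vals = {j: known[j] for j in neighbors[i] if j in known}
            vals.update(zip(unknown, config))
            total += weight * local_prob(i, vals, y, m_i)
        return total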
For simplicity, assume that M is a Boolean vector (i.e., only one type of marketing action is being considered, such as offering the customer a given discount). Let c be the cost of marketing to a customer (assumed constant), r_0 be the revenue from selling the product to the customer if no marketing action is performed, and r_1 be the revenue if marketing is performed. r_0 and r_1 will be the same unless the marketing action includes offering a discount. Let f_i^1(M) be the result of setting M_i to 1 and leaving the rest of M unchanged, and similarly for f_i^0(M). The expected lift in profit from marketing to customer i in isolation (i.e., ignoring her effect on other customers) is then [8]

ELP_i(X^k, Y, M) = r_1 P_0(X_i = 1 \mid Y, f_i^1(M)) - r_0 P_0(X_i = 1 \mid Y, f_i^0(M)) - c \qquad (4)

Let M_0 be the null vector (all zeros). The global lift in profit that results from a particular choice M of customers to market to is then

ELP(X^k, Y, M) = \sum_{i=1}^{n} \left[ r_i P(X_i = 1 \mid X^k, Y, M) - r_0 P(X_i = 1 \mid X^k, Y, M_0) \right] - |M| c \qquad (5)

where r_i = r_1 if M_i = 1, r_i = r_0 if M_i = 0, and |M| is the number of 1's in M. Our goal is to find the assignment of values to M that maximizes ELP. In general, finding the optimal M requires trying all possible combinations of assignments to its components. Because this is intractable, we propose using one of the following approximate procedures
instead:

Single pass: For each i, set M_i = 1 if ELP_i(X^k, Y, f_i^1(M_0)) > 0, and set M_i = 0 otherwise.

Greedy search: Set M = M_0. Loop through the M_i's, setting each M_i to 1 if ELP(X^k, Y, f_i^1(M)) > ELP(X^k, Y, M). Continue looping until there are no changes in a complete scan of the M_i's. The key difference between this method and the previous one is that here later changes to the M_i's are evaluated with earlier changes to the M_i's already in place, while in the previous method all changes are evaluated with respect to M_0.

Hill-climbing search: Set M = M_0. Set M_{i_1} = 1, where i_1 = \arg\max_i \{ELP(X^k, Y, f_i^1(M))\}. Now set M_{i_2} = 1, where i_2 = \arg\max_i \{ELP(X^k, Y, f_i^1(f_{i_1}^1(M)))\}. Repeat until there is no i for which setting M_i = 1 increases ELP.

Each method is computationally more expensive than the previous one, but potentially leads to a better solution for M (i.e., produces a higher ELP).
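As an illustration, here is a minimal sketch of the greedy search (ours; expected_lift is an assumed callback that returns ELP(X^k, Y, M) for a candidate marketing vector, e.g., by running the inference of Equation 2):

    def greedy_marketing_plan(n_customers, expected_lift):
        """Repeatedly scan the customers, switching M_i on whenever doing so
        increases the global expected lift in profit."""
        M = [0] * n_customers
        best = expected_lift(M)
        changed = True
        while changed:
            changed = False
            for i in range(n_customers):
                if M[i] == 1:
                    continue
                M[i] = 1
                lift = expected_lift(M)
                if lift > best:
                    best = lift          # keep the change
                    changed = True
                else:
                    M[i] = 0             # revert
        return M, best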
The intrinsic value of a customer is given by Equation 4. The total value of a customer (intrinsic plus network) is the ELP obtained by marketing to her: ELP(X^k, Y, f_i^1(M)) - ELP(X^k, Y, f_i^0(M)). The customer's network value is the difference between her total and intrinsic values. Notice that, in general, this value will depend on which other customers are being marketed to, and which others have already bought the product.

Suppose now that M_i is a continuous variable, that we can choose to incur different marketing costs for different customers, and that there is a known relationship between the amount spent marketing to a customer and its effect on her. In other words, suppose that we can increase a customer's probability of buying by increasing the amount spent in marketing to her, and that we can estimate how much needs to be spent to produce a given increase in buying probability. The optimal customer acquisition cost for customer i is then the value of c_i that maximizes her total value, with |M|c replaced by \sum_i c_i in Equation 5.
3. MINING SOCIAL NETWORKS FROM
COLLABORATIVE FILTERING
DATABASES
Arguably, a decade ago it would have been difficult to make practical use of a model like Equation 2, because of the lack of data to estimate the influence probabilities P(X_i \mid N_i). Fortunately, the explosion of the Internet has drastically changed this. People influence each other online (and leave a record of it) through postings and responses to newsgroups, review and knowledge-sharing sites like epinions.com, chat rooms and IRC, online game playing and MUDs, peer-to-peer networks, email, interlinking of Web pages, etc. In general, any form of online community is a potentially rich source of data for mining social networks from. (Of course, mining these sources is subject to the usual privacy concerns; but many sources are public information.) In this paper we will concentrate on a particularly simple and potentially very effective data source: the collaborative filtering systems widely used by e-commerce sites (e.g., amazon.com) to recommend products to consumers.
In a collaborative filtering system, users rate a set of items (e.g., movies, books, newsgroup postings, Web pages), and these ratings are then used to recommend other items the user might be interested in. The ratings may be implicit (e.g., the user did or did not buy the book) or explicit (e.g., the user gives a rating of zero to five stars to the book, depending on how much she liked it). Many algorithms have been proposed for choosing which items to recommend given the incomplete matrix of ratings (see, for example, Breese et al. [3]). The most widely used method, and the one that we will assume here, is the one proposed in GroupLens, the project that originally introduced quantitative collaborative filtering [35]. The basic idea in this method is to predict a user's rating of an item as a weighted average of the ratings given by similar users, and then recommend items with high predicted ratings. The similarity of a pair of users (i, j) is measured using the Pearson correlation coefficient:

W_{ij} = \frac{\sum_k (R_{ik} - \bar{R}_i)(R_{jk} - \bar{R}_j)}{\sqrt{\sum_k (R_{ik} - \bar{R}_i)^2 \; \sum_k (R_{jk} - \bar{R}_j)^2}} \qquad (6)

where R_{ik} is user i's rating of item k, \bar{R}_i is the mean of user i's ratings, likewise for j, and the summations and means are computed over the items k that both i and j have rated. Given an item k that user i has not rated, her rating of it is then predicted as

\hat{R}_{ik} = \bar{R}_i + \frac{1}{Z_i} \sum_{j \in N_i} W_{ij} (R_{jk} - \bar{R}_j) \qquad (7)

where Z_i = \sum_{j \in N_i} |W_{ij}| is a normalization factor, and N_i is the set of the n_i users most similar to i according to Equation 6 (her neighbors). In the limit, N_i can be the entire database of users, but for reasons of noise robustness and computational efficiency it is usually much smaller (e.g., n_i = 5). For neighbors that did not rate the item, R_{jk} is set to \bar{R}_j.
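A brief sketch of this prediction scheme (our own; ratings is assumed to be a dict of dicts, ratings[user][item] = value, and the neighbor set is assumed to be precomputed):

    import numpy as np

    def pearson(ratings, i, j):
        """Equation 6: correlation over the items both i and j rated."""
        common = list(set(ratings[i]) & set(ratings[j]))
        if len(common) < 2:
            return 0.0
        ri = np.array([ratings[i][k] for k in common])
        rj = np.array([ratings[j][k] for k in common])
        ri, rj = ri - ri.mean(), rj - rj.mean()
        denom = np.sqrt((ri ** 2).sum() * (rj ** 2).sum())
        return float((ri * rj).sum() / denom) if denom > 0 else 0.0

    def predict_rating(ratings, i, item, neighbors):
        """Equation 7: weighted average of mean-centered neighbor ratings."""
        mean_i = np.mean(list(ratings[i].values()))
        num, z = 0.0, 0.0
        for j in neighbors:
            w = pearson(ratings, i, j)
            mean_j = np.mean(list(ratings[j].values()))
            r_j = ratings[j].get(item, mean_j)   # unrated neighbor contributes 0
            num += w * (r_j - mean_j)
            z += abs(w)
        return mean_i + num / z if z > 0 else mean_i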
The key advantage of a collaborative filtering database as a source for mining a social network for viral marketing is that the mechanism by which individuals influence each other is known and well understood: it is the collaborative filtering algorithm itself. User i influences user j when j sees a recommendation that is partly the result of i's rating. Assuming i and j do not know each other in real life (which, given that they can be anywhere in the world, is likely to be true), there is no other way they can substantially influence each other. Obviously, a user is subject to many influences besides that of the collaborative filtering system (including the influence of people not on the system), but the uncertainty caused by those influences is encapsulated to a first degree of approximation in P(X_i \mid \hat{R}_{ik}), the probability that a user will purchase an item given the rating the system predicts for her. It is also reasonable to assume that an individual would not continue to use a collaborative filtering system if she did not find its recommendations useful, and therefore that there is a causal connection (rather than simply a correlation) between the recommendations received and the purchases made.
To extract a social network model from a collaborative filtering database, we view an item as a random sample from the space (X, Y), where Y is a set of properties of the item (assumed available), and X_i represents whether or not user i rated the item. For simplicity, we assume that if a user rates an item then she bought it, and vice-versa; removing this assumption would be straightforward, given the relevant data. The prior P(X_i) can then be estimated simply as the fraction of items rated by user i. The conditional probabilities P(Y_k \mid X_i) can be obtained by counting the number of occurrences of each value of Y_k (assumed discrete or pre-discretized) with each value of X_i. Estimating P(M_i \mid X_i) requires a data collection phase in which users to market to are selected at random and their responses are recorded (both when being marketed to and not). P(M_i \mid X_i) can be estimated individually for each user, or (requiring far less data) as the same for all users, as done in Chickering and Heckerman [8]. If the necessary data is not available, we propose setting P(M_i \mid X_i) using prior knowledge about the effectiveness of the type of marketing being considered, given any demographic information available about the users. (It is also advisable to test the sensitivity of the outcome to trying a range of values.)
The set of neighbors N_i for each i is the set of neighbors of the corresponding user in the collaborative filtering system. If the ratings are implicit (i.e., yes/no), a model for P(X_i \mid N_i) (e.g., a naive Bayes model, as we have assumed for P(Y_k \mid X_i)) can be fit directly to the observed X vectors. If explicit ratings are given (e.g., zero to five stars), then we know that X_i depends on N_i solely through \hat{R}_i, the predicted rating according to Equation 7 (for readability, we will omit the item indexes k). In other words, X_i is conditionally independent of N_i given \hat{R}_i. If the neighbors' ratings are known, \hat{R}_i is a deterministic function of N_i given by Equation 7, with X_j determining whether the contribution of the jth neighbor is R_j - \bar{R}_j or 0 (see discussion following Equation 7). If the ratings of some or all neighbors are unknown (i.e., the ratings that they would give if they were to rate the item), we can estimate them as their expected values given the item's attributes. In other words, the contribution of a neighbor with unknown rating will be E(R_j \mid Y) - \bar{R}_j, where E(R_j \mid Y) can be estimated using a naive Bayes model (assuming R_j only takes on a small number of different values, which is usually the case). Let \hat{R}_i^0 be the value of \hat{R}_i obtained in this way. Then, treating this as a deterministic value,

P(X_i \mid N_i, Y) = \int_{R_{min}}^{R_{max}} P(X_i \mid \hat{R}_i) \, P(\hat{R}_i \mid N_i, Y) \, d\hat{R}_i = P(X_i \mid \hat{R}_i^0)
All that remains is to estimate P(X_i \mid \hat{R}_i). This can be viewed as a univariate regression problem, with \hat{R}_i as the input and P(X_i \mid \hat{R}_i) as the output. The most appropriate functional form for this regression will depend on the observed data. In the experiments described below, we used a piecewise-linear model for P(X_i \mid \hat{R}_i), obtained by dividing \hat{R}_i's range into bins, computing the mean \hat{R}_i and P(X_i = 1) for each bin, and then estimating P(X_i \mid \hat{R}_i) for an arbitrary \hat{R}_i by interpolating linearly between the two nearest means. Given a small number of bins, this approach can fit a wide variety of observations relatively well, with little danger of overfitting.
Notice that the technical definition of a Markov random field requires that the neighborhood relation be symmetric (i.e., if i is a neighbor of j, then j is also a neighbor of i), but in a collaborative filtering system this may not be the case. The probabilistic model obtained from it in the way described will then be an instance of a dependency network, a generalization of Markov random fields recently proposed by Heckerman et al. [17]. Heckerman et al. show that Gibbs sampling applied to such a network defines a joint distribution from which all probabilities of interest can be computed. While in our experimental studies Gibbs sampling and relaxation labeling produced very similar results, the formal derivation of the properties of dependency networks under relaxation labeling is a matter for future research.
4. EMPIRICAL STUDY
We have applied the methodology described in the previous
sections to the problem of marketing motion pictures,
using the EachMovie collaborative filtering database (www.-
research.compaq.com/src/eachmovie/). EachMovie contains
2.8 million ratings of 1628 movies by 72916 users, gathered
between January 29, 1996 and September 15, 1997 by
the eponymous recommendation site, which was run by the
DEC (now Compaq) Systems Research Center. EachMovie
is publicly available, and has become a standard database
for evaluating collaborative filtering systems (e.g., Breese et
al. [3]). Motion picture marketing is an interesting application
for the techniques we propose because the success of a
movie is known to be strongly driven by word of mouth [12].
EachMovie is composed of three databases: one containing
the ratings, one containing demographic information
about the users (which we did not use), and one containing
information about the movies. The latter includes the
movie's title, studio, theater and video status (old or cur-
rent), theater and video release dates, and ten Boolean attributes
describing the movie's genre (action, animation,
art/foreign, classic, comedy, drama, family, horror, romance,
and thriller; a movie can have more than one genre). The
movie's URL in the Internet Movie Database (www.imdb.-
com) is also included. This could be used to augment the
movie description with attributes extracted from the IMDB;
we plan to do so in the future. The ratings database contains
an entry for each movie that each user rated, on a scale of
zero to ve stars, and the time and date on which the rating
was generated.
The collaborative filtering algorithm used in EachMovie
has not been published, but we will assume that the algorithm
described in the previous section is a reasonable
approximation to it. This assumption is supported by the
observation that, despite their variety in form, all the many
collaborative filtering algorithms proposed attempt to capture
essentially the same information (namely, correlations
between users).
The meaning of the variables in the EachMovie domain
is as follows: X i is whether person i saw the movie being
considered. Y contains the movie attributes. R i is the rating
(zero to five stars) given to the movie by person i. For simplicity, throughout this section we assume the \hat{R}_i's are centered at zero (i.e., \bar{R}_i has been subtracted from \hat{R}_i in Equation 7).
4.1 The Model
We used the ten Boolean movie
genre attributes. Thus P(Y \mid X_i) was in essence a model of a user's genre preferences, and during inference two movies with the same genre attributes were indistinguishable. The network consisted of all people who had rated at least ten movies, and whose ratings had non-zero standard deviation (otherwise they contained no useful information). Neighbor weights W_{ji} were determined using a modified Pearson correlation coefficient, which penalized the correlation by 0.05 for each movie less than ten that both X_i and X_j had rated. This correction is commonly used in collaborative filtering systems to avoid concluding that two users are very highly correlated simply because they rated very few movies in common, and by chance rated them similarly [18]. The neighbors of X_i were the X_j's for which W_{ji} was highest. With n_i = 5, a number we believe provides a reasonable tradeoff between model accuracy and speed, the average W_{ji} of neighbors was 0.91. Repeating the experiments described below with other values of n_i produced no significant change in model accuracy, and small improvements in profit. Interestingly, the network obtained in each case was completely connected (i.e., it contained no isolated subgraphs).
As discussed above, the calculation of P(X_i \mid X^k, Y, M) requires estimating P(X_i), P(Y_k \mid X_i), P(M_i \mid X_i), P(X_i \mid \hat{R}_i), and P(R_j \mid Y). P(X_i) is simply the fraction of movies X_i rated. We used a naive Bayes model for P(R_j \mid Y). P(Y_k \mid X_i), P(R_j \mid Y), and P(X_i) were all smoothed using an m-estimate [5] with m = 1 and the population average as the prior. We did not know the true values of P(M_i \mid X_i). We expected marketing to have a larger effect on a customer who was already inclined to see the movie, and thus we set the probabilities so as to obtain

P(X_i = 1 \mid M_i = 1, \cdot) = \alpha \, P(X_i = 1 \mid M_i = 0, \cdot) \qquad (9)

where \alpha > 1 is a parameter that we varied in the experiments described below.^2 As described in the previous section, P(X_i \mid \hat{R}_i) was modeled using a piecewise-linear function. We measured P(X_i \mid \hat{R}_i) for each of nine bins, whose boundaries were -5.0, -2.0, -1.0, -0.5, -0.1, 0.1, 0.5, 1.0, 2.0, and 5.0. Note that while R_i must be between 0 and 5, \hat{R}_i is a weighted sum of the neighbors' differences from their averages, and thus may range from -5 to 5. We also had a zero-width bin located at \hat{R}_i = 0. Movies were seen with low probability (1-5%), and thus there was a high probability that a movie had not been rated by any of X_i's neighbors. In the absence of a rating, a neighbor's contribution to \hat{R}_i is zero. 84% of the samples fell into this zero bin. Bin boundaries were chosen by examination of the distribution of data in the training set, shown in Figure 1. \hat{R}_i is unlikely to deviate far from 0, for the reasons given above. We used narrow bins near \hat{R}_i = 0 to obtain higher accuracy in this area, which contained a majority of the data (96.4% of the data fell between -0.5 and 0.5). To combat data sparseness, both P(X_i \mid \hat{R}_i) and the per-bin mean \hat{R}_i were smoothed for each bin using an m-estimate with m = 1 and the population average as the prior.

Figure 1: Empirical distribution of \hat{R}_i, and of X_i given \hat{R}_i.

^2 To fully specify P(M_i \mid X_i) we used the additional constraint that the values of ...; for the values of \alpha we used, it was always possible to satisfy Equation 9 and this constraint simultaneously.
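A rough sketch of this binned, piecewise-linear estimator (our own; for brevity the dedicated zero-width bin at R̂_i = 0 is folded into the [-0.1, 0.1) bin, and the same m-estimate smoothing toward the population average is applied to the per-bin probabilities):

    import numpy as np

    EDGES = [-5.0, -2.0, -1.0, -0.5, -0.1, 0.1, 0.5, 1.0, 2.0, 5.0]

    def fit_bins(r_hat, x, prior, m=1.0):
        """r_hat, x: arrays of training pairs; prior: population P(X_i = 1)."""
        centers, probs = [], []
        for lo, hi in zip(EDGES[:-1], EDGES[1:]):
            mask = (r_hat >= lo) & (r_hat < hi)
            n = int(mask.sum())
            probs.append((x[mask].sum() + m * prior) / (n + m))   # m-estimate
            centers.append(r_hat[mask].mean() if n else (lo + hi) / 2)
        return np.array(centers), np.array(probs)

    def p_buy(r, centers, probs):
        """Interpolate linearly between the two nearest bin means."""
        return float(np.interp(r, centers, probs))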
Initially, we expected P(X_i \mid \hat{R}_i) to increase monotonically with \hat{R}_i. The actual shape, shown in Figure 1, shows P(X_i \mid \hat{R}_i) increasing significantly away from 0 in either direction. This shape is due to a correlation between \hat{R}_i and the popularity of a movie: for a popular movie, \hat{R}_i is more likely to deviate further from zero and X_i is more likely to be 1. Note, however, that P(X_i \mid \hat{R}_i) is indeed monotonically increasing in the [-0.1, 0.1] interval, where the highest density of ratings is.
4.2 The Data
While the EachMovie database is large, it has problems
which had to be overcome. The movies in the database
which were in theaters before January 1996 were drawn
from a long time period, and so tended to be very well
known movies. Over 75% (2.2 million) of the ratings were
on these movies. In general, the later a movie was released,
the fewer ratings and thus the less information we had for
it. We divided the database into a training set consisting
of all ratings received through September 1, 1996, and a
test set consisting of all movies released between September
1, 1996 and December 31, 1996, with the ratings received for those movies any time between September 1, 1996 and
the end of the database. Because there was such a large difference in average movie popularity between the early movies and the later ones, we further divided the training set into two subsets: S_old, containing movies released before January 1996 (1.06 million votes), and S_recent, containing movies released between January and September 1996 (90k votes). The average movie viewership of S_old was 5.6%, versus 1.4% for S_recent. Since 92% of the training data was in S_old, we could not afford to ignore it. However, in terms of the probability that someone rates a movie, the test period could be expected to be much more similar to S_recent. Thus, we trained using all training data, then rescaled P(X_i) and P(X_i \mid \hat{R}_i) using S_recent, and smoothed these values using an m-estimate with m = 1 and the distribution on the full training set as the prior.
Many movies in the test set had very low probability (36% were viewed by 10 people or less, and 48% were viewed by 20 people or less, out of over 20748 people^3). Since it is not possible to model such low probability events with any reliability, we removed all movies which were viewed by fewer than 1% of the people. This left 737,579 votes over 462 movies for training, and 3912 votes over 12 movies for testing.

^3 This is the number of people left after we removed anyone who rated fewer than ten movies, rated movies only after September 1996, or gave the same rating to all movies.
The probabilities described above were learned using only these movies. However, because the EachMovie collaborative filtering system presumably used all movies,
we used all movies when simulating it (i.e., when computing
similarities (Equation 6), selecting neighbors, and predicting
ratings (Equation 7)).
A majority of the people in the EachMovie database provided ratings once, and never returned. These people affected the predicted ratings \hat{R}_i seen by users of EachMovie, but because they never returned to the system for queries, their movie viewing choices were not affected by their neighbors. We call these people inactive. A person was marked as inactive if there were more than a threshold number of days between her last rating and the end of the training period. In our tests, we used a threshold of 60 days, which resulted in 11163 inactive people. Inactive people could be marketed to, since they were presumably still watching movies; they were just not reporting ratings to EachMovie. If an inactive person was marketed to, she was assumed to have no effect on the rest of the network.
4.3 Inference and Search
Inference was performed by relaxation labeling, as described in Section 2. This involved iteratively re-estimating probabilities until they all converged to within a threshold \epsilon. We maintained a queue of nodes whose probabilities needed to be re-estimated, which initially contained all nodes in the network. Each X_i was removed from the queue in turn, and its probability was re-estimated using Equation 2. If P(X_i \mid X^k, Y, M) had changed by more than \epsilon, all nodes that X_i was a neighbor of that were not already in the queue were added to it. Note that the probabilities of nodes corresponding to inactive people only needed to be computed once, since they are independent of the rest of the network.
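A compact sketch of this queue-driven loop (ours; update stands in for Equation 2 applied to node i given the current marginals, and influences[i] lists the nodes that have i as a neighbor):

    from collections import deque

    def relaxation_labeling(marginals, influences, update, tol=1e-4):
        """Re-estimate marginals until no node changes by more than tol."""
        queue = deque(range(len(marginals)))
        queued = set(queue)
        while queue:
            i = queue.popleft()
            queued.discard(i)
            new_p = update(i, marginals)
            if abs(new_p - marginals[i]) > tol:
                marginals[i] = new_p
                for j in influences[i]:      # re-examine nodes influenced by i
                    if j not in queued:
                        queue.append(j)
                        queued.add(j)
        return marginals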
The computation of Equation 2 can be sped up by noting that, after factoring, all terms involving the Y_k's are constant throughout a run, and so these terms and their combinations only need to be computed once. Further, since in a single search step only one M_i changes, most of the results of one step can be reused in the next, greatly speeding up the search process. With these optimizations, we were able to measure the effect of over 10,000 single changes in M per second, on a 1 GHz Pentium III machine. In preliminary experiments, we found relaxation labeling carried out this way to be several orders of magnitude faster than Gibbs sampling; we expect that it would also be much faster than the more efficient version of Gibbs sampling proposed in Heckerman et al. [17].^4 The relaxation labeling process typically converged quite quickly; few nodes ever required more than a few updates.
4.4 Model Accuracy
To test the accuracy of our model, we computed the estimated probability for each person X_i, with M = M_0. We measured the correlation between this and the actual value of X_i in the test set, over all movies, over all people.^5 (Note that, since the comparison is with test set values, we did not expect to receive ratings from inactive people, and therefore set P(X_i) = 0 for them.) The resulting correlation was 0.18. Although smaller than desir-
able, this correlation is remarkably high considering that the
only input to the model was the movie's genre. We expect
the correlation would increase if a more informative set of
movie attributes Y were used.
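For concreteness, the accuracy measure above amounts to a simple correlation between predicted probabilities and observed 0/1 outcomes; a minimal sketch with made-up numbers:

import numpy as np

# Hedged sketch: correlate predicted probabilities with actual 0/1 outcomes
# over all (person, movie) pairs in the test set; the values are hypothetical.
predicted = np.array([0.12, 0.40, 0.05, 0.33, 0.20])  # model's P(X_i = 1)
actual    = np.array([0,    1,    0,    0,    1   ])  # did the person see the movie?
correlation = np.corrcoef(predicted, actual)[0, 1]
print(round(correlation, 2))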
4.5 Network Values
For the first movie in the test set ("Space Jam"), we measured the network value for all 9585 active people 6 in the following scenario (see Equations 4 and 9).
Figure 2 shows the 500
highest network values (out of 9585) in decreasing order.
The unit of value in this graph is the average revenue that
would be obtained by marketing to a customer in isolation,
without costs or discounts. Thus, a network value of 20 for a given customer implies that by marketing to her we are, in effect, also marketing to an additional 20 customers.
The scale of the graph depends on the marketing scenario (e.g., network values increase with α), but the shape generally remains the same. The figure shows that a few users have very high network value. This is the ideal situation for the type of targeted viral marketing we propose, since we can effectively market to many people while incurring only the expense of marketing to those few. A good customer to market to is one who: (1) is likely to give the product a high rating, (2) has a strong weight in determining the rating prediction for many of her neighbors, (3) has many neighbors who are easily influenced by the rating prediction they receive, (4) will have a high probability of purchasing the product, and thus will be likely to actually submit a rating that will affect her neighbors, and finally (5) has many neighbors with the same four characteristics outlined above,
4 In our experiments, one Gibbs cycle of sampling all the
nodes in the network took on the order of a fiftieth of a
second. The total runtime would be this value multiplied
by the number of sampling iterations desired and by the
number of search steps.
5 Simply measuring the predictive error rate would not be
very useful, because a very low error rate could be obtained
simply by predicting that no one sees the movie.
6 Inactive people always have a network value of zero.
Figure 2: Typical distribution of network values (normalized network value vs. rank).
and so on recursively. In the movie domain, these correspond
to finding a person who (1) will enjoy the movie, (2)
has many close friends, who are (3) easily swayed, (4) will
very likely see the movie if marketed to, and (5) has friends
whose friends also have these properties.
4.6 Marketing Experiments
We compared three marketing strategies: mass marketing,
traditional direct marketing, and the network-based marketing
method we proposed in Section 2. In mass marketing,
all customers were marketed to (M_i = 1 for all i). In direct marketing, a customer X_i was marketed to (M_i = 1) if and only if her expected lift in profit ELP_i was positive when computed ignoring network effects (i.e., using the network-less probabilities). For our approach, we compared the three approximation methods proposed in Section 2: single pass, greedy search and hill-climbing. Figure 3 compares these three search types and direct marketing on three different marketing scenarios. For all scenarios, revenue was normalized so that profit numbers are in units of number of movies seen. In the free movie and in the discounted movie scenarios, the offer reduced the revenue obtained from a customer who was marketed to. In both of these scenarios we assumed a cost of marketing of 10% of the revenue from a single sale (0.1). In the advertising scenario no discount was offered, and a lower cost of marketing (0.02) was assumed (corresponding, for example, to online marketing instead of physical mail). Notice that all the marketing actions considered were effectively in addition to the (presumably mass) marketing that was actually carried out for the movie. The average number of people who saw a movie given only this marketing (i.e., with no marketing action of ours) determines the baseline revenue; the baseline profit would be obtained by subtracting from this the (unknown) original costs. The correct α for each marketing scenario was unknown, so we present the results for a range of α values. We believe we have chosen plausible ranges, with a free movie providing more incentive than a discount, which in turn provides more incentive than simply advertising.
In all scenarios, mass marketing resulted in negative profits. Not surprisingly, it fared particularly poorly in the free and discounted movie scenarios, producing profits which ranged from −2057 to −2712. In the advertising scenario, mass marketing resulted in profits ranging from −143 to −381 (depending on the choice of α). In the case of a free movie offer, the profit from direct marketing could not be positive, since without network effects we were guaranteed to lose money on anyone who saw a movie for free. Figure 3 shows that our method was able to find profitable marketing opportunities that were missed by direct marketing. For the discounted movie, direct marketing actually resulted in a loss of profit. A customer that looked profitable on her own may actually have had a negative overall value. This situation demonstrates that not only can ignoring network effects cause missed marketing opportunities, but it can also make an unprofitable marketing action look profitable. In the advertising scenario, for small α our method increased profits only slightly, while direct marketing again reduced them. Both methods improved with increasing α, but our method consistently outperformed direct marketing.
As can be seen in Figure 3, greedy search produced results that were quite close to those of hill climbing. The average difference between greedy and hill-climbing profits (as a percentage of the latter) in the three marketing scenarios was 9.6%, 4.0%, and 0.0% respectively. However, as seen in Figure 3, the runtimes differed significantly, with hill-climbing time ranging from 4.6 minutes to 42.1 minutes while greedy-search time ranged from 3.8 to 5.5 minutes. The contrast was even more pronounced in the advertising scenario, where the profits found by the two methods were nearly identical, but hill climbing took 14 hours to complete, compared to greedy search's 6.7 minutes. Single-pass was the fastest method and was comparable in speed to direct marketing, but led to significantly lower profits in the discounted movie scenarios.
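The precise definitions of single pass, greedy search and hill-climbing are given in Section 2 of the paper; as a rough sketch only (not the authors' implementations), search loops of this general shape could look as follows, where `total_lift(M)` is a hypothetical callable returning the model's total expected lift in profit for a candidate marketing set M.

def single_pass(n, total_lift):
    # One sweep: include customer i if marketing to i alone beats the baseline.
    return {i for i in range(n) if total_lift({i}) > total_lift(set())}

def greedy(n, total_lift):
    # Repeatedly add the customer whose inclusion most increases total lift.
    M, current = set(), total_lift(set())
    while True:
        gains = {i: total_lift(M | {i}) for i in range(n) if i not in M}
        if not gains:
            return M
        best = max(gains, key=gains.get)
        if gains[best] <= current:
            return M
        M.add(best)
        current = gains[best]

def hill_climb(n, total_lift):
    # Like greedy, but each step may also drop a customer if that helps more.
    M, current = set(), total_lift(set())
    while True:
        candidates = {i: total_lift(M ^ {i}) for i in range(n)}
        best = max(candidates, key=candidates.get)
        if candidates[best] <= current:
            return M
        M ^= {best}
        current = candidates[best]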
The lift in profit was considerably higher if all users were assumed to be active. In the free movie scenario, the lift in profit using greedy search was 4.7 times greater than when the network had inactive nodes. In the discount and advertising scenarios the ratio was 4.1 and 1.8, respectively. This was attributable to the fact that the more inactive neighbors a node had, the less responsive it could be to the network. From the point of view of an e-merchant applying our approach, this suggests modifying the collaborative filtering system to only assign active users as neighbors.
5. RELATED WORK
Social networks have been an object of study for some
time, but previous work within sociology and statistics has suffered from a lack of data and focused almost exclusively on very small networks, typically in the low tens of individuals [41]. Interestingly, the Google search engine [4] and Kleinberg's (1998) HITS algorithm for finding hubs and authorities on the Web are based on social network ideas. The success of these approaches, and the discovery of widespread network topologies with nontrivial properties [42], has led to a flurry of research on modeling the Web as a semi-random graph (e.g., Kumar et al. [28], Barabasi et al. [1]). Some of
this work might be applicable in our context.
In retrospect, the earliest sign of the potential of viral
marketing was perhaps the classic paper by Milgram [31]
estimating that every person in the world is only six edges
away from every other, if an edge between i and j means "i knows j." Schwartz and Wood [37] mined social relationships
from email logs. The ReferralWeb project mined a social
network from a wide variety of publicly-available online
information [24], and used it to help individuals find experts
who could answer their questions. The COBOT project
Figure 3: Profits and runtimes obtained using different marketing strategies. (Four panels: Free Movie, Advertising, and Discounted Movie profits, and Discounted Movie runtimes, each plotted against alpha for hill-climbing, greedy, single-pass and direct marketing.)
gathered social statistics from participant interactions in the
LambdaMoo MUD, but did not explicitly construct a social
network from them [21]. A Markov random field formulation similar to Equation 2 was used by Chakrabarti et al. [6] for classification of Web pages, with pages corresponding to customers, hyperlinks between pages corresponding to influence between customers, and the bag of words in the page corresponding to properties of the product. Neville and Jensen [32] proposed a simple iterative algorithm for labeling nodes in social networks, based on the naive Bayes classifier. Cook and Holder [9] developed a system for mining graph-based data. Flake et al. [13] used graph algorithms to mine communities from the Web (defined as sets of sites that have more links to each other than to non-members).
Several researchers have studied the problem of estimating
a customer's lifetime value from data [22]. This line of re-search
generally focuses on variables like an individual's expected
tenure as a customer [30] and future frequency of purchases
[15]. Customer networks have received some attention
in the marketing literature [20]. Most of these studies
are purely qualitative; where data sets appear, they are very
small, and used only for descriptive purposes. Krackhardt
[27] proposes a very simple model for optimizing which customers
to offer a free sample of a product to. The model only considers the impact on the customer's immediate friends, ignores the effect of product characteristics, assumes the relevant probabilities are the same for all customers, and is only applied to a made-up network with seven nodes.
Collaborative filtering systems proposed in the literature include GroupLens [35], PHOAKS [40], Siteseer [36], and others. A list of collaborative filtering systems, projects and related resources can be found at www.sims.berkeley.edu/resources/collab/.
6. FUTURE WORK
The type of data mining proposed here opens up a rich field of directions for future research. In this section we briefly mention some of the main ones.
Although the network we have mined is large by the standards
of previous research, much larger ones can be en-
visioned. Scaling up may be helped by developing search
methods specic to the problem, to replace the generic ones
we used here. Segmenting a network into more tractable
parts with minimal loss of profit may also be important. Flake et al. [13] provide a potential way of doing this. A related approach would be to mine subnetworks with high profit potential embedded in larger ones. Recent work on mining significant Web subgraphs such as bipartite cores, cliques and webrings (e.g., [28]) provides a starting point. More generally, we would like to develop a characterization of network types with respect to the profit that can be obtained in them using an optimal marketing strategy. This would, for example, help a company to better gauge the profit potential of a market before entering (or attempting
to create) it.
In this paper we mined a network from a single source
(a collaborative filtering database). In general, multiple sources of relevant information will be available; the ReferralWeb project [24] exemplified their use. Methods for combining diverse information into a sound representation of the underlying influence patterns are thus an important area for research. In particular, detecting the presence of causal relations between individuals (as opposed to purely correlational ones) is key. While mining causal knowledge from observational databases is difficult, there has been much recent
progress [10, 39].
We have also assumed so far that the relevant social net-work
is completely known. In many (or most) applications
this will not be the case. For example, a long-distance telephone
company may know the pattern of telephone calls
among its customers, but not among its non-customers. How-
ever, it may be able to make good use of connections between
customers and non-customers, or to take advantage
of information about former customers. A relevant question
is thus: what can be inferred from a (possibly biased)
sample of nodes and their neighbors in a network? At the
extreme where no detailed information about individual interactions
is available, our method could be extended to
apply to networks where nodes are groups of similar or related
customers, and edges correspond to influence among groups.
Another promising research direction is towards more detailed
node models and multiple types of relations between
nodes. A theoretical framework for this could be provided
by the probabilistic relational models of Friedman et al. [14].
We would also like to extend our approach to consider multiple
types of marketing actions and product-design decisions,
and to multi-player markets (i.e., markets where the actions
of competitors must also be taken into account, leading to
a game-like search process).
This paper considered making marketing decisions at a specific point in time. A more sophisticated alternative would be to plan a marketing strategy by explicitly simulating the sequential adoption of a product by customers given different interventions at different times, and adapting
the strategy as new data on customer response arrives. A
further time-dependent aspect of the problem is that social
networks are not static objects; they evolve, and particularly
on the Internet can do so quite rapidly. Some of the largest
opportunities may lie in modeling and taking advantage of
this evolution.
Once markets are viewed as social networks, the inadequacy
of random sampling for pilot tests of products subject
to strong network effects (e.g., smart cards, video on
demand) becomes clear. Developing a better methodology
for studies of this type could help avoid some expensive failures.
Many e-commerce sites already routinely use collaborative
filtering. Given that the infrastructure for data gathering and for inexpensive execution of marketing actions (e.g., making specific offers to specific customers when they visit the site) is already in place, these would appear to be good candidates for a real-world test of our method. The greatest potential, however, may lie in knowledge-sharing and customer review sites like epinions.com, because the interaction between users is richer and stronger there. For example, it may be profitable for a company to offer its products at a loss to influential contributors to such sites. Our method is also potentially applicable beyond marketing, to promoting any type of social change for which the relevant network of influence can be mined from available data. The spread
of online interaction creates unprecedented opportunities for
the study of social information processing; our work is a step
towards better exploiting this new wealth of information.
7. CONCLUSION
This paper proposed the application of data mining to viral
marketing. Viewing customers as nodes in a social network, we modeled their influence on each other as a Markov random field. We developed methods for mining social network models from collaborative filtering databases, and for using these models to optimize marketing decisions. An empirical study using the EachMovie collaborative filtering database confirmed the promise of this approach.
8. REFERENCES
--R
Spatial interaction and the statistical analysis of lattice systems.
Empirical analysis of predictive algorithms for collaborative filtering.
The anatomy of a large-scale hypertextual Web search engine
Estimating probabilities: A crucial task in machine learning.
Enhanced hypertext categorization using hyperlinks.
Markov Random Fields: Theory and Application.
A decision theoretic approach to targeted advertising.
A simple constraint-based algorithm for efficiently mining observational databases for causal relationships.
On the optimality of the simple Bayesian classifier under zero-one loss.
The buzz on buzz.
Value Miner: A data mining environment for the calculation of the customer lifetime value with application to the automotive industry.
Stochastic relaxation, Gibbs distributions, and the Bayesian restoration of images.
Dependency networks for inference, collaborative filtering, and data visualization.
An algorithmic framework for performing collaborative filtering.
The Complete Database Marketer: Second-Generation Strategies and Techniques for Tapping the Power of your Customer Database
Networks in Marketing.
Strategic application of customer lifetime value in direct marketing.
What exactly is viral marketing?
Combining social networks and collaborative filtering.
Markov Random Fields and Their Applications.
Authoritative sources in a hyperlinked environment.
Structural leverage in marketing.
Extracting large-scale knowledge bases from the Web
Data mining for direct marketing: Problems and solutions.
Statistics and data mining techniques for lifetime value modeling.
The small world problem.
Iterative classification in relational data.
A continuous relaxation labeling algorithm for Markov random fields.
Estimating campaign benefits and modeling lift.
GroupLens: An open architecture for collaborative filtering of netnews.
Personalized navigation for the web.
Discovering shared interests using graph analysis.
Information Rules: A Strategic Guide to the Network Economy.
Scalable techniques for mining causal structures.
PHOAKS: A system for sharing recommendations.
Social Network Analysis: Methods and Applications.
Collective dynamics of 'small-world' networks.
--TR
Discovering shared interests using graph analysis
GroupLens
PHOAKS
Referral Web
Siteseer
On the Optimality of the Simple Bayesian Classifier under Zero-One Loss
Enhanced hypertext categorization using hyperlinks
Information rules
The anatomy of a large-scale hypertextual Web search engine
Statistics and data mining techniques for lifetime value modeling
Estimating campaign benefits and modeling lift
An algorithmic framework for performing collaborative filtering
Authoritative sources in a hyperlinked environment
Efficient identification of Web communities
A Simple Constraint-Based Algorithm for Efficiently Mining Observational Databases for Causal Relationships
Scalable Techniques for Mining Causal Structures
Graph-Based Data Mining
Value Miner
Extracting Large-Scale Knowledge Bases from the Web
Learning Probabilistic Relational Models
A Decision Theoretic Approach to Targeted Advertising
Cobot in LambdaMOO
Dependency networks for inference, collaborative filtering, and data visualization
--CTR
Steffen Staab , Pedro Domingos , Peter Mika , Jennifer Golbeck , Li Ding , Tim Finin , Anupam Joshi , Andrzej Nowak , Robin R. Vallacher, Social Networks Applied, IEEE Intelligent Systems, v.20 n.1, p.80-93, January 2005
Elchanan Mossel , Sebastien Roch, On the submodularity of influence in social networks, Proceedings of the thirty-ninth annual ACM symposium on Theory of computing, June 11-13, 2007, San Diego, California, USA
Muhammad A. Ahmad , Ankur Teredesai, Modeling spread of ideas in online social networks, Proceedings of the fifth Australasian conference on Data mining and analystics, p.185-190, November 29-30, 2006, Sydney, Australia
David Jensen , Jennifer Neville , Brian Gallagher, Why collective inference improves relational classification, Proceedings of the tenth ACM SIGKDD international conference on Knowledge discovery and data mining, August 22-25, 2004, Seattle, WA, USA
Louis Licamele , Mustafa Bilgic , Lise Getoor , Nick Roussopoulos, Capital and benefit in social networks, Proceedings of the 3rd international workshop on Link discovery, p.44-51, August 21-25, 2005, Chicago, Illinois
Pnar Yolum , Munindar P. Singh, Dynamic communities in referral networks, Web Intelligence and Agent System, v.1 n.2, p.105-116, April
Andrew Y. Wu , Michael Garland , Jiawei Han, Mining scale-free networks using geodesic clustering, Proceedings of the tenth ACM SIGKDD international conference on Knowledge discovery and data mining, August 22-25, 2004, Seattle, WA, USA
Paat Rusmevichientong , Shenghuo Zhu , David Selinger, Identifying early buyers from purchase data, Proceedings of the tenth ACM SIGKDD international conference on Knowledge discovery and data mining, August 22-25, 2004, Seattle, WA, USA
Ding Zhou , Eren Manavoglu , Jia Li , C. Lee Giles , Hongyuan Zha, Probabilistic models for discovering e-communities, Proceedings of the 15th international conference on World Wide Web, May 23-26, 2006, Edinburgh, Scotland
Esteban Arcaute , Adam Kirsch , Ravi Kumar , David Liben-Nowell , Sergei Vassilvitskii, On threshold behavior in query incentive networks, Proceedings of the 8th ACM conference on Electronic commerce, June 11-15, 2007, San Diego, California, USA
Pinar Yolum , Munindar P. Singh, Dynamic communities in referral networks, Web Intelligence and Agent System, v.1 n.2, p.105-116, December
John Galloway , Simeon J. Simoff, Network data mining: methods and techniques for discovering deep linkage between attributes, Proceedings of the 3rd Asia-Pacific conference on Conceptual modelling, p.21-32, January 16-19, 2006, Hobart, Australia
Matthew Richardson , Pedro Domingos, Mining knowledge-sharing sites for viral marketing, Proceedings of the eighth ACM SIGKDD international conference on Knowledge discovery and data mining, July 23-26, 2002, Edmonton, Alberta, Canada
Christos Faloutsos , Kevin S. McCurley , Andrew Tomkins, Fast discovery of connection subgraphs, Proceedings of the tenth ACM SIGKDD international conference on Knowledge discovery and data mining, August 22-25, 2004, Seattle, WA, USA
Dan Cosley , Shyong K. Lam , Istvan Albert , Joseph A. Konstan , John Riedl, Is seeing believing?: how recommender system interfaces affect users' opinions, Proceedings of the SIGCHI conference on Human factors in computing systems, April 05-10, 2003, Ft. Lauderdale, Florida, USA
Andrew Fast , David Jensen , Brian Neil Levine, Creating social networks to improve peer-to-peer networking, Proceeding of the eleventh ACM SIGKDD international conference on Knowledge discovery in data mining, August 21-24, 2005, Chicago, Illinois, USA
Christopher R. Palmer , Phillip B. Gibbons , Christos Faloutsos, ANF: a fast and scalable tool for data mining in massive graphs, Proceedings of the eighth ACM SIGKDD international conference on Knowledge discovery and data mining, July 23-26, 2002, Edmonton, Alberta, Canada
Andrew Fast , David Jensen , Brian Neil Levine, Creating social networks to improve peer-to-peer networking, Proceeding of the eleventh ACM SIGKDD international conference on Knowledge discovery in data mining, August 21-24, 2005, Chicago, Illinois, USA
Jon Kleinberg, Distributed social systems, Proceedings of the twenty-fifth annual ACM symposium on Principles of distributed computing, p.5-6, July 23-26, 2006, Denver, Colorado, USA
Deng Cai , Zheng Shao , Xiaofei He , Xifeng Yan , Jiawei Han, Mining hidden community in heterogeneous social networks, Proceedings of the 3rd international workshop on Link discovery, p.58-65, August 21-25, 2005, Chicago, Illinois
Ralitsa Angelova , Gerhard Weikum, Graph-based text classification: learn from your neighbors, Proceedings of the 29th annual international ACM SIGIR conference on Research and development in information retrieval, August 06-11, 2006, Seattle, Washington, USA
Rakesh Agrawal , Sridhar Rajagopalan , Ramakrishnan Srikant , Yirong Xu, Mining newsgroups using networks arising from social behavior, Proceedings of the 12th international conference on World Wide Web, May 20-24, 2003, Budapest, Hungary
Perlich , Foster Provost, Distribution-based aggregation for relational learning with identifier attributes, Machine Learning, v.62 n.1-2, p.65-105, February 2006
David Kempe , Jon Kleinberg , Éva Tardos, Maximizing the spread of influence through a social network, Proceedings of the ninth ACM SIGKDD international conference on Knowledge discovery and data mining, August 24-27, 2003, Washington, D.C.
YongSeog Kim, Toward a successful CRM: variable selection, sampling, and ensemble, Decision Support Systems, v.41 n.2, p.542-553, January 2006
Jennifer Neville , David Jensen, Relational Dependency Networks, The Journal of Machine Learning Research, 8, p.653-692, 5/1/2007
Pedro Domingos, Prospects and challenges for multi-relational data mining, ACM SIGKDD Explorations Newsletter, v.5 n.1, July
Xiaodan Song , Belle L. Tseng , Ching-Yung Lin , Ming-Ting Sun, Personalized recommendation driven by information flow, Proceedings of the 29th annual international ACM SIGIR conference on Research and development in information retrieval, August 06-11, 2006, Seattle, Washington, USA
Lise Getoor, Link mining: a new data mining challenge, ACM SIGKDD Explorations Newsletter, v.5 n.1, July
YongSeog Kim , W. Nick Street, An intelligent system for customer targeting: a data mining approach, Decision Support Systems, v.37 n.2, p.215-228, May 2004
Lars Backstrom , Dan Huttenlocher , Jon Kleinberg , Xiangyang Lan, Group formation in large social networks: membership, growth, and evolution, Proceedings of the 12th ACM SIGKDD international conference on Knowledge discovery and data mining, August 20-23, 2006, Philadelphia, PA, USA
Jonathan L. Herlocker , Joseph A. Konstan , Loren G. Terveen , John T. Riedl, Evaluating collaborative filtering recommender systems, ACM Transactions on Information Systems (TOIS), v.22 n.1, p.5-53, January 2004
Sofus A. Macskassy , Foster Provost, Classification in Networked Data: A Toolkit and a Univariate Case Study, The Journal of Machine Learning Research, 8, p.935-983, 5/1/2007
Xiaodan Song , Yun Chi , Koji Hino , Belle L. Tseng, Information flow modeling based on diffusion rate for prediction and ranking, Proceedings of the 16th international conference on World Wide Web, May 08-12, 2007, Banff, Alberta, Canada
Deepayan Chakrabarti , Christos Faloutsos, Graph mining: Laws, generators, and algorithms, ACM Computing Surveys (CSUR), v.38 n.1, p.2-es, 2006 | collaborative filtering;direct marketing;viral marketing;markov random fields;social networks;dependency networks |
502527 | Proximal support vector machine classifiers. | Instead of a standard support vector machine (SVM) that classifies points by assigning them to one of two disjoint half-spaces, points are classified by assigning them to the closest of two parallel planes (in input or feature space) that are pushed apart as far as possible. This formulation, which can also be interpreted as regularized least squares and considered in the much more general context of regularized networks [8, 9], leads to an extremely fast and simple algorithm for generating a linear or nonlinear classifier that merely requires the solution of a single system of linear equations. In contrast, standard SVMs solve a quadratic or a linear program that require considerably longer computational time. Computational results on publicly available datasets indicate that the proposed proximal SVM classifier has comparable test set correctness to that of standard SVM classifiers, but with considerably faster computational time that can be an order of magnitude faster. The linear proximal SVM can easily handle large datasets as indicated by the classification of a 2 million point 10-attribute set in 20.8 seconds. All computational results are based on 6 lines of MATLAB code. | INTRODUCTION
Standard support vector machines (SVMs) [36, 6, 3, 5,
20], which are powerful tools for data classification, classify
points by assigning them to one of two disjoint halfspaces.
These halfspaces are either in the original input space of
the problem for linear classifiers, or in a higher dimensional
feature space for nonlinear classifiers [36, 6, 20]. Such standard
SVMs require the solution of either a quadratic or a
linear program which require specialized codes such as [7].
In contrast we propose here a proximal SVM (PSVM) which
classifies points depending on proximity to one of two parallel
planes that are pushed as far apart as possible. In fact our
geometrically motivated proximal formulation has been considered
in the much more general context of regularization
networks [8, 9]. These results, which give extensive theoretical
and statistical justification for the proximal approach,
do not contain the extensive computational implementation
and results given here. Furthermore, our specific formulation
leads to a strongly convex objective function which is
not always the case in [8, 9]. Strong convexity plays a key
role in the simple proximal code provided here as well the
very fast computational times obtained. Obtaining a linear
or nonlinear PSVM classifier requires nothing more sophisticated
than solving a single system of linear equations. Efficient
and fast linear equation solvers are freely available [1]
or are part of standard commercial packages such as MATLAB
[26], and can solve large systems very fast.
We briefly summarize the contents of the paper now. In
Section 2 we introduce the proximal linear support vector
machine, give the Linear Proximal Algorithm 2.1 and an explicit
expression for the leave-one-out-correctness in terms of
problem data (16). In Section 3 we introduce the proximal
kernel-based nonlinear support vector machine, the corresponding
nonlinear classifier (28) and the Nonlinear Proximal
Algorithm 3.1. Section 4 contains many numerical testing
results for both the linear and nonlinear classifiers based
on an extremely simple MATLAB [26] code of 6 lines for
both the linear and nonlinear PSVM. The results surpass
all other algorithms compared to in speed and give very
comparable testing set correctness.
A word about our notation and background material. All vectors will be column vectors unless transposed to a row vector by a prime superscript '. For a vector x in the n-dimensional real space R^n, the step function step(x) is defined as (step(x))_i = 1 if x_i > 0 and (step(x))_i = 0 if x_i <= 0, for i = 1, . . . , n. The scalar (inner) product of two vectors x and y in the n-dimensional real space R^n will be denoted by x'y, and the 2-norm of x will be denoted by ||x||. For a matrix A ∈ R^{m×n}, A_i is the ith row of A, which is a row vector in R^n, while A_{.j} is the jth column of A. A column vector of ones of arbitrary dimension will be denoted by e. For A ∈ R^{m×n} and B ∈ R^{n×k}, the kernel K(A,B) maps R^{m×n} × R^{n×k} into R^{m×k}. In particular, if x and y are column vectors in R^n, then K(x', y) is a real number, K(x', A') is a row vector in R^m, and K(A, A') is an m×m matrix. The base of the natural logarithm will be denoted by ε. We will make use of the following Gaussian kernel [36, 6, 20] that is frequently used in the SVM literature:

K(A,B)_{ij} = ε^{−μ ||A_i' − B_{.j}||^2},  i = 1, . . . , m,  j = 1, . . . , k,   (1)

where A ∈ R^{m×n}, B ∈ R^{n×k} and μ is a positive constant.
The identity matrix of arbitrary dimension will be denoted
by I.
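A minimal sketch of evaluating the Gaussian kernel (1), written here in NumPy rather than the paper's MATLAB, with μ supplied by the caller:

import numpy as np

def gaussian_kernel(A, B, mu):
    # K(A,B)_ij = exp(-mu * ||A_i' - B_.j||^2) for A of shape (m, n), B of shape (n, k).
    # A sketch, not the paper's code.
    A = np.asarray(A, dtype=float)
    B = np.asarray(B, dtype=float)
    sq = ((A[:, None, :] - B.T[None, :, :]) ** 2).sum(axis=2)  # squared row/column distances
    return np.exp(-mu * sq)

K = gaussian_kernel(np.random.rand(5, 3), np.random.rand(3, 4), mu=0.5)
print(K.shape)  # (5, 4)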
2. THE LINEAR PROXIMAL SUPPORT VECTOR MACHINE
We consider the problem, depicted in Figure 1, of classifying m points in the n-dimensional real space R^n, represented by the m × n matrix A, according to membership of each point A_i in the class A+ or A− as specified by a given m × m diagonal matrix D with plus ones or minus ones along its diagonal. For this problem, the standard support vector machine with a linear kernel [35, 6] is given by the following quadratic program with parameter ν > 0:

min_{(w,γ,y) ∈ R^{n+1+m}}  ν e'y + (1/2) w'w
s.t.  D(Aw − eγ) + y ≥ e,  y ≥ 0.   (2)
As depicted in Figure 1, w is the normal to the bounding planes:

x'w = γ + 1,
x'w = γ − 1,   (3)

that bound most of the sets A+ and A− respectively. The constant γ determines their location relative to the origin. When the two classes are strictly linearly separable, that is when the error variable y = 0 in (2) (which is not the case shown in Figure 1), the plane x'w = γ + 1 bounds all of the class A+ points, while the plane x'w = γ − 1 bounds all of the class A− points as follows:

A_i w ≥ γ + 1, for D_ii = +1,
A_i w ≤ γ − 1, for D_ii = −1.   (4)

Consequently, the plane:

x'w = γ,   (5)

midway between the bounding planes (3), is a separating plane that separates A+ from A− completely if y = 0, and only approximately otherwise, as depicted in Figure 1. The quadratic term in (2), which is twice the reciprocal of the square of the 2-norm distance 2/||w|| between the two bounding planes of (3) (see Figure 1), maximizes this distance, often called the "margin". Maximizing the margin enhances the generalization capability of a support vector machine [35, 6]. If the classes are linearly inseparable, which is the case shown in Figure 1, then the two planes bound the two classes with a "soft margin" (i.e. bound approximately with some error) determined by the nonnegative error variable y, that is:

A_i w + y_i ≥ γ + 1, for D_ii = +1,
A_i w − y_i ≤ γ − 1, for D_ii = −1.   (6)

The 1-norm of the error variable y is minimized parametrically with weight ν in (2), resulting in an approximate separating plane (5) as depicted in Figure 1. This plane acts as a linear classifier as follows:

x'w − γ > 0, then x ∈ A+;
x'w − γ < 0, then x ∈ A−;
x'w − γ = 0, then x ∈ A+ or x ∈ A−.   (7)
Our point of departure is similar to that of [23, 24], where
the optimization problem (2) is replaced by the following
problem:

min_{(w,γ,y) ∈ R^{n+1+m}}  (ν/2) ||y||^2 + (1/2) (w'w + γ^2)
s.t.  D(Aw − eγ) + y ≥ e.   (8)
Note that no explicit nonnegativity constraint is needed on
y, because if any component y i is negative then the objective
function can be decreased by setting that y_i = 0 while still satisfying the corresponding inequality constraint. Note
further that the 2-norm of the error vector y is minimized
instead of the 1-norm, and the margin between the bounding
planes is maximized with respect to both orientation w
and relative location to the origin γ. Extensive computational
experience, as in [22, 23, 24, 18, 17] indicates that
this formulation is just as good as the classical formulation
(2) with some added advantages such as strong convexity of
the objective function. Our key idea in this present paper is
to make a very simple, but very fundamental change in the
formulation (8), namely replace the inequality constraint by
an equality as follows:
min_{(w,γ,y) ∈ R^{n+1+m}}  (ν/2) ||y||^2 + (1/2) (w'w + γ^2)
s.t.  D(Aw − eγ) + y = e.   (9)
This modification, even though very simple, changes the nature
of optimization problem significantly. In fact it turns
out that we can write an explicit exact solution to the problem
in terms of the problem data as we show below, whereas
it is impossible to do that in the previous formulations because
of their combinatorial nature. Geometrically the formulation
is depicted in Figure 2 which can be interpreted
as follows. The planes x'w − γ = ±1 are no longer bounding planes, but can be thought of as "proximal" planes, around which the points of each class are clustered and which are pushed as far apart as possible by the term (w'w + γ^2) in the objective function, which is nothing other than the reciprocal of the 2-norm distance squared between the two planes in the (w, γ) space of R^{n+1}.
Figure 1: The Standard Support Vector Machine Classifier in the w-space of R^n: the approximately bounding planes of equation (3) with a soft (i.e. with some error) margin 2/||w||, and the plane of equation (5) approximately separating A+ from A−.
Figure 2: The Proximal Support Vector Machine Classifier in the (w, γ)-space of R^{n+1}: the planes x'w − γ = ±1 around which points of the sets A+ and A− cluster and which are pushed apart by the optimization problem (9).
We note that our formulation (9) can be also interpreted
as a regularized least squares solution [34] of the system of
linear equations D(Aw − eγ) = e, that is finding an approximate solution (w, γ) with least 2-norm. Similarly the standard SVM formulation (2) can be interpreted, by using linear programming perturbation theory [21], as a least 2-norm approximate solution to the system of linear inequalities D(Aw − eγ) ≥ e. Neither of these interpretations,
however, is based on the idea of maximizing the margin,
the distance between the parallel planes (3), which is a key
feature of support vector machines [36, 6, 20].
The Karush-Kuhn-Tucker (KKT) necessary and sufficient optimality conditions [19, p. 112] for our equality constrained problem (9) are obtained by setting equal to zero the gradients with respect to (w, γ, y, u) of the Lagrangian:

L(w, γ, y, u) = (ν/2)||y||^2 + (1/2)(w'w + γ^2) − u'(D(Aw − eγ) + y − e).   (10)

Here, u is the Lagrange multiplier associated with the equality constraint of (9). Setting the gradients of L equal to zero gives the following KKT optimality conditions:

w − A'Du = 0,
γ + e'Du = 0,
νy − u = 0,
D(Aw − eγ) + y − e = 0.   (11)

The first three equations of (11) give the following expressions for the original problem variables (w, γ, y) in terms of the Lagrange multiplier u:

w = A'Du,  γ = −e'Du,  y = u/ν.   (12)

Substituting these expressions in the last equality of (11) allows us to obtain an explicit expression for u in terms of the problem data A and D as follows:

u = (I/ν + D(AA' + ee')D)^{−1} e = (I/ν + HH')^{−1} e,   (13)

where H is defined as:

H = D [A  −e].   (14)

Having u from (13), the explicit solution (w, γ, y) to our problem (9) is given by (12). Because the solution (13) for u entails the inversion of a possibly massive m × m matrix, we make immediate use of the Sherman-Morrison-Woodbury formula [14, p. 51] for matrix inversion, as was done in [23, 10, 24], which results in:

u = ν (e − H (I/ν + H'H)^{−1} H'e).   (15)

This expression, as well as another simple expression (29) for (w, γ) below, involves the inversion of a much smaller dimensional matrix of order (n + 1) × (n + 1), and completely solves the classification problem. For concreteness we explicitly state our very simple algorithm.
Algorithm 2.1. Linear Proximal SVM Given m data
points in R^n represented by the m × n matrix A and a diagonal matrix D of ±1 labels denoting the class of each row of A, we generate the linear classifier (7) as follows:
(i) Define H by (14) where e is an m × 1 vector of ones, and compute u by (15) for some positive ν. Typically ν is chosen by means of a tuning (validating) set.
(ii) Determine (w, γ) from (12).
(iii) Classify a new x by using (7).
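A compact NumPy sketch of Algorithm 2.1, following the reconstructed expressions (14), (15) and (12) above (an illustration, not the authors' MATLAB code):

import numpy as np

def linear_psvm(A, d, nu):
    # A: (m, n) data matrix; d: length-m vector of +/-1 labels (diagonal of D);
    # nu: positive parameter. Returns (w, gamma) of the classifier sign(x'w - gamma).
    m, n = A.shape
    e = np.ones(m)
    H = d[:, None] * np.hstack([A, -np.ones((m, 1))])   # H = D [A  -e], eq. (14)
    r = np.linalg.solve(np.eye(n + 1) / nu + H.T @ H, H.T @ e)
    u = nu * (e - H @ r)                                 # eq. (15), Sherman-Morrison-Woodbury form
    w = A.T @ (d * u)                                    # eq. (12)
    gamma = -e @ (d * u)
    return w, gamma

# Tiny usage example with synthetic separable data.
rng = np.random.default_rng(0)
A = np.vstack([rng.normal(2, 1, (20, 2)), rng.normal(-2, 1, (20, 2))])
d = np.hstack([np.ones(20), -np.ones(20)])
w, gamma = linear_psvm(A, d, nu=1.0)
pred = np.sign(A @ w - gamma)
print((pred == d).mean())  # training accuracy of the proximal classifier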
For standard SVMs, support vectors consist of all data
points which are the complement of the data points that
can be dropped from the problem without changing the separating
plane (5) [36, 20]. Thus, for the standard SVM formulation
(2), support vectors correspond to data points for
which the Lagrange multipliers are nonzero because, solving
(2) with these data points only will give the same answer as
solving it with the entire dataset. In our proximal formulation
however, the Lagrange multipliers u are merely a multiple of the error vector y: u = νy, as given by (12). Consequently, because all components of y are typically nonzero, since none of the data points usually lie on the proximal planes x'w − γ = ±1, the concept of support vectors needs to be modified as follows. Because (w, γ) ∈ R^{n+1} are given as linear functions of y by (11), it follows by the basis theorem for linear equations [13, Theorem 2.11][25, Lemma 2.1], applied to the last equality of (11) for a fixed value of the error vector y, that at most n + 1 linearly independent data points are needed to determine the basic nonzero components of (w, γ) ∈ R^{n+1}. Guided by this fact that only a small number of data points can characterize any specific (w, γ), we define the concept of approximate support vectors as those data points A_i for which the error y_i is smaller in absolute value than a small tolerance. We typically pick the tolerance small enough such that about 1% of the data are such support vectors. Re-solving our proximal SVM problem (9) with these data points only, and with a ν adjusted (typically upwards) by a tuning set, gives test set correctness that is essentially identical to that obtained by using the entire dataset.
We note that with explicit expressions for (w, γ, y, u) in terms
of problem data given by (12) and (15), we are able to get
also an explicit expression for the leave-one-out-correctness
looc [32], that is the fraction of correctly classified data
points if each point in turn is left out of the PSVM formulation
and then is classified by the classifier (7). Omitting
some algebra, we have the following leave-one-out-correctness:
where the "step" function is defined in the Introduction, and
I
Here, H is defined by (14), H_i denotes row i of H, while H^i denotes H with row H_i removed from H, and u^i is defined by (15) with H replaced by H^i. Similarly, D_i denotes row i of D.
We extend now some of the above results to nonlinear
proximal support vector machines.
3. NONLINEAR PROXIMAL SUPPORT VECTOR
MACHINES
To obtain our nonlinear proximal classifier we modify our
equality constrained optimization problem (9) as in [20, 18]
by replacing the primal variable w by its dual equivalent w = A'Du from (12) to obtain:

min_{(u,γ,y) ∈ R^{m+1+m}}  (ν/2) ||y||^2 + (1/2) (u'u + γ^2)
s.t.  D(AA'Du − eγ) + y = e,   (18)

where the objective function has also been modified to minimize weighted 2-norm sums of the problem variables (u, γ, y). If we now replace the linear kernel AA' by a nonlinear kernel K(A, A'), as defined in the Introduction, we obtain:

min_{(u,γ,y) ∈ R^{m+1+m}}  (ν/2) ||y||^2 + (1/2) (u'u + γ^2)
s.t.  D(K(A,A')Du − eγ) + y = e.   (19)

Using the shorthand notation:

K = K(A, A'),   (20)

the Lagrangian for (19) can be written similarly to (10) as:

L(u, γ, y, v) = (ν/2)||y||^2 + (1/2)(u'u + γ^2) − v'(D(KDu − eγ) + y − e).   (21)

Here, v ∈ R^m is the Lagrange multiplier associated with the equality constraint of (19). Setting the gradients of this Lagrangian with respect to (u, γ, y, v) equal to zero gives the following KKT optimality conditions:

u − DK'Dv = 0,
γ + e'Dv = 0,
νy − v = 0,
D(KDu − eγ) + y − e = 0.   (22)

The first three equations of (22) give the following expressions for (u, γ, y) in terms of the Lagrange multiplier v:

u = DK'Dv,  γ = −e'Dv,  y = v/ν.   (23)

Substituting these expressions in the last equality of (22) gives an explicit expression for v in terms of the problem data A and D as follows:

v = (I/ν + D(KK' + ee')D)^{−1} e = (I/ν + GG')^{−1} e,   (24)

where G is defined as:

G = D [K  −e].   (25)
Note the similarity between G above and H as defined in
(14). This similarity allows us to obtain G from the expression for H by replacing A by K in (14). This can be
taken advantage of in the MATLAB code 4.1 of Algorithm
2.1 which is written for the linear classifier (7). Thus, to generate
a nonlinear classifier by Algorithm 3.1 merely replace
A by K in the algorithm.
Having the solution v from (24), the solution (u, γ, y) to our problem (19) is given by (23). Unlike the situation with linear kernels, the Sherman-Morrison-Woodbury formula is useless here because the kernel matrix K(A, A') ∈ R^{m×m} is square, so the inversion must take place in a potentially high-dimensional R^m. However, the reduced kernel techniques of [17] can be utilized to reduce the m × m dimensionality of the kernel K(A, A') to a much smaller m × m̄ dimensionality of a rectangular kernel K(A, Ā'), where m̄ is as small as 1% of m and Ā is an m̄ × n random submatrix of A. Such reduced kernels not only make most large problems tractable, but they also often lead to improved generalization by avoiding data overfitting. The effectiveness of these reduced kernels is demonstrated by means of a numerical test problem in the next section of
the paper.
The nonlinear separating surface corresponding to the kernel K(A, A') [20, Equation (8.1)] can be deduced from the linear separating surface (5) and w = A'Du as follows:

x'A'Du − γ = 0.   (26)

If we replace x'A' by the corresponding kernel expression K(x', A'), and substitute for u and γ from (23), we obtain the nonlinear separating surface:

K(x', A')K(A, A')'Dv + e'Dv = 0.   (27)

The corresponding nonlinear classifier for this nonlinear separating surface is then:

K(x', A')K(A, A')'Dv + e'Dv > 0, then x ∈ A+;
K(x', A')K(A, A')'Dv + e'Dv < 0, then x ∈ A−;
K(x', A')K(A, A')'Dv + e'Dv = 0, then x ∈ A+ or x ∈ A−.   (28)
We now give an explicit statement of our nonlinear classifier
algorithm.
Algorithm 3.1. Nonlinear Proximal SVM Given m
data points in R^n represented by the m × n matrix A and a diagonal matrix D of ±1 labels denoting the class of each row of A, we generate the nonlinear classifier (28) as follows:
(i) Choose a kernel function K(A, A'), typically the Gaussian kernel (1).
(ii) Define G by (25) where e is an m × 1 vector of ones. Compute v by (24) for some positive ν. (Typically ν is chosen by means of a tuning set.)
(iii) The nonlinear surface (27) with the computed v constitutes the nonlinear classifier (28) for classifying a new point x.
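A corresponding NumPy sketch of Algorithm 3.1 with the Gaussian kernel (1) (again an illustration, not the authors' code); the returned coefficient vector plays the role that w plays in the linear case, so a new point x is classified by the sign of K(x', A') coef − γ:

import numpy as np

def gaussian_kernel_matrix(A, B_rows, mu):
    # Entries exp(-mu * ||A_i - B_j||^2) for rows A_i of A and rows B_j of B_rows.
    sq = ((A[:, None, :] - B_rows[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-mu * sq)

def nonlinear_psvm(A, d, nu, mu):
    # Returns (coef, gamma); classify a new x via sign(K(x', A') coef - gamma).
    m = A.shape[0]
    e = np.ones(m)
    K = gaussian_kernel_matrix(A, A, mu)                 # K(A, A'), eq. (1)
    G = d[:, None] * np.hstack([K, -np.ones((m, 1))])    # G = D [K  -e], eq. (25)
    v = np.linalg.solve(np.eye(m) / nu + G @ G.T, e)     # eq. (24)
    coef = K.T @ (d * v)                                 # plays the role of w in the linear code
    gamma = -e @ (d * v)
    return coef, gamma

def classify(X_new, A, coef, gamma, mu):
    # X_new: 2-D array of new points, one per row.
    return np.sign(gaussian_kernel_matrix(X_new, A, mu) @ coef - gamma)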
The nonlinear classifier (28), which is a direct generalization
of the linear classifier (7), works quite effectively as indicated
by the numerical examples presented in the next section.
4. NUMERICAL IMPLEMENTATION AND COMPARISON
Most of our computations were performed on the University
of Wisconsin Data Mining Institute "locop1" machine,
which utilizes a 400 Mhz Pentium II and allows a maximum
of 2 Gigabytes of memory for each process. This computer
runs on Windows NT server 4.0, with MATLAB 6 installed.
Even though "locop1" is a multiprocessor machine, only one
processor was used for all the experiments since MATLAB
is a single threaded application and does not distribute any
load across processors [26]. Our algorithms require the solution
of a single square system of linear equations of the
size of the number of input attributes n in the linear case,
and of the size of the number of data points m in the non-linear
case. When using a rectangular kernel [18], the size
of the problem can be reduced from m to k with k << m
for the nonlinear case. Because of the simplicity of our algo-
rithm, we give below the actual MATLAB implementation
that was used in our experiments and which consists of 6
lines of native MATLAB code:
Figure 3: The spiral dataset consisting of 97 black points and 97 white points intertwined as two spirals in 2-dimensional space. PSVM with a Gaussian kernel generated a sharp nonlinear spiral-shaped separating surface.
Code 4.1. PSVM MATLAB Code
function
% PSVM:linear and nonlinear classification
Note that the corresponding command line in the MATLAB code above computes directly the factor (I/ν + H'H)^{−1}H'e of (15). This is much more economical and stable than computing the inverse (I/ν + H'H)^{−1} explicitly and then multiplying it by H'e. The calculations H'e and A's involve
the transpose of typically large matrices which can be time
consuming. Instead, we calculate r=sum(H)' and w=(s'*A)'
respectively, the transposes of these vectors.
We further note that the MATLAB code above not only
works for a linear classifier, but also for a nonlinear classifier
as well. In the nonlinear case, the matrix K(A, A') is used as input instead of A [20, Equations (1), (10)], and the pair (u, γ) is returned instead of (w, γ). The nonlinear separating surface is then given by (27) as K(x', A')u − γ = 0.
Rectangular kernels [17] can also be handled by this code. The input then is the rectangular matrix K(A, Ā') ∈ R^{m×k}, with k << m, and the given output is the pair (ū, γ), where D̄ and ū are the D and u associated with the reduced matrix Ā.
A final note regarding a further simplification of PSVM.
If we substitute the expression (15) for u in (12), we obtain
after some algebra the following simple expression for w and
γ in terms of the problem data:

[w; γ] = (I/ν + E'E)^{−1} E'De,  where E = [A  −e].   (29)

This direct explicit solution of our PSVM problem can be written as a single line of MATLAB code, which also does not perform the explicit matrix inversion (I/ν + E'E)^{−1}, and is slightly faster than the above MATLAB code.
Here, according to MATLAB commands, diag(D) is an m × 1
vector generated from the diagonal of the matrix D. Computational
testing results using this one-line MATLAB code
are slightly better than those obtained with Code 4.1
and are the ones reported in the tables below. We comment
further that the solution (29) can also be obtained directly
from (9) by using the equality constraint to eliminate y from
the problem and solving the resulting unconstrained minimization
problem in the variables w and γ by setting to zero the gradients with respect to w and γ. We turn now to our
computations.
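For completeness, a NumPy sketch of the direct solution (29), equivalent to the earlier sketch of Algorithm 2.1 (and not the paper's one-line MATLAB code):

import numpy as np

def psvm_direct(A, d, nu):
    # Direct solution (29): [w; gamma] = (I/nu + E'E)^{-1} E' D e, with E = [A  -e].
    m, n = A.shape
    E = np.hstack([A, -np.ones((m, 1))])
    rhs = E.T @ d                     # E' D e = E' d, since D e = d
    sol = np.linalg.solve(np.eye(n + 1) / nu + E.T @ E, rhs)
    return sol[:n], sol[n]            # (w, gamma)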
The datasets used for our numerical tests were the following
. Seven publicly available datasets from the UCI Machine
Learning Repository [28]: WPBC, Ionosphere,
Cleveland Heart, Pima Indians, BUPA Liver, Mush-
room, Tic-Tac-Toe.
. The Census dataset is a version of the US Census Bureau
"Adult" dataset, which is publicly available from
the Silicon Graphics website [4].
. The Galaxy Dim dataset used in galaxy discrimination
with neural networks from [30]
. Two large datasets (2 million points and 10 attributes)
created using David Musicant's NDC Data Generator
[29].
. The Spiral dataset proposed by Alexis Wieland of the
MITRE Corporation and available from the CMU Artificial
Intelligence Repository [37].
We outline our computational results now in five groups
as follows.
1. Table 1: Comparison of seven different methods on the Adult dataset. In this experiment we compared the performance of seven different methods for linear classification on different sized versions of
the Adult dataset. Reported results on the SOR [22],
SMO [31] and SVM light [16] are from [22]. Results for
LSVM [24] results were computed here using "locop1",
whereas SSVM [18] and RLP [2] are from [18]. The
SMO experiments were run on a 266 MHz Pentium II
processor under Windows NT 4 using Microsoft's Visual
C++ 5.0 compiler. The SOR experiments were
run on a 200 MHz Pentium Pro with 64 megabytes
of RAM, also under Windows NT 4 and using Visual
C++ 5.0. The SVM light experiments were run on the
same hardware as that for SOR, but under the Solaris
5.6 operating system. Bold type indicates the
best result and a dash (-) indicates that the results
were not available from [22]. Although the timing comparisons
are approximate because of the different machines
used, they do indicate that PSVM has a distinct
edge in speed, e.g. solving the largest problem in 7.4
seconds, which is much faster than any other method.
Times and ten-fold testing correctness are shown in Table 1. Times are for the ten folds.
2. Table 4: Comparative performances of LSVM [24] and PSVM on a large dataset. Two large datasets consisting of 2 million points and 10 attributes were created using the NDC Data Generator
[29]. One of them is called NDC-easy because it
is highly linearly separable (around 90%). The other
one is called NDC-hard since it has linear separability
of around 70%. As is shown in Table 4 the linear classifiers
obtained using both methods performed almost
identically. Despite the 2 million size of the datasets,
PSVM solved the problems in about 20 seconds each
compared to LSVM's times of over 650 seconds. In
contrast, SVM light [16] failed on this problem [24].
3. Table 3: Comparison of PSVM, SSVM, LSVM and SVM light, using a Linear Classifier.
In this experiment we compared four methods: PSVM,
In this experiment we compared four methods: PSVM,
SSVM, LSVM and SVM light on seven publicly available
datasets from UCI Machine Learning Repository
[28] and [30]. As shown in Table 3, the correctness of
the four methods were very similar but the execution
time including ten-fold cross validation for PSVM was
smaller by as much as one order of magnitude or more
than the other three methods tested. Since LSVM,
SSVM and PSVM are all based on similar formulations
of the classification problem, the same value of ν was used for all of them. For SVM light the trade-off between training error and margin is represented by a parameter C. The value of C was chosen by tuning.
A paired t-test [27] at 95% confidence level was performed
to compare the performance of PSVM and the
other algorithms tested. The p-values obtained show
that there is no significant difference between PSVM
and the other methods tested.
4. Figure 3: PSVM on the Spiral Dataset.
We used a Gaussian kernel in order to classify the spiral
dataset. This dataset consisting of 194 black and
white points intertwined in the shape of a spiral is a
synthetic dataset [37]. However, it apparently is a difficult test case for data mining algorithms and is known
to give neural networks severe problems [15]. In con-
trast, a sharp separation was obtained using PSVM as
can be seen in Figure 3.
5. Table 2: Nonlinear Classifier Comparison using PSVM, SSVM and LSVM.
For this experiment we chose four datasets from the
UCI Machine Learning Repository for which it is known
that a nonlinear classifier performs significantly better
than a linear classifier. We used PSVM, SSVM and
LSVM in order to find a Gaussian-kernel-based non-linear
classifier to classify the data. In all datasets
tested, the three methods performed similarly as far
as ten-fold cross validation is concerned. However, execution
time of PSVM was much smaller than that
of the other two methods. Note that for the mushroom dataset, which consists of 8124 points, the square 8124 × 8124 kernel matrix does not fit into memory. In order to address this problem, we used a rectangular kernel K(A, Ā') with Ā a 215-row random submatrix of A instead, as described in [17]. In general, our algorithm performed particularly well with a rectangular kernel since the system solved is of size k × k, with k << m, where k is the much smaller number of rows of Ā. In contrast, with a full square kernel matrix the system solved is of size m × m. A paired t-test [27] at 95%
confidence level was performed to compare the performance
of PSVM and the other algorithms tested. The
p-values obtained show that there is no significant difference
between PSVM and the other methods tested
as far as ten-fold testing correctness is concerned.
5. CONCLUSION AND FUTURE WORK
We have proposed an extremely simple procedure for generating
linear and nonlinear classifiers based on proximity to
one of two parallel planes that are pushed as far apart as pos-
sible. This procedure, a proximal support vector machine
(PSVM), requires nothing more sophisticated than solving
a simple nonsingular system of linear equations, for either a
linear or nonlinear classifier. In contrast, standard support
vector machine classifiers require a more costly solution of a
linear or quadratic program. For a linear classifier, all that
is needed by PSVM is the inversion of a small matrix of the
order of the input space dimension, typically of the order of
100 or less, even if there are millions of data points to clas-
sify. For a nonlinear classifier, a linear system of equations
of the order of the number of data points needs to be solved.
This allows us to easily classify datasets with as many as a
few thousand of points. For larger datasets, data selection
and reduction methods such as [11, 17, 12] can be utilized as
indicated by some of our numerical results and will be the
subject of future work. Our computational results demonstrate
that PSVM classifiers obtain test set correctness statistically comparable to that of standard SVM classifiers
at a fraction of the time, sometimes an order of magnitude
less.
Another avenue for future research is that of incremental
classification for large datasets. This appears particularly
promising in view of the very simple explicit solutions (29) and (24) for the linear and nonlinear PSVM classifiers
that can be updated incrementally as new data points come
streaming in.
To sum up, the principal contribution of this work is a very efficient classifier that requires no specialized software. PSVM can be easily incorporated into all sorts of data mining applications that require a fast, simple and effective classifier.
Acknowledgements
The research described in this Data Mining Institute Report
01-02, February 2001, was supported by National Science
Foundation Grants CCR-9729842 and CDA-9623632, by Air
Force Office of Scientific Research Grant F49620-00-1-0085
and by the Microsoft Corporation. We are grateful to Professor
C.-J. Lin of National Taiwan University who pointed
out reference [33], upon reading the original version of this
paper. Least squares are also used in [33] to construct an
SVM, but with the explicit requirement of Mercer's positive
definiteness condition [35], which is not needed here. Fur-
thermore, the objective function of the quadratic program
of [33] is not strongly convex like ours. This important feature
of PSVM influences its speed as evidenced by the many
numerical comparisons given here but not in [33].
6. REFERENCES
--R
LAPACK User's Guide.
Robust linear programming discrimination of two linearly inseparable sets.
Massive data discrimination via linear support vector machines.
US Census Bureau.
A tutorial on support vector machines for pattern recognition.
Learning from Data - Concepts
CPLEX Optimization Inc.
Regularization networks and support vector machines.
Regularization networks and support vector machines.
Interior point methods for massive support vector machines.
Data selection for support vector machine classification.
The Theory of Linear Economic Models.
Matrix Computations.
Data mining with sparse grids.
Making large-scale support vector machine learning practical
RSVM: Reduced support vector machines.
SSVM: A smooth support vector machine.
Nonlinear Programming.
Generalized support vector machines.
Nonlinear perturbation of linear programs.
Successive overrelaxation for support vector machines.
Active support vector machine classification.
Lagrangian support vector machines.
Lipschitz continuity of solutions of linear inequalities
The MathWorks
Machine Learning.
UCI repository of machine learning databases
NDC: normally distributed clustered datasets
Automated star/galaxy discrimination with neural networks.
Sequential minimal optimization: A fast algorithm for training support vector machines.
Advances in Large Margin Classifiers.
Least squares support vector machine classifiers.
Solutions of Ill-Posed Problems
The Nature of Statistical Learning Theory.
The Nature of Statistical Learning Theory.
Twin spiral dataset.
| support vector machines;data classification;linear equations |